| field | dtype | min | max |
| --- | --- | --- | --- |
| forum_id | stringlengths | 9 | 20 |
| forum_title | stringlengths | 3 | 179 |
| forum_authors | sequencelengths | 0 | 82 |
| forum_abstract | stringlengths | 1 | 3.52k |
| forum_keywords | sequencelengths | 1 | 29 |
| forum_decision | stringclasses | 22 values | |
| forum_pdf_url | stringlengths | 39 | 50 |
| forum_url | stringlengths | 41 | 52 |
| venue | stringclasses | 46 values | |
| year | stringdate | 2013-01-01 00:00:00 | 2025-01-01 00:00:00 |
| reviews | sequence | | |
8EaDOGMPUL
PSHuman: Photorealistic Single-view Human Reconstruction using Cross-Scale Diffusion
[ "Peng Li", "Wangguandong Zheng", "Yuan Liu", "Tao Yu", "Yangguang Li", "Xingqun Qi", "Xiaowei Chi", "Siyu Xia", "Yan-Pei Cao", "Wei Xue", "Wenhan Luo", "Yike Guo" ]
Detailed and photorealistic 3D human modeling is essential for various applications and has seen tremendous progress. However, full-body reconstruction from a monocular RGB image remains challenging due to the ill-posed nature of the problem and sophisticated clothing topology with self-occlusions. In this paper, we propose **PSHuman**, a novel framework that explicitly reconstructs human meshes utilizing priors from the multiview diffusion model. It is found that directly applying multiview diffusion on single-view human images leads to severe geometric distortions, especially on generated faces. To address this, we propose a cross-scale diffusion that models the joint probability distribution of global full-body shape and local facial characteristics, enabling detailed and identity-preserved novel-view generation without any geometric distortion. Moreover, to enhance cross-view body shape consistency of varied human poses, we condition the generative model on parametric models like SMPL-X, which provide body priors and prevent unnatural views inconsistent with human anatomy. Leveraging the generated multi-view normal and color images, we present SMPLX-initialized explicit human carving to recover realistic textured human meshes efficiently. Extensive experimental results and quantitative evaluations on the CAPE and THuman2.1 datasets demonstrate PSHuman's superiority in geometry details, texture fidelity, and generalization capability.
[ "Human recontruction", "Cross-scale diffusion", "Generative model" ]
https://openreview.net/pdf?id=8EaDOGMPUL
https://openreview.net/forum?id=8EaDOGMPUL
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yHFvQ2WvQX", "OUtY2eCSsZ", "NZTzupkcZx", "LCl0k5qg1z", "KyJLqrrlbo" ], "note_type": [ "official_review", "official_review", "comment", "official_review", "official_review" ], "note_created": [ 1730809671525, 1730495499406, 1731599939934, 1730443651929, 1730682225829 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3490/Reviewer_m1YH" ], [ "ICLR.cc/2025/Conference/Submission3490/Reviewer_D5kG" ], [ "ICLR.cc/2025/Conference/Submission3490/Authors" ], [ "ICLR.cc/2025/Conference/Submission3490/Reviewer_t5gK" ], [ "ICLR.cc/2025/Conference/Submission3490/Reviewer_QuWN" ] ], "structured_content_str": [ "{\"summary\": \"Reconstructing 3D human model from a single image is a long-standing problem. This is a very challenging task. For existing methods, they are still very difficult to recover detailed geometry especially for facial regions and garment wrinkles. To my best knowledge, this work is the first one that enables high-fidelity reconstruction of human face. To do so, they proposed several novel designs.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Overall, the work is well-motivated and the paper is well-presented.\", \"As reported in the figures, the results are really visually good.\", \"All the method designs seem reasonable. Including joint body-face diffusion module for high-fidelity face recon, SMPL-X guided multi-view diffusion, and the SMPL-X conditioned human mesh carving.\"], \"weaknesses\": \"I still have some concerns on the experiments:\\n1) Lacking an ablative analysis. The current paper discussed the effectiveness of the proposed CSD(cross-scale diffussion). However, the compared method is only removing the locally enhanced model and just using the global fusion branch. 
To me, there is another baseline, which is to keep the local branch and only discard the noise blending part, with a follow-up fusion part to fuse the output of the local part with the output of the global branch. \\n\\n2) Lacking quantitative analysis for many ablation studies. It is good to see many important things are discussed in the ablation study part, including the SMPL-X condition, CSD, and the mesh carving module. However, only some visual results on a small set are shown. \\n\\n3) For the experiment on robustness to SMPL-X estimation. Currently, only random noise with a variance of 0.05 is added. Although this simply follows SIFU, more settings are encouraged to be included, because this work heavily relies on the estimated SMPL-X, and the SOTA estimation methods are, as is known, not that accurate. More experiments are needed to test the robustness. Also, adding extra noise on the face part is needed to check the impact on the local prediction part.\\n\\n4) Lacking detailed discussions on the failure cases. As mentioned, the result quality relies on the accuracy of the SMPL-X estimation. I am curious, if the SMPL-X is of low accuracy, what does the resulting mesh look like? For appearance, it is also commonly known that texture fusion is very challenging. Thus, are there cases where seams between different views appear? More discussions are needed. \\n\\nAlthough there are many issues, I still appreciate the SOTA results and think the paper is valuable to this area.\", \"questions\": \"Q1: As known, the optimization of the mesh, starting from SMPL-X to the target one, especially for loose garments, is very challenging. I am very curious about the robustness of the proposed method. The authors are suggested to visualize the intermediate optimization stages for justification, like the second example in Fig. 7.\", \"q2\": \"It is mentioned that an optimization-based texture fusion is conducted to solve the cross-view inconsistency; however, it lacks details. 
How is the optimization formulated? What are the variables and what are the energy terms?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes a single-view 3D human reconstruction method with higher fidelity and faster inference speed than prior works by leveraging multi-view diffusion models. The method utilizes a prior SMPL-X mesh and a latent diffusion model to generate multi-view consistent human renderings that are then used to refine the SMPL-X mesh. They showcase results on the THuman and CustomHumans datasets and show competitive results.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Overall, the paper highlights the benefits of multi-view diffusion models for view-consistent realistic renderings of digital humans. By combining it with a strong prior like SMPL-X, they are able to refine the mesh representation to recover high-quality 3D reconstructions just from a single view. Additionally, they showcase how a single diffusion model can be used to generate both highly detailed body information and high-fidelity face details using a noise blending module, which is a common failure mode for prior methods. Finally, this method performs the entire reconstruction within a minute, which is much faster than prior approaches.\", \"weaknesses\": [\"Central claim is not supported: One of the central claims of the paper is that prior work cannot recover high-fidelity face details while this method can. But the paper fails to support this claim through quantitative evaluations that purely focus on the enhancements in face quality. Metrics in Tab. 1, 2, 3 entangle both full-body and face reconstruction quality.\", \"Quality of results: The overall results for this method do not look much better compared to ECON. 
Disregarding pose estimation, the high-frequency details either seem very exaggerated or similar to ECON's method for the body. Additionally, it is not clear if the method can handle self-occluding views since none of the figures show results for occluded regions. Finally, since SMPL-X is a strong prior that this method leverages, it would be good to consider adding the pure SMPL-X mesh as a baseline for comparison.\", \"Vague writing: Overall, there needs to be more attention to the draft. There are a lot of inconsistencies, errors, and missing details, which confuse the reader and fall short of communicating the core message.\"], \"questions\": [\"In Fig. 1 Rows 1, 2, 4, are the hands replaced with the SMPL-X hands? It looks like they were grafted on top of the reconstruction, making it look unnatural. Further, L317 mentions the method has an \\\"option\\\" of doing it. It would be better to clarify this in the image caption, if performed.\", \"In Fig. 4 caption,\", \"it mentions that the diffusion model generates \\\"six views of global full-body images and local face images\\\". But the figure just shows one face image as output. This is not clarified in the method.\", \"For the face diffusion, how is the SMPL-X face image cropped? This is not mentioned anywhere in the paper.\", \"For the input to the face diffusion model, what is concatenated with the SMPL-X face image? There is just one arrow pointing to the concat operation.\", \"What is the weight w (L244) used for training and inference? Is it a learned value or is it just a binary mask as mentioned in L251? There is no reference to this in the text.\", \"In Eq. 5, what is the orientation of the first body view? What is the reason behind blending just the \\\"first\\\" body view and none of the other views? Additionally, how are the 6 views chosen? 
Is it with respect to some canonical orientation of the person, and how would the method ensure that orientation?\", \"For Table 1, what are the model inputs when not using the SMPL-X prior? And how is the mesh carving performed without it?\", \"The paper is riddled with typos, grammatical errors and unnatural-sounding sentences:\", \"L80 remain -> retain\", \"L192 Draw -> Drawing\", \"L299 pixie -> pixel\", \"L430 imagen -> imagine\", \"L448 art -> arm\", \"L798 pixie -> pixel\", \"Placement of Fig. 3 is a bit premature since it's not referenced until Page 5.\", \"L742 why is hand pose estimation done? Also in L317, what is the reason for substituting it with the SMPL-X result? Does the method perform worse on hands?\", \"Is the evaluation protocol mentioned in L745 borrowed from a previous paper? The sparse number of views might be insufficient to characterize the quality of the reconstructed texture color.\", \"In Fig. 12, why are the color reconstructions horizontally flipped w.r.t. the input image?\", \"Please specify the model implementation details in the main paper.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No ethical concerns.\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces a single-view human reconstruction approach structured in two stages. In the first stage, multi-view color and normal maps are generated using diffusion models. The process includes a dedicated diffusion model specifically for the face region, whose intermediate features are integrated into a second diffusion model for the full body. In the second stage, the generated color and normal maps from multiple views are aggregated through optimization. 
The SMPL template is iteratively optimized to fit observations from all views, leading to the final reconstructed output.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The use of a separate diffusion model for the face allows for detailed and clear facial reconstruction, improving the quality of the facial features in the final model.\\n2. The approach shows performance improvements over existing single-view human reconstruction methods.\", \"weaknesses\": \"1. The approach follows previous works that utilize diffusion models to generate multi-view images as an initial step [1][2][3]. Specifically, in [3], a SMPL-rendering guided control net is also proposed. The main innovation here\\u2014a separate diffusion model for the face region\\u2014offers improvement but is incremental.\\n\\n2. An alternative, potentially simpler, method for integrating generated face and body views could be: during the \\\"explicit human carving\\\", optimize shape and color with respect to all 12 views, with some masking approach to optimize the face region based only on the generated face views. Would this work?\\n\\n3. Is a single diffusion network sufficient for both face and full-body multi-view generation? Or do they have to be two different models?\\n\\n4. The explanation of the diffusion network is unclear. How is the network conditioned on the input image? By concatenating with noise or by cross-attention? How do you initialize the diffusion network? Is it trained from scratch? How consistent are the generated views? I assume that without pose information, the generated poses could be slightly off.\\n\\n[1] Zero-1-to-3: Zero-shot One Image to 3D Object\\n\\n[2] Instant3D: Fast Text-to-3D with Sparse-view Generation and Large Reconstruction Model\\n\\n[3] SiTH: Single-view Textured Human Reconstruction with Image-Conditioned Diffusion\", \"questions\": \"The design of the joint denoising diffusion network requires more justification to clarify its advantages. 
Specifically, additional details are needed to explain why this joint approach was chosen over simpler alternatives, such as direct optimization techniques. It would be helpful if the authors could address the concerns raised in Weaknesses 2, 3, and 4.\\n\\nAdditionally, I recommend that the authors revise the paper to include more detailed explanations of these design choices, such as how the diffusion networks are employed.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces PSHuman, a diffusion-based method for single-view 3D human reconstruction that generates detailed and photorealistic models. It uses a cross-scale diffusion model for high-fidelity face details and incorporates SMPL-X for pose guidance. PSHuman efficiently produces textured meshes with improved geometric accuracy.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Cross-scale diffusion for enhanced facial details.\\n\\nSMPL-X conditioning for better pose representation. \\n\\nEfficient generation of detailed 3D human models.\", \"weaknesses\": \"1.\\tWhat if the generated multi-views are not accurate? Will the reconstructed body be affected?\\n2.\\tHow much time does it take to reconstruct a human, and how does the speed compare with existing methods like ICON[A], HiLo[B], and D-IF[C]?\\n3.\\tMore baseline methods, such as HiLo[B] and D-IF[C], should be considered to fully demonstrate the effectiveness of the proposed method.\\n4.\\tI recommend that the author provide the six views generated by the proposed diffusion model with respect to Figure 7. It would be more straightforward to understand the effectiveness of the proposed method.\\n\\n**Reference**\\n\\n[A] Xiu, Yuliang, et al. \\\"Icon: Implicit clothed humans obtained from normals.\\\" 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 
IEEE, 2022.\\n\\n[B] Yang, Yifan, et al. \\\"HiLo: Detailed and Robust 3D Clothed Human Reconstruction with High-and Low-Frequency Information of Parametric Models.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\\n\\n[C] Yang, Xueting, et al. \\\"D-if: Uncertainty-aware human digitization via implicit distribution field.\\\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.\", \"questions\": \"Please refer to the weakness part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
8EM1A6qfX5
Unearthing Large Scale Domain-Specific Knowledge from Public Corpora
[ "Zhaoye Fei", "Yunfan Shao", "Linyang Li", "Zhiyuan Zeng", "Conghui He", "Hang Yan", "Dahua Lin", "Xipeng Qiu" ]
Large language models (LLMs) have demonstrated remarkable potential in various tasks; however, there remains a significant lack of open-source models and data for specific domains. Previous work has primarily focused on manually specifying resources and collecting high-quality data for specific domains, which is extremely time-consuming and labor-intensive. To address this limitation, we introduce large models into the data collection pipeline to guide the generation of domain-specific information and retrieve relevant data from Common Crawl (CC), a large public corpus. We refer to this approach as Retrieve-from-CC. It not only collects data related to domain-specific knowledge but also mines data containing potential reasoning procedures from the public corpus. By applying this method, we have collected a knowledge domain-related dataset named Retrieve-Pile, which covers four main domains, including the sciences, humanities, and other categories. Analysis of Retrieve-Pile shows that Retrieve-from-CC can effectively retrieve relevant data from the covered knowledge domains and significantly improves performance in tests of mathematical and knowledge-related reasoning abilities.
[ "domain-specific knowledge", "data collection", "large language model" ]
Reject
https://openreview.net/pdf?id=8EM1A6qfX5
https://openreview.net/forum?id=8EM1A6qfX5
ICLR.cc/2025/Conference
2025
{ "note_id": [ "uD6KCb8RIP", "sZ9ZcNwb3K", "giUGyNb3Cy", "fWqhHRy9Hu", "ZfvL8CVYVM", "X1SwLzXJdb", "VlLkcmc0Qf", "UzMWtBamYN", "S3P9aqgoly", "LtkvKkCioA", "LUNGGjeVAJ", "9wiG7RvbM1", "87LIFpVrOu", "1LMw9XXZBi" ], "note_type": [ "official_comment", "official_review", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_review", "official_comment", "official_comment" ], "note_created": [ 1732203329166, 1730618980166, 1730504534244, 1732378641615, 1737523778259, 1732202784306, 1732300232176, 1732203256493, 1732878207985, 1730723780685, 1734603434046, 1730710476228, 1732223905047, 1732203146102 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6586/Authors" ], [ "ICLR.cc/2025/Conference/Submission6586/Reviewer_U334" ], [ "ICLR.cc/2025/Conference/Submission6586/Reviewer_cx4Y" ], [ "ICLR.cc/2025/Conference/Submission6586/Reviewer_cx4Y" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6586/Authors" ], [ "ICLR.cc/2025/Conference/Submission6586/Authors" ], [ "ICLR.cc/2025/Conference/Submission6586/Authors" ], [ "ICLR.cc/2025/Conference/Submission6586/Reviewer_8mWq" ], [ "ICLR.cc/2025/Conference/Submission6586/Reviewer_8mWq" ], [ "ICLR.cc/2025/Conference/Submission6586/Area_Chair_FXVZ" ], [ "ICLR.cc/2025/Conference/Submission6586/Reviewer_KvvZ" ], [ "ICLR.cc/2025/Conference/Submission6586/Reviewer_cx4Y" ], [ "ICLR.cc/2025/Conference/Submission6586/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Very thank you for your helpful comments. Below, we will explain your concerns in detail:\\n\\nAbout Novelty\\n\\n1. We apologize for not elaborating on the differences between our method and other manually curated data collection approaches. 
In the revised version, we have provided a detailed comparison:\\nThe primary distinction between Retrieve-from-CC and manually curated datasets lies in the reduction of human effort required to collect domain-specific data. Manually curated domain-specific datasets typically demand substantial human labor for data collection and filtering. For instance, the Pile dataset contains 22 different web domains, each requiring considerable human input for collection, formatting, and initialization. The OpenWebText dataset employs multiple filters designed to extract high-quality domain-specific text. In contrast, Retrieve-from-CC collects high-quality domain-specific data by simply providing relevant keywords, thereby minimizing the need for manual intervention.\\n\\n2. I have some confusion regarding the term \\\"distinctive features\\\" that you mentioned. Could you clarify what specific aspects you are referring to? Our method requires only a few keywords to collect domain-relevant data. While collecting better domain-specific datasets as seed data, as in your example, could indeed enhance query quality and relevance, it would also increase human costs. Moreover, this is not the primary focus of our paper.\\n\\nAbout Methodology\\n\\n1. As mentioned in our paper, in our approach, LLMs only generate the query and do not directly affect the quality of the final data. Moreover, compared to manual data collection at this scale, model generation is not costly.\\n\\n2. In our paper, the retriever's role is solely to collect relevant documents, and there is no necessity to gather extremely precise data. Of course, a better retriever could fetch more relevant data. However, as noted earlier, this is similar to the choice of data sources and is not the primary focus of our paper. Nevertheless, the points you raised are crucial for refining the data, and we will continue to improve the process in future work.\\n\\n3. 
We have discussed in the paper the potential risks of hallucination in LLMs. Since our data is retrieved from publicly available corpora, the hallucination issue does not directly affect the quality of our data.\\n\\nAbout Evaluation\\n\\n1. We acknowledge the extensive benchmarking involved in the pre-training process. However, since the core focus of this study is to analyze the effectiveness of Retrieve-Pile, the \\\"forgetting effect\\\" was not included in this experiment. We will discuss the benchmarking results related to forgetting in future work.\\n\\n2. We understand the importance of comparing auto-generated datasets with manually curated datasets. To this end, we have compared the \\\"educational value\\\" and \\\"QuRating\\\" of our data with those of manually curated datasets in the data quality analysis section. This comparison contrasts the data quality of Retrieve-Pile and human-curated data. While model-based evaluations are also valuable, we can include a direct comparison with human-curated datasets in future versions and demonstrate the performance differences. Although we acknowledge the merit of comparing with other synthetic data generation methods, we have not included this in the current paper. We consider the data we collect to be real data, not synthetic. Nevertheless, we will add comparisons with related synthetic data work in subsequent revisions to clarify this issue.\\n\\nFor Suggestions\\n\\n1. We apologize for the confusion; however, our work is not directly related to synthetic data. The primary motivation of this paper is to leverage the generalization capabilities of large models to generate queries based on domain-specific keywords and retrieve domain-relevant data, which is essentially a data collection task. We will carefully consider this point in future revisions and provide a discussion comparing synthetic data.\\n\\n2. We sincerely apologize for any inconvenience caused to your reading. 
We are currently revising the paper to improve its clarity and readability.\\n\\nOnce again, we appreciate the time and effort you spent reviewing our paper and providing valuable feedback.\"}", "{\"summary\": \"The authors propose Retrieve-from-CC, a data collection pipeline consisting of (a) a query generation process and (b) a document retrieval process based on the generated queries. They argue that their proposed method makes it possible to automate the data collection process for high-quality domain-specific data.\\nThe resulting dataset is then evaluated with regard to its composition (sources and domains) and its data quality (quantified as QuRating).\\nFinally, the authors use their newly composed dataset to fine-tune two LLMs (based on Llama2 and Mistral) and to train a Llama model from scratch. These models are then evaluated with regard to their performance on various benchmark datasets on mathematical and knowledge-oriented language understanding tasks.\\nTheir experiments indicate that their newly collected dataset of domain-specific high-quality data can be used for fine-tuning LLMs and improve their performance on tasks that require specific knowledge. They additionally show that there is little to no dataset contamination when comparing the downstream task data with their own data.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"extensive experiments, showing that their proposed data is helpful for training \\u201csmaller\\u201d (i.e. 7B Parameter) LLMs on domain-specific tasks\", \"automating the process of curating high-quality domain-specific data\", \"addressing the issue of data contamination\", \"The authors propose a method of automating the process of curating high-quality data in specific domains. 
Furthermore, they showcase how their data can be used to fine-tune LLMs and improve their performance on benchmark tasks within the respective domain.\"], \"weaknesses\": \"1) writing and language throughout the paper needs improvement. This makes the present work difficult to follow at times:\\n\\nl. 67 \\u201cOtherwise, [\\u2026]\\u201d -> additionally?\\nl. 70 \\u201c[\\u2026] continuing learning [\\u2026]\\u201d -> unclear if they talk about fine-tuning\\nl. 82 \\u201c[\\u2026] for the quality and statistical of Retrieve-Pile [\\u2026]\\u201d -> statistics?\\nl. 82 \\u201cWe statistic [\\u2026]\\u201d -> usage as a verb?\\n\\nParagraph starting at l. 179: Is question evolution the same as question extension from the previous paragraph?\\n\\nl. 237 \\u201cOtherwise, we discuss about the different when improving different [\\u2026]\\u201d\\n\\nUnfortunately, these are only some highlighted examples where the quality of presentation is lacking.\\n\\n2) exposure of results:\", \"table_4\": \"It would be beneficial to mention the reported metrics. If not here, then at the description of the benchmark datasets.\\n\\nThis issue is persistent in most of the evaluation section. The authors report an increase of performance in \\u201cpoints\\u201d on multiple occasions, which frankly speaking could mean anything.\\n\\nIn the case of data quality, a little more in-depth explanation of the QuRating would be helpful in understanding the results. As this is by now not a wide-spread metric, it would be beneficial to explain how the values are obtained and what exactly they mean.\\n\\nOverall, the authors could improve the exposition of results by providing a more detailed explanation of the used metrics, as this is an important bit of information for the reader.\\n\\n3) focus:\\nOverall, the focus of the paper is not very well defined. First, the authors introduce a dataset collection method and provide a detailed overview of the collected dataset. 
For the present work to be a resource paper, the proposed Query Bootstrapping method is not explained in sufficient detail. The paragraphs on \\u201cQuestion Extension\\u201d, \\u201cThought Generation\\u201d and \\u201cQuery post processing\\u201d are rather vague.\\n\\nThe other side of the spectrum would be a paper on an empirical study. For this to be the case, the evaluation section would need to be more detailed (regarding reported metrics).\\nIn addition to that, following one of the authors\\u2019 main arguments (automating the process of collecting high-quality domain-specific data), it would be nice to see how LLMs that are trained on their data perform vs. LLMs that are trained on hand-crafted datasets.\\n\\nMy overall impression is that the authors should focus on either the resource side of their work or the empirical side. In its current state, the lack of focus in combination with an (at times) poor presentation makes the paper appear inconsistent and at times hard to follow.\", \"questions\": \"My suggestion for improvement would be to include more details about the Query Bootstrapping method and the metrics reported in the empirical section. I am confident that the language issues could be easily resolved as well (maybe even with the help of an LLM).\\nMy main issue, however, is the unclear focus of the paper. For it to be a good resource/method paper, the data collection process should be described in more detail. For it to be a good empirical study paper, the experiments should reflect the argument of automating the dataset collection process vs. manual creation of a dataset. Unfortunately, this would require major revisions and potentially additional work. 
(The result however, might be two good papers (one with focus on the data, and another one with focus on the empirical evaluation), as the underlying questions are relevant and interesting)\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work presents an automated pipeline to collect domain-specific data from public corpora by leveraging large language models (LLMs) for query expansion and BM25 for efficient retrieval. The resulting dataset spans multiple domains, including STEM, humanities, social science, and Misc. While the method emphasises scalability and cost-effectiveness over manual curation, it also faces challenges in ensuring data quality and distinctiveness compared to existing human-curated datasets. Experimental results indicate that models trained on the proposed dataset are improved in several reasoning benchmarks.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"This work addresses a well-motivated goal: developing an accurate and scalable approach to extract training data from the evolving web, which is essential for keeping LLMs up-to-date.\", \"Experimental results show clear improvements in LLM performance on the listed benchmarks when further trained on the proposed dataset.\", \"The authors provide a detailed analysis of the proposed dataset, including multiple evaluation factors. 
Notably, the data leakage analysis between the pre-training dataset and the evaluation benchmarks is a valuable addition, helping to ensure the integrity of the results.\"], \"weaknesses\": [\"### About Novelty\", \"The authors have not clearly demonstrated how the proposed dataset differs from or adds unique value compared to existing high-quality, human-curated datasets, including the ones shown in Table 1.\", \"Additionally, the proposed pipeline lacks distinctive features that would make this automated construction process stand out as an innovative or superior alternative. For example, since this work focuses on domain-specific knowledge, it can be beneficial to leverage knowledge bases or other kinds of structured data to help improve the relevance and accuracy of the data points.\", \"### About Methodology\", \"While the authors emphasise scalability, fully relying on LLMs to refine and expand queries is both computationally expensive and prone to errors.\", \"Although BM25 offers efficiency and scalability in the retrieval phase, it does not guarantee high accuracy in the retrieved `(query, answer)` pairs. Even if standard dense retrieval techniques were employed, achieving consistently high accuracy would remain challenging.\", \"Consequently, errors introduced during both the query generation and retrieval phases could propagate, potentially compromising the overall quality of the final dataset.\", \"### About Evaluation\", \"Given that LLM pre-training typically involves a broad set of evaluation benchmarks, this work lacks an analysis of potential \\\"forgetting\\\" on benchmarks (e.g., HELM, GLUE, LLM leaderboard) not included in the listed experiments.\", \"When comparing with other pre-training data, if you aim to demonstrate that this automatically generated dataset holds value even against human-curated datasets, it would be essential to directly compare their performance. 
Ideally, this comparison would show that the proposed dataset lags only by a small margin. Alternatively, comparing it with well-established synthetic data generation methods (also discussed in Suggestion 1) would also help substantiate the dataset's value and highlight areas for potential improvement.\"], \"questions\": [\"### Suggestions\", \"Given the fully automated nature of the proposed pipeline, the resulting dataset is more like a synthetic dataset. This raises a different research question of how to generate high-quality synthetic datasets effectively and how this method compares to existing synthetic data generation approaches.\", \"I recommend that the authors invest effort in refining the paper's writing, as several language issues currently affect the fluency of reading.\", \"I recommend ensuring consistency in terminology throughout the paper. For instance, terms like 'Retrieve-Pile' and 'Knowledge Pile' appear to refer to the same dataset, which could cause confusion.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"n/a\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your responses. After reviewing the rebuttal, I would consider this work to be a borderline case.\\n\\nOn the positive side, the paper adopts a reasonable pipeline for retrieving documents from public corpora, which leads to improved performance on certain benchmarks. However, there are a few aspects that could be strengthened:\\n\\n- The writing, as noted in my initial review, has caused several points of confusion. While the revision has addressed some of these issues to a certain extent, further improvements in clarity would benefit the paper. For example, it would be good to see examples illustrating how the proposed approach retrieves reasonable and relevant documents from public corpora.\\n- The retrieval criteria, while functional, are not entirely convincing.
That said, I understand that it may not be feasible for the authors to explore more comprehensive criteria within the constraints of the rebuttal phase.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Thanks for your helpful review; we will discuss these questions below.\\n\\n1. Regarding the scope of seeds, we acknowledge that seed data plays a significant role in our approach. Exploring a broader knowledge base as a retrieval seed can yield more diverse and higher-quality data, representing an important direction for future research. In this paper, however, we aim to demonstrate that leveraging the generalization ability of a large language model enables the collection of data relevant to a specific field using only a few keywords. Corresponding experiments have shown that the data collected using our method improves performance on domain-specific benchmarks. Additionally, enhancing the scope and quality of seeds warrants further attention. We plan to enhance this approach in future work to generate more generalized and higher-quality data.\\n\\n2. We apologize for omitting this comparative analysis in the earlier version. In the revised manuscript, we will include additional experiments to compare the performance of our model with domain-specific models. \\n\\n| | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine |\\n|-------------------------|-------------|------------------|---------|--------------|-----------------|------------------|\\n| Mistral 7B Instruct | 62.9 | 57 | 55.6 | 59.4 | 62.5 | 57.2 |\\n| BioMistral 7B (3-shot) | 60.9 | 61.7 | 49.6 | 55.1 | 56.9 | 55.5 |\\n| BioMistral 7B SLERP | 63.1 | 63.3 | 49.9 | 57.4 | 63.4 | 57.8 |\\n| Mistral-QoC | 70.57 | 74 | 60.74 | 58.82 | 77.08 | 67.63 |\\n\\nAs shown in the table above, Mistral-QoC outperforms BioMistral across all MMLU-related subsets.
This may be attributed to two factors: firstly, the retrieval of CC allows for the collection of a large amount of data at a minimal cost; secondly, most medical domain data is not concentrated, and BioMistral only utilizes a limited subset of PMC Open Access data, which restricts its sources and volume.\\n\\n3. In our evaluated benchmark, BBH is a highly complex and general reasoning benchmark that is less directly aligned with our approach and seed information. Nevertheless, it demonstrated a significant improvement.\\n\\nThank you for your review. If you have any further questions, we will respond promptly.\"}", "{\"comment\": \"Thank you for your timely response. Below, we will answer your questions in detail:\\n\\n1. What features or processes ensure that the generated datasets are relevant to the target domains in your framework?\\nThis study primarily focuses on generating relevant queries using a limited set of keywords and retrieving high-quality, domain-specific documents from public corpora. While utilizing high-quality knowledge bases could enhance the relevance of retrieved documents, this is not the primary focus of our work. Our approach does not impose strict domain limitations on the generated datasets. However, the results demonstrate that our dataset significantly improves performance in the target domains (as shown in Section 4.1). Incorporating knowledge bases or exploring methods to analyze and constrain the relevance of retrieved data to target domains could further enhance our approach. We plan to address this aspect in future work.\\n\\n2. Origin of Keywords in the Seed Information:\\nThe initial seed information was initialized based on the classifications proposed by Dan Hendrycks et al. Keywords were derived from these categories, supplemented with others manually collected from the internet. 
The initial number of keywords was limited; the expansion of the keyword set primarily relied on iterative processes involving question extension and thought generation. The generated texts were added to the seed information database, enabling further iterations to enrich the query. \\n\\n3. Domain-Specificity and Coverage of Several Topics:\\nWe believe that \\\"domain-specific\\\" and \\\"covering several topics\\\" are not mutually exclusive. Our primary focus is on enhancing datasets for the domains highlighted in Hendrycks et al. (2021a), which have lacked open-source datasets specifically constructed for these areas. Our dataset addresses this gap. Additionally, we evaluated our approach using the BIG-Bench Hard dataset, which focuses on complex reasoning tasks. We observed that training on large-scale knowledge-based data significantly improved the model's performance on such tasks. \\n\\nWe sincerely appreciate the reviewer\\u2019s insightful feedback. Should you have further questions, we will respond promptly.\\n\\n\\n[Hendrycks D et al.] Measuring massive multitask language understanding\"}", "{\"comment\": \"We greatly appreciate your valuable and insightful comments, which have been highly instructive. We are revising the manuscript in the following areas:\\n\\n1. We will carefully revise the manuscript to improve clarity and readability, ensuring that the language issues do not detract from the quality of the paper.\\n\\n2. In response to your suggestions regarding the presentation of results, we will revise all relevant sections to improve clarity and ensure better comprehension. Specifically, we will provide a more detailed explanation of the emerging QuRating metric, including its calculation method and significance, to enhance the reader's understanding of the experimental outcomes.
Furthermore, with respect to the reported performance improvements (represented by 'points'), we will explicitly clarify the meaning of these 'points' in both the tables and the discussion section, providing the necessary background and context.\\n\\n3. We understand the reviewer\\u2019s concern regarding the lack of clarity in the focus of the manuscript. To address this, we plan to improve the manuscript\\u2019s structure and emphasis to better highlight its key contributions. Specifically, we will clarify the central argument and how the two aspects of the study, dataset collection automation and empirical evaluation, are interrelated. By doing so, we will more clearly distinguish between the two parts, ensuring a more cohesive narrative throughout the manuscript.\\n\\nWe sincerely thank you once again for your thoughtful feedback, which has been instrumental in guiding our revisions and improving the overall quality of our manuscript.\"}", "{\"comment\": \"I appreciate the authors' additional contributions. The revisions to the Evaluation section effectively enhanced the overall quality of the manuscript.\"}", "{\"summary\": \"This paper investigates the problem of bootstrapping domain knowledge from general public corpora in order to reduce the cost and time of manual data collection for domain-specific corpora. Using manually defined seeds, the presented approach, Retrieve-from-CC, first identifies seed examples in a large input corpus using BM25. For every retrieved record, an LLM generates questions, and responses are augmented with Chain-of-Thought sequences. After a quality filtering step, the approach outputs a domain-specific dataset.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Generating high-quality datasets for augmenting the capabilities of LLMs in less covered domains is highly relevant for the open-source as well as the professional community.
Data is the key to LLM success, and this paper aims to contribute to this matter.\", \"weaknesses\": [\"*Approach*:\", \"seeds: A downside of the approach is that it needs good seeds as input. At the same time, general-domain knowledge bases (dbpedia, yago) cover almost all domains to some degree. While the data goes beyond simple key phrases, the domains are probably only covered partially. The authors should consider leveraging this partial knowledge for generating input data for the bootstrapping phase. This drops the manual input requirement and might improve the dataset further.\", \"*Evaluation*:\", \"My biggest criticism of the paper is that the authors didn't compare against domain-specific LLMs. The question: \\\"How does an LLM trained over a corpus generated with Retrieve-from-CC compare against domain-specific LLMs?\\\" is highly relevant for this paper and is not answered. For instance, one could compare against LLMs from the [Open Medical-LLM Leaderboard](https://huggingface.co/spaces/openlifescienceai/open_medical_llm_leaderboard). There are probably other such domain-specific resources.\", \"Misc: Typo generted in Figure 2\"], \"questions\": \"Beyond the tasks you evaluated, were there any performance changes after you further trained the LLM with Retrieve-Pile?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"**Summary**\\n\\nFinding reliable data for LLMs is the challenge this paper focuses on. The authors propose Retrieve-from-CC, a data collection pipeline consisting of a) a query generation process based on existing language models and b) a document retrieval process based on the generated queries.
LLMs are used for query expansion.\\n\\n**Strengths**\\n\\nThe goal of the paper is extremely important, as finding reliable data is one of the big challenges in improving LLMs.\\n\\n**Weaknesses**\\n\\n- An evaluation of the method in terms of the quality of the data does not seem to be provided.\\n- The retrieval criteria should be expanded and explained.\\n\\n**Final remarks**\\n\\nThe paper fails to convince the reviewers that have interacted with the authors.\", \"additional_comments_on_reviewer_discussion\": \"The discussion has been fruitful, and the positions of the authors and of the reviewers have been clarified.\"}", "{\"summary\": \"This paper provides a method called Retrieve-from-CC to curate domain-specific data to train large language models. It uses a two-phased approach where initial query keywords are given by humans and an LLM is used to generate queries, which are fed to a retriever (BM25) to gather data that is relevant for a specific domain. The authors publish a benchmark, Retrieve-Pile, covering four domains: sciences, humanities, social sciences, and miscellaneous. The paper shows that using this dataset helps in improving performance on some of the mathematical benchmarks along with standard language benchmarks.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper describes a method for collecting domain-specific training data without manual intervention, which is beneficial for making LLMs perform better in domains of interest and making them more suitable for practical applications in a domain of interest. The method being automatic and showing improvements over the base LLMs is promising and can be beneficial in gathering the large data that LLMs need for their training. Empirical results show that the generated data yields significant improvements when used in LLM training. Quality metrics also show that the curated data is of good quality.
The data pile created shows improvements during pre-training as well as when further training existing open models, which indicates that the generated data is of good quality.\", \"weaknesses\": \"In some places, Knowledge-Pile is used without being introduced anywhere. I guess it's a typo: it is used instead of Retrieve-Pile in the evaluation section, tables, figures, etc. This needs to be corrected. In a lot of places, a space is missing after Retrieve-Pile, and other typos need to be corrected as well.\", \"questions\": \"Which LLM is used for the query generation module? Did you see any difference in the queries generated with various LLMs?\\nThe assumption is that LLMs are not great at domain-specific tasks; how does that impact your automatic query generation? Did you analyze the quality of the queries generated for the domains?\\nIs it the same model that is used for query generation, or is a bigger LLM used?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your response. Here are some immediate thoughts after reading it (I will post more if any):\\n\\n**Regarding your comment: \\\"I have some confusion regarding the term \\\"distinctive features\\\" that you mentioned.\\\"**\\n\\nTo clarify my earlier point about \\\"distinctive features,\\\" I was referring to the mechanisms or methods you employ to ensure the actual relevance of the datasets. For example, leveraging knowledge bases is just one potential approach for verification, but my suggestion is not limited to this method. The key question is: what features or processes are in place in your framework to ensure the generated datasets are actually relevant to the domains?\\n\\n**Regarding your comment: \\\"Our work is not directly related to synthetic data.\\\"**\\n\\nUpon closer examination of the paper, it seems that the final data points come from public corpora rather than being generated by LLMs.
If this understanding is correct, then the dataset indeed would not fall under the category of synthetic data. I would consider the points regarding synthetic datasets addressed.\\n\\n**Regarding your comment: \\\"In our approach, LLMs only generate the query and do not directly affect the quality of the final data.\\\"**\\n\\nI believe this statement may not be fair. LLMs are responsible for generating \\\"anchors\\\" used to retrieve or select useful documents. Since this step plays a critical role in shaping the final dataset, any irrelevant or low-quality information generated during this process could potentially impact the overall quality (especially the relevance) of the data.\\n\\n**Further questions**:\\n\\n- Could you clarify the origin of the keywords in the seed information? Are they manually curated, automatically generated, or derived from another source?\\n\\n- Another point of confusion is that this work claims to be domain-specific, yet the collected datasets encompass several topics. Nevertheless, I do appreciate the authors\\u2019 response to reviewer 8mWq to include at least an analysis focused on a particular domain.\\n\\n**A further suggestion**:\\n\\nSince the ultimate datasets are essentially subsets of existing pre-training datasets, further training on these subsets does not inherently guarantee better quality or performance. To validate this claim, it would be beneficial to include a comparison. For instance, you could evaluate LLMs trained on data points that are excluded from these subsets (not necessarily the entirety of the excluded data due to scale, but a reasonable portion).\"}", "{\"comment\": \"We sincerely thank the reviewer for their thoughtful comments on our paper. We will address your questions in detail below:\\n\\n1. Regarding the weaknesses, we apologize for the typographical errors present in the paper, which have been corrected in the revised version.\\n\\n2. 
We use the LLaMA2 13B model to generate all queries, including both questions and thoughts. We apologize for not providing a detailed explanation of the data collection process in the previous version. This content will be included in the revised version.\\n\\n3. In this paper, our primary goal is to collect high-quality domain-specific data, which makes the choice of LLMs relatively independent. However, the use of different models for query generation in retrieval is an area for promising future work, which seems to be of significant importance.\\n\\n4. In Retrieve-from-CC, the method does not rely on the model's performance on task-specific tasks in this field, but rather on its ability to generalize. For instance, the LLaMA 13B chat model, despite achieving only 54.8% accuracy on the MMLU benchmark, enables the smaller LLaMA 7B model to achieve or even surpass this performance when used to build Retrieve-Pile.\\n\\n5. Due to the hallucination problem in large models, we found that the queries generated by these models are often inaccurate. However, since we use the queries generated by the large model solely to retrieve data from a public corpus, the quality of the queries only minimally affects the final retrieved corpus.\\n\\n6. In the query generation process, we use the LLaMA2 13B model to generate the queries. During the training phase, to optimize training costs, we utilize the LLaMA2 7B model.\\n\\nWe greatly appreciate your valuable feedback, which will undoubtedly help us improve the quality of our work.\"}" ] }
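The retrieval step discussed throughout the record above — LLM-expanded queries scored against a public corpus with BM25 — can be sketched with a from-scratch Okapi BM25 scorer. This is an illustrative sketch only: the corpus, query, and parameter values below are assumptions for demonstration, not taken from the actual Retrieve-from-CC pipeline.

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each document against the query with Okapi BM25."""
    tokenized = [d.lower().split() for d in docs]
    avgdl = sum(len(d) for d in tokenized) / len(tokenized)
    N = len(tokenized)
    df = Counter()                      # document frequency per term
    for d in tokenized:
        df.update(set(d))
    scores = []
    for d in tokenized:
        tf = Counter(d)                 # term frequency in this doc
        score = 0.0
        for term in query.lower().split():
            if term not in tf:
                continue
            # BM25+ style idf, always non-negative
            idf = math.log((N - df[term] + 0.5) / (df[term] + 0.5) + 1)
            score += idf * tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(score)
    return scores

corpus = ["the gene encodes a protein kinase",
          "stock prices fell sharply today",
          "protein folding and kinase activity in cells"]
query = "protein kinase"           # e.g. an LLM-expanded domain query
print(bm25_scores(query, corpus))  # first and third docs score highest
```

In a pipeline like the one described, each LLM-generated query would be scored this way against the public corpus, and the top-scoring documents kept as candidate domain-specific training data.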
8EB8k6DdCU
ToolACE: Winning the Points of LLM Function Calling
[ "Weiwen Liu", "Xu Huang", "Xingshan Zeng", "xinlong hao", "Shuai Yu", "Dexun Li", "Shuai Wang", "Weinan Gan", "Zhengying Liu", "Yuanqing Yu", "Zezhong WANG", "Yuxian Wang", "Wu Ning", "Yutai Hou", "Bin Wang", "Chuhan Wu", "Wang Xinzhi", "Yong Liu", "Yasheng Wang", "Duyu Tang", "Dandan Tu", "Lifeng Shang", "Xin Jiang", "Ruiming Tang", "Defu Lian", "Qun Liu", "Enhong Chen" ]
Function calling significantly extends the application boundary of large language models (LLMs), where high-quality and diverse training data is critical for unlocking this capability. However, collecting and annotating real function-calling data is challenging, while synthetic data from existing pipelines often lack coverage and accuracy. In this paper, we present ToolACE, an automatic agentic pipeline designed to generate accurate, complex, and diverse tool-learning data, specifically tailored to the capabilities of LLMs. ToolACE leverages a novel self-evolution synthesis process to curate a comprehensive API pool of 26,507 diverse APIs. Dialogs are further generated through the interplay among multiple agents, under the guidance of a complexity evaluator. To ensure data accuracy, we implement a dual-layer verification system combining rule-based and model-based checks. We demonstrate that models trained on our synthesized data---even with only 8B parameters---achieve state-of-the-art performance, comparable to the latest GPT-4 models. Our model and a subset of the data are publicly available at https://huggingface.co/Team-ACE.
[ "Tool learning", "Function calling", "Large language models" ]
Accept (Poster)
https://openreview.net/pdf?id=8EB8k6DdCU
https://openreview.net/forum?id=8EB8k6DdCU
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yT2gFaaChE", "xX0Zedqr30", "x1KHEUhzR5", "wdp4FMPE77", "vNhJbCQlRh", "unmuIWonar", "sYIfjQzd5e", "sSTkKqxPoe", "puHUu71IdW", "oc9bRDH2Qs", "g8cgECC9aZ", "dGTh5wXSew", "cUhGzJ2pae", "bs2tjBWMAr", "OHqPtBXaLV", "K05zMJfW5H", "Id8dNssweI", "IHu5uo5P5F", "FBeVNiEuBH", "F5GauwlGiz", "8V025QlOb8", "499u0HmPzn", "3g2M8WgXiU" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730997075800, 1731853016228, 1732001743904, 1731853995601, 1731853244910, 1730617867757, 1732005675502, 1732496962011, 1730022190694, 1734543106473, 1730185429328, 1731853723300, 1731853160817, 1732126567628, 1731854138458, 1737523506475, 1732087586159, 1732514740554, 1731852869645, 1732375352884, 1731853447808, 1732070483814, 1731854261403 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2471/Reviewer_Pz3q" ], [ "ICLR.cc/2025/Conference/Submission2471/Authors" ], [ "ICLR.cc/2025/Conference/Submission2471/Reviewer_AcuD" ], [ "ICLR.cc/2025/Conference/Submission2471/Authors" ], [ "ICLR.cc/2025/Conference/Submission2471/Authors" ], [ "ICLR.cc/2025/Conference/Submission2471/Reviewer_AcuD" ], [ "ICLR.cc/2025/Conference/Submission2471/Authors" ], [ "ICLR.cc/2025/Conference/Submission2471/Authors" ], [ "ICLR.cc/2025/Conference/Submission2471/Reviewer_ygcU" ], [ "ICLR.cc/2025/Conference/Submission2471/Area_Chair_fjTy" ], [ "ICLR.cc/2025/Conference/Submission2471/Reviewer_ViF9" ], [ "ICLR.cc/2025/Conference/Submission2471/Authors" ], [ "ICLR.cc/2025/Conference/Submission2471/Authors" ], [ "ICLR.cc/2025/Conference/Submission2471/Reviewer_ViF9" ], [ 
"ICLR.cc/2025/Conference/Submission2471/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2471/Authors" ], [ "ICLR.cc/2025/Conference/Submission2471/Reviewer_ViF9" ], [ "ICLR.cc/2025/Conference/Submission2471/Authors" ], [ "ICLR.cc/2025/Conference/Submission2471/Authors" ], [ "ICLR.cc/2025/Conference/Submission2471/Authors" ], [ "ICLR.cc/2025/Conference/Submission2471/Reviewer_ygcU" ], [ "ICLR.cc/2025/Conference/Submission2471/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper tries to improve the function calling capability of LLM by finetuning on a newly collected function calling dataset. Specifically, the authors propose a pipeline to collect new API usage data. This dataset is then used to finetune llama 8b model and shows comparative performances on two API using benchmarks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"Collecting new data in a scalable way is important\", \"The performance looks interesting as well by finetuning a small model\"], \"weaknesses\": \"The experiment section is not quite convincing yet. Since the authors want to show the effectiveness of using their newly collected API (which according to table 1) is much more comprehensive, the authors should compare the performance obtained by finetuning on Table 1 datasets e.g. ToolLLM and that obtained by finetuning on their newly collected API data\", \"questions\": [\"As mentioned in weakness, additional experiment with other baselines should be included e.g. ToolLLM. Even the authors provide some results of xLAM in Table 2, I noticed that they are tuned on different base models other than 8b. So it is hard to draw conclusions and give credit to the dataset itself or to the base models. According to Table 3, 8b seems already very good at API calling evaluations.\", \"The evaluation and experiment process is not quite clear. e.g. what are the benchmark APIBank, BFCL evaluating? 
What are their input, output, ground truth, etc? What is the metric used in Table 2, Table 3?\", \"Table 2 has many categories in the performances: Single Turn, Multi Turn, Live, Hallucination. Compared to the base model used by author (llama 8b), some categories have only limited performance gain while some could be much higher due to finetuning (Multi turn), thus leading to a higher average score. I don't understand these comparison categories and I don't see authors' analysis in that.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Question 2\", \"comment\": \"**Q2:** **What are the benchmark APIBank, BFCL evaluating? What are their input, output, ground truth, etc? What is the metric used in Table 2, Table 3?**\\n\\n**A2:** Both the BFCL and API-Bank benchmarks assess the function-calling capabilities of LLMs, but they differ slightly in their evaluation setups.\\n\\n- **BFCL**\\n The Berkeley Function-Calling Benchmark (BFCL) is a comprehensive evaluation framework for assessing the function-calling capabilities of LLMs across various languages, application domains, and complex use cases. BFCL covers tasks including multiple function calls, parallel function calls, multi-turn function calls, and multi-step function calls. BFCL contains 4,951 test cases: 3,951 single-turn cases and 1,000 multi-turn cases, focusing on dynamic, real-world scenarios.\", \"bfcl_splits_the_evaluation_into_three_categories\": [\"**Single-turn:** Evaluating function calls in a single interaction. Single-turn is further evaluated in three settings: non-live (AST), non-live (Executable), and live (AST). Non-live (AST) compares the abstract syntax tree of the function output to the ground truth and the function definition. Non-live (Executable) assesses the accuracy of the generated API call by executing it and comparing the output with the ground-truth output. 
Live (AST) employs live, user-contributed function documentation and queries with abstract syntax tree comparison, avoiding the drawbacks of dataset contamination and biased benchmarks.\", \"**Multi-turn:** Evaluating the ability to maintain state and make function calls across multiple interactions, making it possible for LLMs to navigate through complex tasks by asking clarifying questions.\", \"**Hallucination:** Evaluating whether the model generates irrelevant or incorrect responses, rather than valid function calls. Hallucination is further categorized into relevance and irrelevance detection. Relevance evaluates the model's ability to output function calls relevant to the user query. Irrelevance measures the model's ability to refrain from making function calls given irrelevant user queries.\", \"**API-Bank**\", \"API-Bank consists of 314 tool-use dialogues with 753 API calls to assess LLMs\\u2019 capabilities in planning, retrieving, and calling APIs, with 363 single calls and 122 multiple calls. 
API-Bank assesses LLM performance across two capabilities:\", \"**Call:** The ability to call an API based on a given query when the APIs are known.\", \"**Retrieval+Call:** The ability to retrieve and call a single API when the APIs are unknown.\", \"**Common Input, Output, and Ground Truth for Both Datasets:**\", \"**Input:** The model receives a list of candidate tools (e.g., available APIs or functions) and the conversation history, which includes the user\\u2019s request or query.\", \"**Output:** The expected output is the model\\u2019s function call (e.g., an API request) or, in the case of hallucinations, an irrelevant or erroneous response in natural language.\", \"**Ground Truth:** The ground truth is the correct function call, which is either determined by matching the generated function call with a predefined correct one or by checking the execution results for valid function calls (if execution is possible).\", \"**Evaluation Metric:**\", \"The metric used in Table 2 and Table 3 is **accuracy**, which measures the proportion of correct function calls generated by the model, as compared to the ground truth.\", \"A more precise explanation of the benchmarks is included in **Appendix C.1** in our revision.\"]}", "{\"title\": \"Thank you for your feedback\", \"comment\": \"For W1 and W3, the response is good.\\n\\nFor W2 and A2, Maybe you think that introducing a lot of concepts (e.g., fancy adjectives) will make the paper more innovative, but from the reader's point of view, it will only distract from what the paper is trying to argue. No matter how much you try to explain.\\n\\nI will keep my score and hope I good results.\\n\\nThanks.\"}", "{\"title\": \"Responses to Weakness 4, 5, 6\", \"comment\": \"**W4:** **While the paper compares ToolACE to several other function-calling models, the comparison is often superficial. 
The benefits of using ToolACE versus simpler data augmentation techniques are not well articulated, and it is unclear how much of the improvement can be attributed to the synthesis method versus the increased volume of data.**\\n\\n**A4:** Thanks for the valuable point. We recognize the importance of a more fair comparison and have conducted additional experiments under a controlled setting in revision in Appendix E.1. Specifically, we compare the performances of using ToolACE and other state-of-the-art function-calling data (ToolLLM and xLAM) to train the same base model (Llama3.1-8B). All data is uniformly sampled to 25,000 for a fair comparison. The corresponding results on BFCL are presented in the table below. These results demonstrate that the model trained with our data consistently outperforms the other models in all categories, further validating the effectiveness of our approach. Notably, the model trained on the xLAM dataset exhibits relatively poor performance in irrelevance detection, likely due to a lack of diverse sample types, such as cases where provided tools cannot solve the task. Moreover, the ToolLLM dataset, which primarily focuses on multi-step and dependent cases, demonstrates weak generalization on the BFCL benchmark.\\n\\n**Table 2. Performances of training with different training datasets.**\\n| Training data | Overall | Non-live(A) | Non-live(E) | Live(A) | Multi turn | Rel | Irrel|\\n| -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- |\\n| ToolLLM(2.5w) | 24.90 | 42.46 | 36.36 | 39.45 | 0.00 | 100.00 | 4.41 |\\n| xLAM(2.5w) | 40.51 | 81.94 | 81.77 | 43.18 | 4.38 | 73.17 | 11.87 |\\n| ToolACE(2.5w) (Ours) | **58.19** | **86.96** | **84.73** | **71.35** | **16.50** | **75.61** | **86.42** |\\n\\n**W5:** **The paper claims that ToolACE-8B is competitive with GPT-4 series models. 
However, it does not fully address the limitations of ToolACE-8B in terms of generalization and applicability to a broader range of tasks beyond function calling. A more detailed discussion of these limitations would provide a more balanced perspective.**\\n\\n**A5:** We thank the reviewer for the valuable suggestion. ToolACE-8B is specifically designed for function calling tasks. We highlight the competitiveness of ToolACE in the domain it was designed to excel at. Meanwhile, to show the generalization ability of ToolACE, we have conducted experiments to assess ToolACE-8B\\u2019s performance on broader tasks in Section 3.6. The results, shown in Figure 8, demonstrate that ToolACE-8B maintains competitive performance relative to its base model, Llama3.1-8B-Instruct, across general tasks such as coding, math, and reasoning. Our findings highlight that a smaller, specialized model like ToolACE-8B can outperform a more generalized model like GPT-4 in areas where it is specifically optimized, while still preserving robust general performance. \\n\\nOur experiments demonstrate that ToolACE-generated data can significantly enhance function-calling capabilities without substantially compromising other abilities. However, as you pointed out, we have not investigated how to simultaneously improve other capabilities alongside function-calling performance, which remains an open question in the field. This issue is beyond the scope of this paper, as our primary goal is to develop a specialized model for function calling. Nonetheless, our data synthesis approach may offer insights for other domains, such as strategies to enhance data accuracy, diversity, and complexity.\\n\\n\\n**W6:** **The font size in Figures is too small, which is unclear for readers.**\\n\\n**A6:** Thanks for the suggestion! 
We have updated the figures in our revised manuscript (uploaded to the system).\"}", "{\"title\": \"Response to Question 3\", \"comment\": \"**Q3:** **Compared to the base model used by the authors (Llama 8B), some categories have only limited performance gain while some could be much higher due to fine-tuning (Multi-turn), thus leading to a higher average score. I don't understand these comparison categories.**\\n\\n**A3:** Compared with Llama-3.1-8B-Instruct, the fine-tuned model achieves significant improvements across all categories, and the improvement on multi-turn is much higher, as illustrated in Table 2. This can be attributed to two major reasons:\\nFirst, multi-round function calling is more challenging than single-turn calling, and the failure probability compounds over turns (i.e., the error rate increases from $p$ for a single turn to $1-(1-p)^n$ over $n$ turns; for example, $p=0.1$ and $n=5$ gives $1-0.9^5 \\approx 0.41$). Second, benefiting from the diversity of the ToolACE dataset, multi-turn and dependent samples are included in the training data, enhancing the model's ability to deal with such long-context problems.\\n\\nAdditionally, the gain in irrelevance detection (Irrel) is also high; this may be because Irrel measures the model's ability to NOT call functions given irrelevant user queries, which requires the model to precisely understand the user intents and the tool functionality. The Irrel score reflects a different aspect of function calling: it measures the model's ability to withhold action when inappropriate, which requires a higher level of judgment and comprehension. ToolACE covers a more diverse set of examples, including multi-type data with both function-calling and non-tool data, which better helps the model learn when to refrain from invoking a tool.\\n\\n**Table 2.
Performance improvements on various categories.**\\n| Model | Overall | Non-live(A) | Non-live(E) | Live(A) | Multi turn | Rel | Irrel|\\n| -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- |\\n| Llama-3.1-8B-Instruct | 43.80 | 86.23 | 83.48 | 48.02 | 05.12 | 82.93 | 23.05 |\\n| ToolACE-8B (Ours) | 59.22 | 89.27 | 90.07 | 73.21 | 14.37 | 85.37 | 83.81 |\"}", "{\"summary\": \"Great idea and well written. The authors address the challenges of collecting accurate, diverse, and complex tool-usage data for LLMs by introducing a novel self-evolution synthesis process. ToolACE synthesizes a comprehensive API pool and generates data through agent-based dialogues, guided by a complexity evaluator to ensure the difficulty level is suited to the model's capabilities. The paper presents dual-layer verification (DLV) to maintain data quality, combining rule-based checks and model-based validation. Experiments demonstrate that models trained on ToolACE data achieve state-of-the-art performance in function calling, outperforming models such as GPT-4 in specific benchmarks like BFCL and APIBank.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"ToolACE introduces a unique self-evolution synthesis method, which is a systematic approach to generating diverse and complex data for function calling, addressing a key limitation in existing tool-augmented LLMs. The paper provides extensive experiments and ablation studies, comparing ToolACE-trained models with existing benchmarks on widely used datasets like BFCL and APIBank, and demonstrating superior performance.\", \"weaknesses\": \"1. Please include a complete example of a prompt and LLM response in the appendix so that readers can intuitively understand how the process works in practice.\\n\\n2. The paper lacks clarity and involves overly complex technical concepts. 
Although constructing a simulated dataset and fine-tuning the model are effective approaches to enhancing the LLM's function call capabilities, the additional concepts introduced, such as Self-Evolution, Self-Guided, Dual-Layer, and Multi-Agent, make the main idea harder to discern, leading to confusion for the reader. While the authors may believe these terms add richness to the paper, they detract from its central focus.\\n\\n3. In the ablation study, it would be valuable to compare the Retrieval-Augmented Generation (RAG) approach for retrieving task-relevant tools with In-Context Learning to optimize tool usage. Given the same level of engineering effort, explore whether these methods could achieve results comparable to fine-tuning.\", \"questions\": \"In Figure 3, the \\\"without model\\\" approach occasionally outperforms the \\\"with model\\\" approach. Please provide an analysis to explain the reasons for this phenomenon.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you\", \"comment\": \"We thank the reviewer for the positive score. We're glad to learn that most of the concerns have been solved, and we'll make sure the writing is improved in our final version!\"}", "{\"title\": \"Follow-Up on Second-Round Responses to Your Comments\", \"comment\": \"Dear Reviewer ViF9,\\n\\nAs the end of the discussion period is approaching, we would like to kindly follow up to see if you have had a chance to review our second-round responses to your comments. We sincerely appreciate the valuable feedback you have provided, which has been essential in improving our work and providing a more balanced perspective. 
We have carefully addressed your suggestions and would be happy to clarify or address any remaining concerns you might have.\"}", "{\"summary\": \"ToolACE sampled API-related data from LLM pre-training corpora, obtaining 26,507 APIs, and used a User Agent - Assistant Agent - Tool Agent structure to synthesize appropriate conversational data to obtain API call training data. Rules + LLM were used during the process to ensure the effectiveness of the synthetic data. Finally, it used an 8B model with LoRA to validate the effects.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"4\", \"strengths\": \"1. Ranks third on the BFCL-v3 leaderboard (updated on 09/20/2024), and first among open-source models on API-Bank.\\n2. The relatively large volume of synthetic data demonstrates the benefits of model fine-tuning at a larger scale.\\n3. Covers various categories of tool calls including Nested, Parallel, Dependent, and Multi-type.\", \"weaknesses\": \"1. The paper shows high complexity with many unclear details. For example, how are the varying complexity levels (easy, medium, hard) actually defined?\\n2. While the paper repeatedly mentions Nested, Parallel, Dependent, and Multi-type, it doesn't analyze their connection to actual performance or conduct ablation studies.\\n3. The relationship between data scale and performance is not demonstrated, making it difficult to determine which aspects actually contributed to the effectiveness.\\n4. There are concerns about potential data leakage between BFCL and TSS - can the authors prove there isn't significant data leakage?\", \"questions\": \"1. Since Nested, Parallel, Dependent, and Multi-type are essentially subsets of programming language capabilities, if they are indeed effective, does this suggest that direct training with programming languages (like Python) would be better? Furthermore, is the Data Interpreter[1] approach of directly exposing tool interfaces through Python a better solution? 
This needs further analysis.\\n\\n[1] Hong, Sirui, et al. \\\"Data interpreter: An LLM agent for data science.\\\" arXiv preprint arXiv:2402.18679 (2024).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper proposes a method to improve tool-calling for LLMs by generating a dataset using a multi-agent framework. The paper mainly focuses on this data creation part, which consists of three components (Tool Self-evolution Synthesis, Self-Guided Dialog Generation, and Dual-Layer Validation Process), each of which uses LLM agents and evaluators. The goal is to create accurate, complex, and diverse data for training LLMs to perform function-calling. Experiments on BFCL and API-Bank demonstrate that ToolACE-8B (trained using supervised fine-tuning from LLaMA3.1-8B-Instruct) achieves promising results, even compared to GPT-4, on these specific benchmarks. The reviewers and AC appreciate the importance of curating new data as well as the promising results from the LLM trained using this dataset. The weakness of the paper is that the pipeline used to create the dataset is complex, with many details, and reviewers raised concerns about the depth of the experiments (e.g. how performance scales with data). Ultimately, despite the weaknesses, there is a unanimous decision to accept this work, and I think the demonstration of the value of data for function-calling will have broad interest and significance to the wider community.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers initially raised concerns on the experimental design, complexity of the data generation pipeline, and clarifications (e.g. on the prompt). The authors were able to provide detailed responses. A few questions from reviewers remained afterwards, and the AC highly encourages the authors to take these into account even after acceptance (e.g.
complexity of the pipeline itself, json/code representation, additional experiments to better understand the impact of the amount/type of training data). All reviewers reached a unanimous decision to accept after the rebuttal period.\"}", "{\"summary\": \"The paper \\\"ToolACE: Enhancing Function Calling with Accuracy, Complexity, and Diversity\\\" presents a novel data generation pipeline for function-calling tasks in LLMs. The approach leverages a tool self-evolution synthesis module, a self-guided dialog generation module, and a dual-layer verification module to create accurate, complex, and diverse tool-calling scenarios. ToolACE aims to improve LLMs' zero-shot function-calling capabilities by generating comprehensive training data that is validated through rule-based and model-based checks. The experiments show promising results, particularly with the ToolACE-8B model, which outperforms several existing LLMs.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The introduction of ToolACE's multi-step data generation, including evolutionary diversity and self-guided complexity, provides an innovative solution for generating complex and diverse function-calling data.\", \"The DLV system, combining rule-based and model-based checks, enhances the reliability of the generated data. This is a strong point, as it helps maintain data quality, which is critical for training LLMs effectively.\", \"The paper provides an extensive set of experiments, including comparisons with state-of-the-art models and an ablation study to assess the contribution of different components like accuracy, complexity, and diversity in the dataset. These experiments illustrate the potential benefits of the proposed pipeline.\"], \"weaknesses\": [\"The evaluation scenarios are limited to synthetic function-calling tasks and benchmarks like BFCL and APIBank. The paper would benefit from more realistic evaluations or applications in real-world tool usage scenarios.
This would better demonstrate ToolACE\\u2019s utility beyond controlled benchmark settings.\", \"The self-guided dialog generation process heavily relies on the LLM being trained to evaluate the complexity of generated data. This creates a circular dependency where the model is used both as a learner and an evaluator, which may introduce bias in the complexity estimation. More external validation or use of independent evaluators would make the results more robust.\", \"The use of complexity-based sampling to dynamically adjust dialog difficulty has merit but may lead to unintended biases, as data that is either too simple or too complex is filtered out. The approach may fail to fully explore the impact of diverse and extreme cases, leading to gaps in the model\\u2019s capabilities in certain contexts.\", \"While the paper compares ToolACE to several other function-calling models, the comparison is often superficial. The benefits of using ToolACE versus simpler data augmentation techniques are not well articulated, and it is unclear how much of the improvement can be attributed to the synthesis method versus the increased volume of data.\", \"The paper claims that ToolACE-8B is competitive with GPT-4 series models. However, it does not fully address the limitations of ToolACE-8B in terms of generalization and applicability to a broader range of tasks beyond function calling. A more detailed discussion of these limitations would provide a more balanced perspective.\", \"The font size in Figures is too small, which is unclear for readers.\"], \"questions\": \"Please refer to weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Responses to Weakness 1, 2, 3\", \"comment\": \"**W1:** **The evaluation scenarios are limited to synthetic function-calling tasks and benchmarks like BFCL and APIBank. 
The paper would benefit from more realistic evaluations or applications in real-world tool usage scenarios. This would better demonstrate ToolACE's utility beyond controlled benchmark settings.**\\n\\n**A1:** We thank the reviewer for the comments. Concerning this question, we would like to clarify that:\\n\\n1. BFCL and APIBank are two widely adopted benchmarks to evaluate and compare LLMs' function-calling capability.\\n2. Most of the APIs in BFCL and APIBank are real APIs, and some of them can even be evaluated by actual execution (the Executable category in BFCL). Furthermore, the instances in the Live category of BFCL are collected from user-contributed function documentation and queries, \\\"to more faithfully measure the LLM's function-calling performance in real-world scenarios\\\" ([source](https://gorilla.cs.berkeley.edu/blogs/12_bfcl_v2_live.html)). Considering this, we believe the two benchmark evaluations can reflect ToolACE's utility in real-world scenarios, at least to some extent.\\n\\nMoreover, ToolACE has already been deployed in a real-world travel-planning scenario, serving real online users, with an accuracy of 84\\%. We believe this evidence can better demonstrate the effectiveness of ToolACE.\\n\\n**W2:** **The self-guided dialog generation process heavily relies on the LLM being trained to evaluate the complexity of generated data. This creates a circular dependency where the model is used both as a learner and an evaluator, which may introduce bias in the complexity estimation. More external validation or use of independent evaluators would make the results more robust.**\\n\\n**A2:** The self-guided method is based on the assumption that the most appropriate training data is contingent on the current capabilities of the trained model. Thus, we intentionally utilize the model both as a learner and an evaluator.
While previous research has drawn similar conclusions [1,2], we acknowledge that additional validation is beneficial. Hence we conduct an additional experiment in our revised manuscript in Appendix E.3, where we use an independent model (Qwen1.5-7B-Chat, selected to maintain a comparable size for fairness) as the evaluator. The results, presented in the table below, indicate that using the model being trained as the complexity evaluator offers more accurate guidance, leading to improved performance on the BFCL benchmark.\\n\\n**Table 1. Ablation study on complexity evaluator.**\\n| Evaluator | Learner | Overall | Non-live(A) | Non-live(E) | Live(A) | Multi turn | Rel | Irrel|\\n| -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- |\\n| Qwen1.5-7B-Chat | LLaMA-3.1-8B-Instruct |57.61 | 90.42 | 85.88 | 71.30 |13.12 | 87.80 | 78.12 |\\n| LLaMA-3.1-8B-Instruct | LLaMA-3.1-8B-Instruct | 59.22 | 89.27 | 90.07 | 73.21 | 14.37 | 85.37 | 83.81 |\\n\\n\\n[1] Du et al. 2023, MoDS: Model-oriented Data Selection for Instruction Tuning \\n[2] Ren et al. 2024, Learning or Self-aligning? Rethinking Instruction Fine-tuning\\n\\n**W3:** **The use of complexity-based sampling to dynamically adjust dialog difficulty has merit but may lead to unintended biases, as data that is either too simple or too complex is filtered out. The approach may fail to fully explore the impact of diverse and extreme cases, leading to gaps in the model\\u2019s capabilities in certain contexts.**\\n\\n**A3:** As shown in Section 3.3.2, our results indicate that the model performs optimally when trained on a medium-complexity subset of data, suggesting that both overly simple and overly complex data are less effective for model training. While the model\\u2019s foundational knowledge likely stems from its pre-training stage, we believe that the key to improving model performance during fine-tuning is identifying the most appropriate training set. 
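To make this selection concrete, the easy/medium/hard split referred to in Section 3.3.2 can be sketched as follows. This is a toy illustration: the synthetic scores stand in for the actual Eq.(1) complexity, the bucket size is shrunk from 60,000 to 3, and the helper name `split_by_complexity` is our own.

```python
def split_by_complexity(samples, scores, n_per_bucket):
    """Sort samples by complexity score (ascending) and split into
    easy / medium / hard subsets, mirroring the 60,000-sample splits
    described in Sect. 3.3.2 (here with toy sizes)."""
    order = sorted(range(len(samples)), key=lambda i: scores[i])
    easy = [samples[i] for i in order[:n_per_bucket]]
    medium = [samples[i] for i in order[n_per_bucket:2 * n_per_bucket]]
    hard = [samples[i] for i in order[2 * n_per_bucket:3 * n_per_bucket]]
    return easy, medium, hard

# Synthetic complexity scores; in the paper these come from Eq.(1),
# evaluated by the model being trained.
samples = [f"dialog_{i}" for i in range(9)]
scores = [0.42, 0.05, 0.91, 0.33, 0.58, 0.12, 0.77, 0.25, 0.66]
easy, medium, hard = split_by_complexity(samples, scores, 3)
```

Training on the middle band is what "medium-complexity subset" refers to above; the real pipeline derives the scores from the learner model rather than fixed numbers.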
If the data is too simple, the model has already mastered the corresponding ability and has no need to learn it again. If the data is too complex, the model's predictions can still be incorrect even if similar data was seen during the fine-tuning phase.\\n\\nWe acknowledge that non-uniform sampling can introduce bias, such as causing the model to struggle with learning difficult examples after one round of training, effectively remaining in its \\\"comfort zone.\\\" However, based on previous studies [3], iteratively training the model using samples generated by the model itself has been shown to extend the model's knowledge boundary, achieving effects akin to curriculum learning. In future work, we will further explore the proposed complexity-based sampling strategy to perform iterative training and sampling over multiple rounds, thereby progressively enhancing the model's generalization capability on more challenging samples.\\n\\n[3] Huang, Jiaxin, et al. \\\"Large language models can self-improve.\\\" EMNLP (2023).\"}", "{\"title\": \"Examples of BFCL and API-Bank\", \"comment\": \"### BFCL example
```
System: This year is 2023.
Here is a list of functions in JSON format that you can invoke:
[{"name": "get_weather_data", "description": "Fetches weather data from the Open-Meteo API for the given latitude and longitude.", "parameters": {"type": "dict", "properties": {"coordinates": {"type": "array", "items": {"type": "float"}, "description": "The latitude and longitude of the location."}}, "required": ["coordinates"]}},
 {"name": "calc_binomial_probability", "description": "Calculates the probability of getting k successes in n trials.", "parameters": {"type": "dict", "properties": {"n": {"type": "integer", "description": "The number of trials."}, "k": {"type": "float", "description": "The number of successes."}, "p": {"type": "float", "description": "The probability of success."}}, "required": ["n", "k", "p"]}}]
```

### API-Bank example
```
System: Based on the given API description and the existing conversation history 1..t, please generate the API request that the AI should call in step t+1 and output it in the format of [ApiName(key1='value1', key2='value2', ...)]; replace the ApiName with the actual API name, and replace the key and value with the actual parameters. Your output should start with a square bracket "[" and end with a square bracket "]". Do not output any other explanation or prompt or the result of the API call in your output.

Input:
AI: [AI's plain text]
...
Expected output: [ApiName(key1='value1', key2='value2', ...)]

API descriptions: [{"name": "GetUserToken", "description": "Get the user token by username and password.", "input_parameters": {"username": {"type": "str", "description": "The username of the user."}, "password": {"type": "str", "description": "The password of the user."}}, "output_parameters": {"token": {"type": "str", "description": "The token of the user."}}},
 {"name": "AddAlarm", "description": "The API for setting an alarm includes a parameter for the alarm time.", "input_parameters": {"token": {"type": "str", "description": "User's token."}, "time": {"type": "str", "description": "The time for alarm. Format: %Y-%m-%d %H:%M:%S"}}, "output_parameters": {"status": {"type": "str", "description": "success or failed"}}}]

User: 8 am tomorrow. Today is 2021-10-13.
Assistant (expected output): [AddAlarm(token="z9x8c7v6b5n4m3q2w1", time="2021-10-14 08:00:00")]
Tool: [AddAlarm Response: "success"]
Assistant: An alarm has been set for 8 am tomorrow.
```
\"}", "{\"comment\": \"Thank you for your response. However, it did not address my concern that \\\"the self-guided dialog generation process heavily relies on the LLM being trained to evaluate the complexity of the generated data.\\\"\\n\\n\\\"A more detailed discussion of these limitations would provide a more balanced perspective\\\" remains unaddressed.\\n\\nI would prefer to retain my scores.\"}", "{\"title\": \"Responses to Weakness 1, 2\", \"comment\": \"**W1:** **The paper shows high complexity with many unclear details.
For example, how are the varying complexity levels (easy, medium, hard) actually defined?**\\n\\n**A1:** Thank you for raising this point. The three levels of complexity (easy, medium, hard) are defined based on the complexity scores of the training samples, which are computed using Eq.(1). We first calculate the complexity of each sample and then sort all samples in ascending order of their complexity. The top 60,000 samples are classified as the \\\"hard\\\" subset, the bottom 60,000 as the \\\"easy\\\" subset, and the middle 60,000 as the \\\"medium\\\" subset. The detailed explanation has been included in our revised manuscript in Sect. 3.3.2, paragraph 1.\\n\\nMoreover, to make our manuscript clearer, we provide extended experimental details in Appendix C.1, and an example of the prompt and the response for the two benchmarks in Appendix F.\\n\\n**W2:** **While the paper repeatedly mentions Nested, Parallel, Dependent, and Multi-type, it doesn't analyze their connection to actual performance or conduct ablation studies.**\\n\\n**A2:** Thank you for your insightful comment. While we emphasize the significance of incorporating diverse data types such as Nested, Dependent, and Multi-type, there is currently no publicly available evaluation set that specifically addresses these categories. However, we acknowledge the importance of exploring the relationship between these data types and overall function-calling performance. We have conducted additional experiments in our revision in Appendix E.2. Specifically, we maintain the same overall dataset size and selectively replace samples from the Nested, Parallel, Dependent, and Multi-type categories with samples from other data types. We then train the Llama3.1-8B model using these modified subsets and evaluate its performance on the BFCL benchmark. The results are presented below.\\n\\nThe findings show that removing parallel execution data significantly impairs the model's ability to invoke multiple tools concurrently.
This leads to a notable decrease in performance on Non-live AST and execution tasks, which rely heavily on parallel tool usage. Furthermore, excluding multi-type samples hampers the model's ability to detect when the candidate tools are irrelevant to the question, resulting in only 6.99\\\\% accuracy in irrelevance detection. The model's ability to handle multi-turn function calls is also impaired. In multi-turn testing, the models sometimes are required not to call functions but to ask clarifying questions instead.\\n\\nIn contrast, removing nested and dependent samples has a relatively minor effect on the model's tool-using ability in the BFCL task. Few test samples require nested arguments, and almost none involve dependent tool usage. However, including Dependent and Nested data types contributes to greater data diversity, leading to slight improvements in overall performance.\\n\\n**Table 1. Ablation study on various types of data in ToolACE datasets.**\\n| Data | Overall | Non-live(A) | Non-live(E) | Live(A) | Multi turn | Rel | Irrel|\\n| -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- |\\n| w.o. Parallel | 50.60 | 74.75 | 77.30 | 72.19 | 1.75 | 78.05 | 85.05 | \\n| w.o. Dependent | 57.97 | 87.63 | 85.55 | 71.17 | 15.50 | 80.49 | 85.62 |\\n| w.o. Nested | 57.19 | 85.46 | 84.48 | 70.19 | 15.38 | 78.05 | 86.45 |\\n| w.o. Multi-type | 42.71 | 89.46 | 85.50 | 47.89 | 1.75 | 95.12 | 06.99 |\\n| ToolACE(2.5w) | 58.19 | 86.96 | 84.73 | 71.35 | 16.50 | 75.61 | 86.42 |\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Thank you\", \"comment\": \"Thank you for re-evaluating our manuscript and for your thoughtful feedback. We appreciate your recognition of the significance of our data synthesis strategy and your consideration of its impact on the field. 
Your insights regarding the path to code have been highly inspiring, and we plan to delve deeper into this direction in our future work and hope to present more interesting results.\"}", "{\"comment\": \"Thank you for your responses. I have updated my score.\"}", "{\"title\": \"Response to Question 1\", \"comment\": \"**Q1:** **Performances obtained by finetuning the same base model on Table 1 datasets e.g. ToolLLM.**\\n\\n**A1:** Thank you for your valuable suggestion. We did not include results for using the datasets in Table 1 to train the same model primarily because most related works optimize their fine-tuned models by choosing base models, data, and training settings. However, we recognize that including these results can further demonstrate the value of our generated data, which is a good complement to the existing results. Hence we have conducted additional experiments under a controlled setting in our revised version in Appendix E.1. Specifically, we compare the performances of using ToolACE and other state-of-the-art function calling data mentioned in Table 1 (ToolLLM and xLAM) to train the same base model (Llama3.1-8B). All data are uniformly sampled to 25,000 for a fair comparison. The corresponding results on BFCL are presented in the table below. These results demonstrate that the model trained with our data consistently outperforms the other models in all categories, further validating the effectiveness of our approach. Notably, the model trained on the xLAM dataset exhibits relatively poor performance in irrelevance detection, likely due to a lack of diverse sample types, such as cases where provided tools cannot solve the task. Moreover, the ToolLLM dataset, which primarily focuses on multi-step and dependent cases, demonstrates weak generalization on the BFCL benchmark.\\n\\n\\n**Table 1. 
Performances of training with different training datasets.**\\n| Training data | Overall | Non-live(A) | Non-live(E) | Live(A) | Multi turn | Rel | Irrel|\\n| -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- |\\n| ToolLLM(2.5w) | 24.90 | 42.46 | 36.36 | 39.45 | 0.00 | 100.00 | 4.41 |\\n| xLAM(2.5w) | 40.51 | 81.94 | 81.77 | 43.18 | 4.38 | 73.17 | 11.87 |\\n| ToolACE(2.5w) (Ours) | **58.19** | **86.96** | **84.73** | **71.35** | **16.50** | **75.61** | **86.42** |\"}", "{\"title\": \"Responses to the concerns\", \"comment\": \"Thank you for your response. We would like to take this chance to make further clarifications about the two concerns. If there are any remaining unclear points, we would be more than happy to further clarify them during the discussion.\\n\\n**Concern about the Data Complexity Evaluator**\\n\\nTo address your concerns regarding our data complexity evaluator, we have extended our analysis by conducting experiments using Qwen1.5-7B-Chat and Qwen1.5-14B-Chat as complexity evaluators. The results, summarized in Table 1 below, indicate that using the model being trained as the complexity evaluator provides more effective guidance, leading to improved performance on the BFCL benchmark.\\n\\nNotably, when the complexity scores are assessed using a more advanced model (e.g., Qwen-14B), certain simpler training samples\\u2014those marked as \\\"easy\\\" by the evaluator but not necessarily by the learner\\u2014may be excluded. This exclusion leads to slight performance gains on more challenging tasks (e.g., Live AST) but results in performance degradation on Non-live AST tasks$^1$. 
Conversely, when the evaluator is less capable than the learner, the retained samples tend to be relatively easier for the learner, leading to better results on Non-live AST tasks while causing a decline in performance on Live AST tasks.\\n\\nWe acknowledge that our current self-guided evaluator may not be the ideal solution for identifying the most suitable training set. For instance, it may be sensitive to model size, and scaling it up for larger datasets may present challenges. We have also discussed the potential bias introduced in W3 in our previous responses. However, the experiments we conducted demonstrate that the current approach is both effective and reasonably appropriate. If these concerns have not yet been fully addressed, we would appreciate further clarification on the specific aspects you believe require more attention.\\n\\n**Table 1. Ablation study on complexity evaluator.**\\n| Evaluator | Learner | Overall | Non-live(A) | Non-live(E) | Live(A) | Multi turn | Rel | Irrel|\\n| -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- |\\n| Qwen1.5-7B-Chat | LLaMA-3.1-8B-Instruct |57.61 | 90.42 | 85.88 | 71.30 |13.12 | 87.80 | 78.12 |\\n| Qwen1.5-14B-Chat | LLaMA-3.1-8B-Instruct |57.67 | 87.98 | 87.02 | 73.30 |11.75 | 87.80 | 84.00 |\\n| LLaMA-3.1-8B-Instruct | LLaMA-3.1-8B-Instruct | 59.22 | 89.27 | 90.07 | 73.21 | 14.37 | 85.37 | 83.81 |\\n\\n\\n> $^1$ Live AST tasks involve rarer and more complex functions compared to Non-live AST tasks, as detailed in BFCL's documentation.\\n\\n**Concern about Analysis of Limitations**\\n\\nWhile we have conducted extensive experiments to demonstrate the effectiveness of our synthesized dataset in enhancing functional-calling performance, several challenges remain.\\n\\n- Computational Complexity: The data complexity evaluation is influenced by the size of the model being trained, which limits scalability as both the model size and the number of training samples increase. 
Despite this, we believe the approach we have evaluated remains an effective strategy.\\n\\n- Model Performance Limitations: Although our model shows strong performance in functional calling, it still lags behind GPT-4 in other capabilities. To compare general capabilities, we have evaluated GPT-4 across several benchmarks, with results presented in Figure 8 of the revised manuscript. As expected, our ToolACE-8B model performs below GPT-4 in areas such as reasoning, mathematics, and coding. This is primarily due to the scale of the model and its training corpus. When compared to its base model, LLaMA-3.1-8B-Instruct, ToolACE-8B demonstrates substantial improvements in functional calling with minimal negative impact on other capabilities. While this success highlights the potential of specialized models in one specific domain, the challenge of simultaneously enhancing multiple capabilities, alongside functional-calling performance, remains an open question.\\n\\nThe analyses of these limitations are all included in our revised manuscript in Appendix H.\"}", "{\"title\": \"Responses to Weaknesses and Questions\", \"comment\": \"**W1:** **Please include a complete example of a prompt and LLM response in the appendix so that readers can intuitively understand how the process works in practice.**\\n\\n**A1:** We thank the reviewer for the helpful suggestion to make our manuscript clearer. A revision has been updated in the system with a complete example of a prompt and an LLM response, as shown in Appendix F. We also include a detailed explanation of the two benchmarks in Appendix C.1.\\n\\n**W2:** **The paper lacks clarity and involves overly complex technical concepts. The additional concepts introduced, such as Self-Evolution, Self-Guided, Dual-Layer, and Multi-Agent, make the main idea harder to discern, leading to confusion for the reader.
While the authors may believe these terms add richness to the paper, they detract from its central focus.**\\n\\n**A2:** Thank you for your valuable feedback. We understand your concern about the complexity and the introduction of new concepts in the paper. Our central focus, as reflected in the title, is to improve the accuracy, complexity, and diversity of synthetic data to enhance the function-calling capabilities of LLMs. To achieve this, we propose specific methods for each dimension:\\n\\n- **Accuracy:** We introduce Dual-Layer data verification, which improves synthetic data accuracy by using both rule-based and model-based checkers.\\n- **Complexity:** We propose Self-Guided dialog generation, where a newly defined complexity evaluation metric helps guide the dialog generation process.\\n- **Diversity:** We present Tool Self-Evolution Synthesis, a method that generates a large scale of diverse tools through a self-evolution process to increase data diversity.\\n\\nRegarding Multi-Agent dialog generation, this is part of our Self-Guided approach, and while it is a widely used technique for generating synthetic dialogs, it is not one of our primary contributions.\\n\\nWe hope this clarifies the structure of our approach and how each method contributes to the overall goal.\\n\\n**W3:** **In the ablation study, it would be valuable to compare the Retrieval-Augmented Generation (RAG) approach for retrieving task-relevant tools with In-Context Learning to optimize tool usage. Given the same level of engineering effort, explore whether these methods could achieve results comparable to fine-tuning.**\\n\\n**A3:** In this paper, we focus primarily on the **tool calling** capability of LLMs, where candidate tools are already provided as part of the model's input. 
While enhancing tool retrieval through RAG methods could be valuable in real-world applications, it is outside the scope of this study, as it addresses a different aspect of the overall tool-usage pipeline (i.e., tool retrieval vs. tool calling). Regarding the comparison between in-context learning and fine-tuning for optimizing tool usage, we acknowledge that in-context learning can be a viable alternative, but its effectiveness is highly dependent on the model\\u2019s initial capabilities. We have added experimental comparison and discussion on it in Appendix G of our revised manuscript. Specifically, we use the training samples as few-shot candidates and retrieve the top 3 most relevant samples according to the user's question and the provided tools with the BGE model to guide in-context learning. The results below show that few-shot in-context learning not only underperforms fine-tuning in BFCL but also falls short of the zero-shot setting. In many cases, the model is misled by the tools in the few-shot examples, selecting those instead of the tools in the test sample, which further exacerbates the model's hallucination phenomenon, such as the example illustrated in Figure 16 in Appendix G.\\n\\n**Table 1. Comparison between in-context learning and finetuning**\\n| Model | Non-live(A) | Non-live(E) | Live(A) | Rel | Irrel|\\n| -------- | -------- | -------- | -------- | -------- | -------- |\\n| Llama-3.1-8B-Instruct (Zero-shot) | 86.23 | 83.48 | 48.02 | 82.93 | 23.05 |\\n| Llama-3.1-8B-Instruct (3-shot) | 58.81 | 53.32 | 36.83 | 82.93 | 23.66 |\\n| ToolACE-8B (Ours) | 89.27 | 90.07 | 73.21 | 85.37 | 83.81 | \\n\\n\\n**Q1:** **In Figure 3, the \\\"without model\\\" approach occasionally outperforms the \\\"with model\\\" approach. Please provide an analysis to explain the reasons for this phenomenon.**\\n\\n**A4:** The model verification layer depends on the model's ability to identify errors in the data. 
As a result, false negatives\\u2014where data without any errors is incorrectly flagged as erroneous\\u2014can occur. These false negatives may cause occasional performance drops in some categories. Despite this, the performance decline in these cases is generally not significant, and we consider it to be within an acceptable tolerance. The overall improvement provided by the model verification layer still outweighs these occasional discrepancies.\"}", "{\"comment\": \"The reply didn't address all of my concerns, in particular, I still think the path seems to be directly to code, not json. But after a long period of thinking, I think this paper brings a new data synthesis strategy, although it may be a bit complicated, which has significant practical significance and is good for the field. I decided to raise my score.\"}", "{\"title\": \"Responses to Weakness 3, 4 and Question 1\", \"comment\": \"**W3:** **The relationship between data scale and performance is not demonstrated, making it difficult to determine which aspects actually contributed to the effectiveness.**\\n\\n**A3:** Thanks for the valuable point. We recognize the importance of a more fair comparison and have conducted additional experiments under a controlled setting in revision in Appendix E.1. Specifically, we compare the performances of using ToolACE and other state-of-the-art function calling data (ToolLLM and xLAM) to train the same base model (Llama3.1-8B). All data are uniformly sampled to 25,000 for a fair comparison. The corresponding results on BFCL are presented in the table below. These results demonstrate that the model trained with our data consistently outperforms the other models in all categories, further validating the effectiveness of our approach. Notably, the model trained on the xLAM dataset exhibits relatively poor performance in irrelevance detection, likely due to a lack of diverse sample types, such as cases where provided tools cannot solve the task. 
Moreover, the ToolLLM dataset, which primarily focuses on multi-step and dependent cases, demonstrates weak generalization on the BFCL benchmark.\\n\\n**Table 2. Performances of training with different training datasets.**\\n| Training data | Overall | Non-live(A) | Non-live(E) | Live(A) | Multi turn | Rel | Irrel|\\n| -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- |\\n| ToolLLM(2.5w) | 24.90 | 42.46 | 36.36 | 39.45 | 0.00 | 100.00 | 4.41 |\\n| xLAM(2.5w) | 40.51 | 81.94 | 81.77 | 43.18 | 4.38 | 73.17 | 11.87 |\\n| ToolACE(2.5w) (Ours) | **58.19** | **86.96** | **84.73** | **71.35** | **16.50** | **75.61** | **86.42** |\\n\\n**W4:** **There are concerns about potential data leakage between BFCL and TSS - can the authors prove there isn't significant data leakage?**\\n\\n**A4:** We employ both the N-gram-based method and the similarity-based method to show that there is no significant data leakage in our ToolACE dataset.\\n\\n- N-gram-based method: Following the method used in Llama2, we consider a token to be contaminated if it appears in any token n-gram longer than 10 tokens in both the evaluation sample and the training set. A tool is classified as leaked if more than 10% of the tokens in its JSON string are contaminated. Under this setting, only a negligible percentage of 0.148% tools in the ToolACE dataset are leaked, compared to 0.610% in xLAM dataset.\\n- Similarity-based method: We define a tool as leaked if the cosine similarity between the given tool and any tool in the evaluation dataset exceeds 0.9. We choose the BAAI/bge-large-en in huggingface as the encoder to get representations of all tools. 
Using this method, the proportion of leaked tools in our dataset is only 0.974%, compared to 5.214% in the xLAM dataset.\\n\\n\\n**Q1:** **Since Nested, Parallel, Dependent, and Multi-type are essentially subsets of programming language capabilities, if they are indeed effective, does this suggest that direct training with programming languages (like Python) would be better? Furthermore, is the Data Interpreter approach of directly exposing tool interfaces through Python a better solution? This needs further analysis.**\\n\\n**A5:** We appreciate this insightful suggestion and agree that incorporating programming data like Python has the potential to enhance the model's function-calling capabilities. Previous research has similar ideas\\u2014for example, xLAM-7B uses DeepSeek-Coder-7B-instruct-v1.5 as its base model instead of a more general-purpose instruction model. While this paper focuses primarily on generating synthetic dialogue data for function calling, exploring the possibility of translating our synthetic data into Python code snippets would be an interesting and exciting research topic. We plan to investigate it in our future work.\"}" ] }
8DuJ5FK2fa
Trained Models Tell Us How to Make Them Robust to Spurious Correlation without Group Annotation
[ "Mahdi Ghaznavi", "Hesam Asadollahzadeh", "Fahimeh Hosseini Noohdani", "Soroush Vafaie Tabar", "Hosein Hasani", "Taha Akbari Alvanagh", "Mohammad Hossein Rohban", "Mahdieh Soleymani Baghshah" ]
Classifiers trained with Empirical Risk Minimization (ERM) tend to rely on attributes that have high spurious correlation with the target. This can degrade the performance on underrepresented (or 'minority') groups that lack these attributes, posing significant challenges for both out-of-distribution generalization and fairness objectives. Many studies aim to enhance robustness to spurious correlation, but they sometimes depend on group annotations for training. Additionally, a common limitation in previous research is the reliance on group-annotated validation datasets for model selection. This constrains their applicability in situations where the nature of the spurious correlation is not known, or when group labels for certain spurious attributes are not available. To enhance model robustness with minimal group annotation assumptions, we propose Environment-based Validation and Loss-based Sampling (EVaLS). It uses the losses from an ERM-trained model to construct a balanced dataset of high-loss and low-loss samples, mitigating group imbalance in data. This significantly enhances robustness to group shifts when equipped with a simple post-training last layer retraining. By using environment inference methods to create diverse environments with correlation shifts, EVaLS can potentially eliminate the need for group annotation in validation data. In this context, the worst environment accuracy acts as a reliable surrogate throughout the retraining process for tuning hyperparameters and finding a model that performs well across diverse group shifts. EVaLS effectively achieves group robustness, showing that group annotation is not necessary even for validation. It is a fast, straightforward, and effective approach that reaches near-optimal worst group accuracy without needing group annotations, marking a new chapter in the robustness of trained models against spurious correlation.
[ "Spurious Correlation", "Group Robustness", "Zero Group Annotation", "Distribution Shift", "Out-of-Distribution Generalization" ]
Reject
https://openreview.net/pdf?id=8DuJ5FK2fa
https://openreview.net/forum?id=8DuJ5FK2fa
ICLR.cc/2025/Conference
2025
{ "note_id": [ "w7xbRS2FBx", "vdymuoTIGO", "tnbXZeGc0q", "qM1UjXPh3r", "nRIDrNgNDw", "lSychAqAg2", "iQacYYWrP9", "YMry7P7HNO", "Ra7pXPkibc", "QqUxb6mnzF", "OAT7M3yMd2", "LdGVNkTHqB", "KgQfHMEZkW", "JduwN2yClJ", "IV6HziJSmO", "Gv5N8bDPOP", "Eh8QvMMS7y", "7h745UVwlL", "6qmNJ9AJaH", "3rVf47y2Cu" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_comment", "official_review", "official_review", "official_comment" ], "note_created": [ 1731969375550, 1732028603605, 1732622333876, 1730433954361, 1737524174392, 1731976343571, 1732317396239, 1731955213759, 1732029552179, 1731954720332, 1731972695851, 1732984848497, 1731955724926, 1732032996770, 1730883423119, 1734379948948, 1732563915486, 1730594395756, 1730411217750, 1732565182965 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12227/Authors" ], [ "ICLR.cc/2025/Conference/Submission12227/Authors" ], [ "ICLR.cc/2025/Conference/Submission12227/Authors" ], [ "ICLR.cc/2025/Conference/Submission12227/Reviewer_9oxU" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12227/Authors" ], [ "ICLR.cc/2025/Conference/Submission12227/Reviewer_3v9E" ], [ "ICLR.cc/2025/Conference/Submission12227/Authors" ], [ "ICLR.cc/2025/Conference/Submission12227/Authors" ], [ "ICLR.cc/2025/Conference/Submission12227/Authors" ], [ "ICLR.cc/2025/Conference/Submission12227/Authors" ], [ "ICLR.cc/2025/Conference/Submission12227/Authors" ], [ "ICLR.cc/2025/Conference/Submission12227/Authors" ], [ "ICLR.cc/2025/Conference/Submission12227/Authors" ], [ "ICLR.cc/2025/Conference/Submission12227/Reviewer_jr76" ], [ "ICLR.cc/2025/Conference/Submission12227/Area_Chair_yFVJ" ], [ "ICLR.cc/2025/Conference/Submission12227/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission12227/Reviewer_3i2u" ], [ "ICLR.cc/2025/Conference/Submission12227/Reviewer_3v9E" ], [ "ICLR.cc/2025/Conference/Submission12227/Reviewer_3v9E" ] ], "structured_content_str": [ "{\"title\": \"Reply to Reviewer 3i2u - Part 1/3\", \"comment\": \"Dear Reviewer 3i2u,\\n\\nWe appreciate your positive feedback on the effectiveness of our scheme and the clarity of our paper.\\n\\n---\\n## Weakness 1\\nEVaLS is a scheme designed to ensure annotation-free robustness of trained models against spurious correlations (L74\\u201376). In this scheme, modules could be replaced while completely maintaining the overall framework and contribution. As shown in Appendix F, we have conducted comprehensive ablation studies to evaluate these components:\\n- Loss-based sampling alternatives: Examined in Sec. F.1 and F.2.\\n- Replacing EIIL with other environment inference techniques: Detailed in Sec. F.3.\\n\\nIt is important to clarify that our work has not proposed using or combining previous methods as a novelty. Instead, our contributions lie in demonstrating the following points (L106\\u2013120):\\n1. Contrary to prior approaches (Appendix A, L728\\u2013750), we show that exact group annotations are not a prerequisite for robustness to spurious correlations, and identifying environments with group shifts (Sec. 3.2) proves effective for model selection.\\n2. When ground-truth annotations are unavailable or spurious correlations are unknown, leveraging the trained model\\u2019s learned representations can effectively enhance robustness to spurious correlations (Figure 4(b) and Table 1\\u2014UrbanCars).\\n3. Loss-based sampling (Sec. 3.1) is not only an effective method for ensuring robustness to group shifts (compared to other methods such as those in JTT and SELF; see EVaLS and EVaLS-GL in Table 1), but it is also supported by a theory of data balancing with general assumptions (Sec. 
3.3 - Theoretical Analysis).\\n---\\n## Weakness 2\\n### Theoretical Analysis\\n\\nWe **do not** assume that the losses on the majority and minority samples follow Gaussian distributions. As we state in L291-292 (311-312 in the revision), \\u201cWe assume a general assumption that in feature space (output of $g_\\\\theta$), samples from the minority and majority of a class are derived from Gaussian distributions,\\u201d where $g_\\\\theta$ is the feature extractor (L183-184 (208-209 in the revision)). To our knowledge, this is one of the most general assumptions in the theoretical analysis of majority-minority dynamics. It does not even assume different dimensions for core and spurious features (as [6] or [7] do to formulate spurious correlation for other goals and analysis). By this assumption, the Gaussian distribution of our logits (not losses) is derived (Lemma D.1). As mentioned in L294-296 (314-316 in the revision), the order of samples in logit space and loss space is monotonic within each class, and thus the tails are equivalent in these two spaces. Thus, loss-based sampling is backed by our proposed theoretical analysis.\\n\\nMoreover, please note that, as we state in L285-287 (306-308 in the revision) and as observed in Proposition D.1, it is not obvious and always the case that loss-based sampling could create two group-balanced sets of data. Condition (i) (Eq. (5) in L924) and Condition (ii) (L926-943) are necessary and sufficient conditions for this purpose. 
Whether these conditions are met depends on the classifier, distance between distributions and their variances in feature space (and consequently in logit space), and the amount of spurious correlation.\\n\\nAdditionally, as stated in L310-311 (330-331 in the revision), we have added a practical justification in Appendix D.2 to show that the conditions of Proposition 3.1 (and more completely in D.1) are met in practice.\\n\\n### Comparison to [2]\\n\\n[2] proposes SELF, which has been compared to our approach in Table 1. It uses a different criterion for sample selection, which, based on results in Table 1, is less effective than our loss-based sampling. More precisely, loss-based sampling (EVaLS-GL) outperforms disagreement-based sample selection by SELF with a similar level of group supervision in 4 of 5 benchmarks. Moreover, as stated in L371-372 (390-391 in the revision), SELF requires training information (an early-stopped model), in contrast to our approach and methods like DFR and AFR (indicated by $\\\\star$ in Table 1).\\n\\n[6] Nagarajan et al., Understanding the failure modes of out-of-distribution generalization, ICLR, 2021. \\n\\n[7] Sagawa et al., An investigation of why overparameterization exacerbates spurious correlations, PMLR, 2020.\"}", "{\"title\": \"Reply to Reviewer 9oxU - Part 1/3\", \"comment\": \"Dear Reviewer 9oxU,\\n\\nThank you for your valuable feedback and acknowledgment of the effectiveness of our work. We appreciate your attention to various aspects of the framework. Below, we aim to clarify confusing points and provide a comprehensive discussion and analysis on the questions you have raised.\\n\\n---\\n## Weakness 1\\n### Compared to Sampling Method in JTT\\nPlease refer to Section F.2, *Comparison of High-Loss and Misclassified-Sample Selection*.\\n\\nHaving a hyperparameter ($k$) to control the number of selected samples from high loss samples provides more flexibility to LS compared to JTT. 
There is a tradeoff between the purity and the number of selected high-loss samples: our observations (Figure 2 (3 in the revision)) demonstrate that minority samples are more commonly found among those with high loss in the ERM model. As the number of selected high-loss samples increases, the proportion of minority samples among them decreases. However, selecting more samples could improve the overall training for those samples. The flexibility to choose among various numbers of selected samples allows EVaLS to find an optimal point in this tradeoff. Sensitivity to the selection of $k$ is shown in Figure 9 of the paper. Choosing only misclassified samples cannot handle this tradeoff effectively, particularly when the number of misclassified samples is either too high or too low. See Table 11 for performance comparison.\\n\\n### Compared to SELF\\nSELF uses a different criterion for selecting minority samples. It relies on the disagreement between the outputs of a trained model and an early-stopped model, leveraging the observation that models behave differently with respect to spurious correlations during the final and early stages of training. In contrast, we base our approach on the observation that minority samples are more prevalent among high-loss samples than low-loss ones.\\nBoth approaches yield effective results. However, loss-based sampling (EVaLS-GL) outperforms disagreement-based sample selection by SELF in 4 out of 5 benchmarks, under a similar level of group supervision. Moreover, as stated in L371\\u2013372 (390-91 in the revision), SELF requires additional training information (an early-stopped model), whereas loss-based sample selection does not (see methods with $\\\\star$ in Table 1). 
This makes SELF less feasible in many cases where the model or training data is large, and early-stopped checkpoints are unavailable.\"}", "{\"comment\": \"Thank you for your constructive review and consideration of our rebuttal.\\n\\nBest regards,\\n\\nAuthors of Submission #12227\"}", "{\"summary\": \"This paper introduces a method called EVaLS, which trains a classifier that is robust to spurious correlations without requiring group labels. The method constructs a balanced dataset based on loss values and utilizes environments for hyperparameter tuning. Experiments on several datasets demonstrate the effectiveness of the proposed method.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The method of selecting high-loss and low-loss data proposed in this paper effectively mitigated the problem of group imbalance, with both experiments and theoretical analysis effectively validating this point.\", \"weaknesses\": [\"The authors may need to provide a more detailed description of the advantages of loss-based sampling over other sampling methods. For instance, it would be beneficial to compare it specifically with methods like SELF, highlighting the unique benefits or efficiencies achieved by the proposed method.\", \"The authors' claim that their method 'completely eliminates the need for group annotations' as a primary contribution seems somewhat tenuous. The methodology presented in the paper can be categorized into two main components: Environment-based Validation (EV) and Loss-based Sampling (LS). Notably, LS appears to require group labels for hyperparameter tuning, while EV utilizes environment labels generated through Environment Inference for Invariant Learning (EIIL) as a stand-in for group labels. However, EIIL was originally proposed by Creager et al., 2021, and it was inherently designed to address scenarios where group labels are not available. 
Although Creager et al., 2021 used true group labels for model selection within GroupDRO in their experiments, it appears that environment labels could also be suitably employed for this step.\"], \"questions\": \"- From lines 452-457, the authors state that annotation-free methods can mitigate the impact of both labeled and unlabeled shortcut features more effectively. However, EVaLS-GL, which utilizes group labels, achieved better results than EVaLS. Could the authors provide their insights on this phenomenon?\\n- In the experiments, the performance of EVaLS and EVaLS-GL varied, with each having its strengths and weaknesses. Could the authors discuss the advantages and disadvantages of using inferred environments versus group labels based on these findings?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}
(2023) for the role of various layers of neural networks in different types of shifts). Since environment inference algorithms like EIIL (see Appendix F.3 for further investigation) depend on the last layers of a trained model, they cannot infer environments with notable group shifts (defined in L253-254 (257-259 in the revision)) in CivilComments and MultiNLI. The group shifts in CivilComments and MultiNLI (L464-466 (467-469 in the revision)) are significantly lower than those of the datasets reported in Table 2. Thus, the focus of environment-based validation is on datasets with spurious correlations (see limitations (L528-529 (530-531 in the revision)) and future works (L534-535 (537-539 in the revision)) in the Discussion section).\\n\\nNevertheless, loss-based sampling (LS) is effective for CivilComments and MultiNLI. Our EVaLS-GL, using ground-truth group labels for model selection and loss-based sampling for retraining, outperforms other methods with a similar level of group supervision on MultiNLI. Also, its WGA on CivilComments is also only $2.1$% lower than the state-of-the-art method (by DFR [1]).\\n\\n**Proportion of attributes in each class for CivilComments dataset (Table 9 in the paper)**\\n|Toxicity (Class)|Male|Female|LGBTQ|Christian|Muslim|Other Religions|Black|White|\\n|-|-|-|-|-|-|-|-|-|\\n|0|0.11|0.12|0.03|0.10|0.05|0.02|0.03|0.05|\\n|1|0.14|0.15|0.08|0.08|0.10|0.03|0.10|0.14|\\n\\n**MultiNLI training sets statistics**\\n|Group|Class (entailment)|Attribute|# Train Data|\\n|-|:-:|:-:|:-:|\\n|$G_1$|0|No Negations|57498 (28%)|\\n|$G_2$|0|Has Negations|11158 (5%)|\\n|$G_3$|1|No Negations|67376 (32%)|\\n|$G_4$|1|Has Negations|1521 (1%)|\\n|$G_5$|2|No Negations|66630 (32%)|\\n|$G_6$|2|Has Negations|1992 (1%)|\\n\\n--- \\n\\n## Question 3\\nNote that for last-layer retraining and model selection data, we follow the same scheme as in DFR [1] (please see their Sec. 
6, \u201cFeature Reweighting Improves Robustness,\u201d Hyper-parameters paragraph), which is also used in other works (see Appendix D: Training Details in SELF [2]). If the number of minority samples $n$ is larger than ~20 in the validation set, which is the case in all the datasets we have, the number of samples in $D_{MS}$ can be approximated by a normal distribution $\\\\mathcal{N}(\\\\mu = \\\\frac{n}{2}, \\\\sigma^2 = \\\\frac{n}{4})$. By calculating a confidence interval, the probability that fewer than $\\\\frac{n}{4}$ of the samples are in $D_{MS}$ or $D_{LL}$ is approximately $2\\\\Phi(-\\\\frac{\\\\sqrt{n}}{2})$, which would be very low. Thus, like every other case in machine learning, and specifically in our datasets and settings similar to DFR [1], random splits work properly and place an acceptable percentage of samples in each of $D_{MS}$ and $D_{LL}$ with a probability of almost 1. However, in the rare cases you described or other cases with a limited number of data points (as we state in Section 5: Discussion, L526-528 (529-530 in the revision), as a limitation), EVaLS will struggle if not enough minority samples exist in their retraining and model selection dataset.\"}", "{\"title\": \"Reply to Reviewer jr76 - Part 2/2\", \"comment\": \"## Weakness 3: EVaLS Consistency:\\n\\nNote that EVaLS does not utilize group annotation information regarding spurious correlation for model selection or (re)training, and thus, as we clarify in our Abstract L32-34, its **near-optimal** accuracy showcases its effectiveness. 
This is a significant advancement, particularly in scenarios where group labels are unavailable or unreliable (a common and challenging real-world scenario).\\n\\nIn other words, it is not expected that EVaLS, which does not utilize group-level data, will surpass all models that use this additional information. Nonetheless, the fact that EVaLS achieves comparable results under these constraints demonstrates its robustness and practical utility.\\n\\nAs you can see in Table 1, when group annotations are available for model selection, EVaLS-GL outperforms other methods with a similar level of group supervision on most datasets. However, compared to DFR and GDRO, which use higher levels of group supervision, it underperforms in most datasets while still achieving near-optimal worst group accuracy, as expected. It also outperforms DFR and GDRO in the case of the UrbanCars dataset, where annotations for one of its spurious attributes are not available during training (see L454-457 (457-460 in the revision)).\\n\\nFinally, the claim that EVaLS's better performance on certain datasets is due to chance contradicts our findings. Across diverse datasets with spurious correlations, EVaLS consistently performs well, supported by statistical measures to minimize random variance. Its comparable performance to methods utilizing group information underscores its strength.\"}", "{\"title\": \"Reply to Reviewer 9oxU - Part 2/3\", \"comment\": \"Before proceeding, we want to clarify some points that may have been misunderstood:\\n\\n#### **1. EVaLS-GL, which utilizes group labels, achieved better results than EVaLS**\\nThis is logical and not a contradiction. For every spurious attribute whose exact group annotations are available, using the annotations to create a balanced (re)training dataset or for model selection is the optimal approach to ensure robustness to correlation shifts in that spurious attribute. This can be observed by comparing DFR and GDRO with other methods in Table 1. 
In other words, it is not expected that EVaLS, which does not use group-level data, will surpass models that leverage this additional information for robustness to those groups. Nonetheless, as we state in our Abstract (L32\\u201334), the fact that EVaLS achieves **near-optimal** results under these constraints demonstrates its robustness and practical utility, especially since it can be used in scenarios where the aforementioned group annotations are not available.\\n\\nThis property makes EVaLS applicable to scenarios where **the spurious correlations a trained model relies on are unknown** during (re)training. Since no solution before us has solved this problem\\u2014prevalent across machine learning models in real-world applications\\u2014EVaLS represents a significant advancement in improving model robustness against spurious correlations. **Achieving near-optimal accuracy without group annotations, while also addressing robustness to unknown spurious correlations, makes EVaLS an effective solution for real-world scenarios where some spurious correlations are known and others are not.**\\n\\nTo more effectively illustrate this point, we proposed Dominoes-CMF. In Figure 4(b), you can see that EVaLS achieves higher worst-group accuracy across all 8 groups (as demonstrated in Figure 4(a) and explained in Section 2.2, *Robustness of a Trained Model to Unknown Shortcuts*) compared to EVaLS-GL, which requires group annotations for model selection, and DFR, which uses group annotations for both retraining and model selection.\\n\\n#### **2. LS appears to require group labels for hyperparameter tuning, while EV utilizes environment labels generated through Environment Inference for Invariant Learning (EIIL) as a stand-in for group labels**\\nEnvironments are not considered equivalent to groups. They are diverse subsets of the data. 
In our framework, what we require from environments is that they depict group shifts (see Section 3.2, L214\\u2013215 (259-260 in the revision) and L244\\u2013258 (263-269 and 285-292 in the revision); Abstract, L26\\u201330; Introduction, L73\\u201374 and L77\\u201380). For example, majority samples of a class may be more prevalent in two environments, but this is not problematic as long as there is a notable shift in the percentage of minority samples between them (see Table 2 for group shifts in the obtained environments across datasets).\\nConsequently, the performance of EVaLS demonstrates that LS does not necessarily require group labels for hyperparameter tuning. Instead, combining it with worst-environment accuracy, derived from environments with the above properties, can achieve near-optimal accuracy.\"}", "{\"title\": \"Reply to Reviewer jr76 - Part 1/2\", \"comment\": \"Dear Reviewer jr76,\\n\\nWe appreciate the feedback and the opportunity to clarify our work's contributions. We are delighted that you find our scheme novel and interesting.\\n\\n---\\n\\n## Weakness 1: Issue with High-Loss Minority Samples\\n\\nNote that EVaLS aims to enhance a trained model's robustness against **spurious correlations it relies on** (L523-524 (525-526 in the revision)). For a grouping based on an attribute \\\\(a\\\\), if the loss distribution for groups within a class\\u2014distinguished by the presence or absence of attribute \\\\(a\\\\)\\u2014does not reflect distinct populations, it indicates that the model does not rely on attribute \\\\(a\\\\) for its predictions.\\n\\nThe loss distribution of a trained model across different groups has **certain properties**. We would like to provide additional clarification and evidence on this matter:\\n1. 
**Our Empirical Observations**:\\nAs stated in Section 3.1 (L205-208 (249-252 in the revision)), our observations (Figure 2 (3 in the revision)) strongly support the assumption that minority samples are overrepresented among high-loss examples.\\n2. **Loss Dynamics Across Minority and Majority Groups**:\\nIn the majority groups, both the core and spurious patterns are aligned and predictive of the target. In these scenarios, both patterns lead to lower loss. Conversely, in minority groups, core and spurious patterns exhibit contradictory signs for predicting labels. As a result, if the model relies on spurious correlations, the loss for minority groups will be higher.\\n3. **Empirical Risk Minimization (ERM) Principles**:\\nIn an ERM framework, the learning process prioritizes minimizing the *overall loss* across the dataset. In the context of spurious correlation (Sec 2.1, L143-L156), *majority* groups dominate the data and contribute more to the overall loss. Thus, a model which is trained on the data with ERM should exhibit lower loss on these groups. Conversely, *minority* groups are *underrepresented* and may incur higher losses. If a majority group consistently exhibits high loss, it suggests a failure in the learning process to converge, contradicting ERM principles.\\n4. **Evidence in Literature**:\\nAs stated in L61-63, \\u201cthe loss value of the model, or its alternatives, are popular signals for recognizing minority groups,\\u201d which has been utilized in previous works (Liu et al., 2021a; Qiu et al., 2023; Nam et al., 2020; Noohdani et al., 2024). \\n---\\n \\n## Weakness 2: Environment Inference:\\n\\n\\nAs detailed in Appendix F.3, EVaLS could be used with other environment inference methods. As we stated in Section 1 (Introduction, L81-83) and demonstrated in Table 12, EVaLS-RC, which employs a random linear classifier for environment creation, outperforms EIIL on some datasets. 
These results further highlight that the effectiveness of EVaLS does not depend exclusively on EIIL.\\n\\nWhat EVaLS requires for model selection is a set of diverse environments that exhibit correlation shift (Abstract, L26-30; Introduction, L73-74 and L77-80; Sec 3.2, L214-215 (259-260 in the revision) and L244-258 (263-269 and 285-292 in the revision)). As discussed in L244-258 (263-269 and 285-292 in the revision) and analyzed in Table 2 of the main paper, modest group shifts in the environments are sufficient for EVaLS to perform well. It does not rely on finding \\\"perfect\\\" environments.\\n\\nAlso, as discussed in Appendix A (Related Work, L722-727), environment inference is a different area of research and a broader problem addressed in invariant learning (L713-727). \\n\\nWe should emphasize that EVaLS is a group annotation-free framework whose performance across various inference methods underscores its flexibility.\\n\\n---\"}", "{\"title\": \"Reply to Reviewer 3i2u - Part 2/3\", \"comment\": \"## Weakness 3\\n\\nNote that for last layer retraining and model selection data, we follow the same scheme as in DFR [1] (Please see their Sec. 6 \\u201cFeature Reweighting Improves Robustness,\\u201d Hyper-parameters paragraph), which is also used in subsequent works (see Appendix D Training Details in SELF [2]). The important point here is that last layer retraining data is not used for feature learning. It is used for reweighting the features that are learned by the feature extractor $g_\\\\theta$ (L183 (208 in the revision)) during training, which is a relaxed form of feature selection (see Figure 1 in DFR [1]).\\n\\nRegarding BAM [4], we noticed there is evidence of unreliable results, as you can see below.\\nBefore presenting it, note that as we review in Sec. 2. Problem Setting (L139-149), there exist several types of subpopulation shifts with different sources, including class-imbalance (L140-141) and spurious correlation (L142-157). 
Following Yang et al. (2023b): Given input $x = (x_{c}, x_{s}) \\\\in \\\\mathcal{X}$, which consists of core pattern $x_{c}$ and spurious pattern $x_{s}$, and label $y \\\\in \\\\mathcal{Y}$, we can write the classification model: $$ P(y|x) = \\\\frac{P(x|y)}{P(x)}P(y) = \\\\frac{P(x_{c}, x_{s}|y)}{P(x_{c}, x_{s})}P(y) = \\\\frac{P(x_{c}|y)}{P(x_{c})}\\\\frac{P(x_{s}|x_{c}, y)}{P(x_{s}|x_{c})}P(y) $$ Spurious correlation occurs when $P(x_{s}|x_{c}, y) \\\\gg P(x_{s}|x_{c})$, while class imbalance represents the scenario where $P(y) \\\\gg P(y')$ for $y, y' \\\\in \\\\mathcal{Y}$. Thus, such shifts can occur independently in datasets (see Section 2 Preliminaries L139-149).\\n\\nThe model selection criterion used by BAM [4], ClassDiff, is a criterion for enhancing robustness to class-imbalance, and it is not reasonable to use it for spurious correlation. To better illustrate why, note that if you have a completely random classifier, w.h.p. it achieves near-zero ClassDiff, while its WGA w.h.p. would not be much higher than random. In a more concrete example, setting their auxiliary coefficient ($\\\\lambda$) to 0, #Epochs in Stage 1 ($T$) to 0, and upweight factor ($\\\\mu$) to 1 in BAM is equivalent to simply training an ERM model. Using such hyperparameters to train a model on a dataset results in the same WGA as ERM. If the dataset contains a spurious correlation but has no class imbalance, both ClassDiff and WGA will be low.\\n\\nWe reviewed their Appendix B Training details and found that they used a limited range of hyperparameters (see BAM [4] Table 6). By avoiding scenarios that lead to lower (better) ClassDiff but lower (worse) WGA, the results are reported for settings that have a minimum level of bias amplification during initial training and a proper version of the initially trained model. 
\\nNote that the advantages of bias amplification (Nam et al., 2021 in the paper) and of using an early-stopped version for identifying spurious correlation (Zhang et al. in the paper, JTT) are known, and it is expected that the method works for some set of hyperparameters. However, the effectiveness of ClassDiff was questionable, and we did not find enough evidence of its effectiveness (because BAM was tested on a narrow range of hyperparameters). To further justify our thoughts, for UrbanCars, we tested BAM [4] on a set of hyperparameters outside the reported range, and in comparison to in-range hyperparameters, we observed that while ClassDiff became lower, WGA did not show improvement compared to that of ERM.\\n\\nThus, we concluded that there is not enough evidence of effectiveness to benchmark BAM [4] and decided to exclude it from the comparison.\\n\\n---\\n\\n## Weakness 4\\nThe worst group accuracy and average accuracy (in parentheses) results you have requested are as follows:\\n\\n|model selection criteria|Waterbirds|CelebA|UrbanCars|\\n|-|-|-|-|\\n|minimum class difference [4]|$80.7_{\\\\pm4.1}$($90.1_{\\\\pm0.2}$)|$75.0_{\\\\pm2.8}$($92.9_{\\\\pm0.4}$)|$82.1_{\\\\pm0.5}$($88.0_{\\\\pm0.6}$)|\\n|worst-class accuracy [5]|$89.1_{\\\\pm1.0}$($95.3_{\\\\pm0.2}$)|$71.3_{\\\\pm5.5}$($93.6_{\\\\pm0.4}$)|$81.6_{\\\\pm0.8}$($88.2_{\\\\pm0.7}$)|\\n\\nBy comparing EVaLS results in Table 1 and Table 4, we can see that in most setups, both criteria underperform compared to EV. *Worst-class accuracy* achieves a $0.7$% higher WGA on Waterbirds and *minimum class difference* shows similar WGA but lower average accuracy on UrbanCars compared to EV. On CelebA, both criteria underperform EV by a margin of more than $10$%. 
*Minimum class difference* results in a $7.7$% lower WGA on Waterbirds, and *worst-class accuracy* underperforms EV by $0.5$% on UrbanCars.\"}", "{\"title\": \"General Response and Final Reminder\", \"comment\": \"Dear Reviewers,\\n\\n\\nThank you for your time reviewing our work. **Only 3 days remain** until the end of the discussion period. We have addressed all your concerns and questions. Please read them and let us know if you feel further clarification, justification, evidence, or experiments are needed. \\n\\nWe uploaded a new version of the paper by the ICLR deadline to correct minor errors and include new experiments reported and requested by Reviewer 3v9E. Please review our response to Reviewer 3v9E and Appendix G in the revised paper for the results of the new dataset added to the paper. The line references in our responses have been updated to reflect the revision.\\n\\nAlso, please read the following for a review and clarification of our contribution: \\n\\nIn continuation of previous efforts in the field for robustness to spurious correlations without group annotations (L57-69, L738-751), requiring group annotations for model selection remains a limitation (L67-69, L749-751). \\n\\nWe have found that the availability of a set of disjoint samples (i.e., environments) of data depicting group shifts can effectively serve this purpose, using worst environment accuracy as a reliable surrogate for model selection for robustness to spurious correlations. This is particularly important as there are known (e.g., using EIIL) or very simple (e.g., applying a random linear classifier on top of the feature space of the model) ways to obtain such sets for datasets with spurious correlations. However, please note that inferring environments is a different field of research (L722-727). 
In other words, what we have shown is that the availability of group annotations can be effectively relaxed to a condition that is possible to satisfy using current knowledge (through environment inference methods, simply applying a random linear classifier on the feature space, or other methods \\u2014 see Appendix F.3).\\n\\nWe have also proposed an enhanced sampling technique, *loss-based sampling*, which is shown to be more effective than other group balancing schemes for robustness to subpopulation shift (see Table 1 and Table 11 for comparisons of EVaLS-GL with methods at a similar level of group supervision, such as AFR and SELF, and Appendix F.2 for another comparison with the sampling method in JTT). We have backed our balancing scheme with practical observations and theoretical analysis to provide better insights into why it could be effective.\\n\\nIt is expected that approaches that do not use group annotations will underperform compared to those that use group-level information (compare results of double-ticked methods with others in Table 1). In other words, when group annotations based on a spurious attribute are available, using them is logically more effective for achieving robustness to that attribute than not using them. However, *\\u201cin many real-world applications, the process of labeling samples according to their respective groups can be prohibitively expensive and sometimes impractical\\u201d* (L67-69). EVaLS (Environment-based Validation and Loss-based Sampling), without using group annotations, demonstrates **near-optimal** performance on spurious correlation datasets (L32-33, results in Table 1). While this represents important progress in the field, another property arises in EVaLS that is even more significant.\\n\\nThe ability to achieve robustness to spurious correlations without group annotations enables EVaLS to improve robustness to ***unknown spurious correlations***. 
This means EVaLS can effectively make trained models robust to spurious correlations they rely on, whether identified or unidentified. Without this capability, even if a model becomes robust to a known spurious correlation using current approaches, a **persistent concern** remains about the presence of unknown spurious correlations. Such correlations may affect the model\\u2019s predictions and remain undetected, posing significant performance and safety risks. As discussed in Figure 4(b) and its corresponding explanations, as well as in our discussion with Reviewer 3v9E (Appendix G in the revision, https://openreview.net/forum?id=8DuJ5FK2fa&noteId=Eh8QvMMS7y), previous methods could not ensure robustness to *unknown* spurious correlations. In contrast, EVaLS improves robustness to both known and unknown attributes. \\n\\nIn other words, when a spurious correlation is unknown, methods requiring group annotations for robustness are not applicable. However, EVaLS, which demonstrates effectiveness in enhancing robustness to spurious correlations without group annotations, improves robustness to all known and unknown spurious attributes and achieves higher worst-group accuracies among all groups (full combinations of core and spurious attributes).\\n\\n---\\n\\nPlease let us know if you have any other questions. We will try to answer them promptly in the remaining time.\"}", "{\"title\": \"Reply to Reviewer 3v9E\", \"comment\": \"Dear Reviewer 3v9E,\\n\\nFirst, we appreciate your acknowledgment of the importance and impact of results that demonstrate robustness in scenarios where some or all spurious correlations are unknown. 
We believe this is a highly influential step in the field.\\n\\n--- \\n\\n## Question 1\", \"wga_of_afr_on_dominoes_cmf_are_as_follows\": \"|Correlation of the unknown spurious feature|AFR|AFR+EIIL|\\n|-|-|-|\\n|85|$65.7_{\\\\pm0.2}$|$69.1_{\\\\pm0.1}$|\\n|90|$54.2_{\\\\pm0.2}$|$61.5_{\\\\pm0.2}$|\\n|95|$40.3_{\\\\pm0.5}$|$40.4_{\\\\pm0.1}$|\\n\\nA similar improvement from EVaLS to EVaLS-GL is observable between AFR and AFR+EIIL.\\n\\n--- \\n\\n## Question 2\\n\\nA dataset that has multiple spurious shortcuts, each highly correlated with the target independently, requires a large amount of data to capture both spurious correlations. Constructing such a dataset with proper quality, along with training and evaluating models on it, requires some time to accomplish. We are working on it and will inform you.\\n\\n--- \\n\\n## Minor points\\nThank you for pointing out the slips and other aspects that may cause potential confusion in our paper. We will correct the slips and try to address your suggested improvements in the revision.\"}", "{\"title\": \"Reply to Reviewer 9oxU - Part 3/3\", \"comment\": \"## Weakness 2\\nFirst, please refer to point 2 above for clarification. Results for GDRO+EIIL (Creager et al., 2021) are shown in Table 1. It can be seen that it underperforms other methods in the table in most comparisons, indicating that their approach is not as effective as others. Specifically, GDRO+EIIL underperforms GDRO, which utilizes group annotations, as expected. You can find more information about results on spurious correlation datasets below.\\n\\n--- \\n\\n## Question 1\\nFirst, please refer to point 1 above for clarification. When group annotations are available, we expect methods that utilize them to outperform others. So, it is expected that EVaLS-GL outperforms EVaLS when exact group annotations are available. 
\\n\\n--- \\n\\n## Question 2\\nLet\\u2019s first review datasets with spurious correlations in Table 1 and 6, and then compare the results of EVaLS and EVaLS-GL.\\n\\nOn *Waterbirds* with available group annotations, EVaLS-GL outperforms EVaLS by $1$%.\\n\\nRegarding *CelebA*, EVaLS achieves a slightly higher WGA ($0.7$%). It is known (L1449-1453 (1455-1457 and 1470-1471 in the revision)) that annotation noise is present in this dataset, as evidenced also by previous research (e.g., [\\\\*]). Consequently, group annotations in CelebA are not completely accurate. EVaLS bypasses this issue by utilizing inferred environments instead of relying on noisy group annotations.\\n\\nOn datasets with two spurious attributes\\u2014one known and one unknown (*UrbanCars* and *Dominoes-CMF*)\\u2014utilizing inferred information rather than available annotations provides a solution for fairly achieving robustness to all spurious attributes. Based on the results of both experiments, EVaLS outperforms EVaLS-GL by an average margin of $1.89$%. \\n\\n### Conclusion\\nIf group annotations for a known spurious correlation are available, using them is more effective for robustness to the spurious correlation than utilizing environments. However, environment-based validation can also achieve near-optimal accuracy (L517\\u2013518 (520-521 in the revision)). If group annotations are noisy, this may result in lower performance when utilizing them. In such cases, there is a trade-off between using noisy group annotations and inferred environments. \\n\\nIf a spurious correlation that a trained model relies on is unknown during training, using inferred environments is useful for robustness, whereas methods that require explicit annotations are not applicable. 
\\n\\nCombining the above points, for model selection:\\n\\n- **If the goal is to ensure robustness to a known spurious correlation with ground-truth annotations, using group annotations is an effective solution (depending on the quality of the ground-truth annotations).**\\n- **If the goal is to generally improve the robustness of a trained model to both known and unknown spurious correlations, using inferred environments is an effective solution.**\\n\\n[*] Speth et al., *Automated Label Noise Identification for Facial Attribute Recognition*, CVPR, 2019.\"}", "{\"summary\": \"The paper proposes EVaLS, a method to improve model robustness against spurious correlations without requiring group annotations. EVaLS balances high- and low-loss samples from an ERM-trained model and applies a simple last-layer retraining (on a loss-based sampled dataset), thus enhancing group robustness. The approach also uses worst environment accuracy for model selection. Experimental results on diverse datasets show competitive performance relative to baseline methods.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The proposed approach EVaLS, using environment inference and loss-based sampling, is novel and interesting.\\n2. EVaLS has competitive performance with other methods across diverse datasets.\", \"weaknesses\": \"1. The assumption that \\\"minority samples are more prevalent among high-loss samples, while majority samples dominate the low-loss category\\\" is questionable. It is easy to construct distributions that do not satisfy this assumption.\\n2. The performance of EVaLS seems to rely on EIIL to find the correct environment. However, how to find the environments may be a challenging problem itself. \\n3. The performance of EVaLS does not seem consistently better than other baseline methods across all datasets. 
It is not clear whether the better performance on a specific dataset is by chance.\", \"questions\": \"See weakness above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper having been borderline during the review process, I have read it and formed my own idea of its content.\\n\\nFirst, the paper makes the repeated claim, also repeated here in rebuttals, that its approach is near-optimal. But there is absolutely *no* formal result in the paper that allows one to state such near-optimality (certainly not Proposition 3.1, see below). The paper and rebuttal clearly attach to this claim of near-optimality the results of one *experimental* table (1). Such a claim could eventually be supported *iff* the optimal mark was known for the domains (e.g. if the domains had been simulated, with clear evidence or a proof of the best result). This is clearly not the case here and the authors' insistence on this so-called near-optimality gives the opposite result of the one intended: it ends up overselling results otherwise fine. \\n\\nIt is important to insist at this point that absolutely *nothing* in the paper supports *any* claim of near-optimality of the method proposed.\\n\\nThen, regarding the theory, there is little in the paper that serves the authors' claim of generality. Proposition 3.1 is unfortunately poorly formulated and it is hard to understand the meaning of (1), which does not appear in a sentence (reading the appendix, I suspect it is the result of an unfortunate cut in the paper but any fix I can imagine certainly does not bring any substantial part of the generality claimed). 
The background of Proposition 3.1 is however *extremely* specific and shallow and *certainly* cannot serve as an illustration of the paper's claims about being of broad appeal (weakness #2 of reviewer 3i2u).\\n\\nI can only stress the importance of *either* a strong formal analysis of the problem, *or* a more thorough experimental section -- indeed, the ranges of the key values of the approach are so different between domains (Section E.4) that it seems extremely complicated to figure out a simple rule of thumb to give a new user as a starting point to use the technique. In this context, writing \\\"our results demonstrate that for most datasets, multiple hyperparameter combinations yield optimal or near-optimal performance, reducing the need for exhaustive searches\\\" (L1399) makes little sense for *two reasons*, not just for the near-optimality claim (see above), but also because it is clear that the authors have spent a lot of time making **domain-dependent choices** for the parameters used without many details on why / how choices were made.\\n\\nIn fine, I believe the paper starts with an interesting idea -- indeed, a loss can often be viewed as a negative log-likelihood and so the idea of loss sampling can make sense --, but fails to provide proof that the idea developed justifies the extremely strong claims made here and there in the paper. 
I can only recommend that the authors redraft the paper, dig into the theory and orient new experiments to show the approach is simple to use and its (hyper)parameters are at least reasonably easy to figure out even for just a reasonable range.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers gave the authors the opportunity to justify some bold claims made in the paper, but the rebuttal(s) essentially failed to provide substance (apart from the authors insisting further on (near)optimality claims yet without any additional part), see in particular 3i2u and 9oxU.\"}", "{\"title\": \"New Dataset\", \"comment\": \"### Minor Points\\n\\nAs you can now see in the revised version, we have added AFR results to Figure 4(b). However, we feel that the figure has become too crowded, reducing its readability. Therefore, we think it might be better to keep the figure as it was before, avoid adding further information and move the additional information to Table 6, which we believe is more readable and preferable. Please let us know if you strongly prefer otherwise.\\n\\nWe have corrected the slips in the original submission. Since changes to the body of the paper would invalidate the line references in our responses to other reviewers, we will update Table 6, change the references and/or order of the figures, and add results of the dataset below later in the rebuttal period to avoid confusion for the other reviewers.\\n\\n---\\n\\n### New Dataset\\nWe have added a new dataset for evaluating the robustness of trained models to unknown shortcuts (Sec. 2.2). We have used a subset of the CelebA dataset. The new dataset consists of real-world images and further confirms our findings on the effectiveness of EVaLS in robustness against known and unknown spurious correlations.\\n\\nThe *\\u201cStraight Hair\\u201d* attribute is considered the label, *\\u201cSmiling\\u201d* the known spurious attribute, and *gender* the unknown spurious attribute. 
Worst group accuracies (WGA) are reported in the table below among 8 groups (all binary combinations of the label and spurious attributes). We set the spurious correlation of the known attribute to 80% and conduct experiments for various levels of unknown spurious correlation (similar to the Dominoes-CMF experiments). Spurious correlations are imposed by subsampling from the original CelebA dataset.\\n\\nThe results demonstrate that methods that do not rely on group annotations for retraining or model selection achieve higher WGA among groups based on both known and unknown attributes. Specifically, *EVaLS* achieves higher WGA compared to *EVaLS-GL*, which uses loss-based sampling to create the retraining dataset and relies on group annotations for model selection. Furthermore, *EVaLS-GL* outperforms *DFR*, which depends on group annotations for both retraining and model selection. *EVaLS* improves the WGA of ERM by $22.7$% on average. The *Oracle* model uses group annotations based on both known and unknown spurious attributes during retraining and model selection. Its WGA is $40.8$% higher than that of DFR on average, which only uses annotations of the known spurious attribute.\\n\\nThese results further confirm the findings in *Figure 6(b)* for Dominoes-CMF. \\n\\n|Method|85% Unknown Spurious Corr.|90% Unknown Spurious Corr.|95% Unknown Spurious Corr.|\\n|-|-|-|-|\\nDFR (Oracle)|$63.1_{\\\\pm 0.9}$|$59.2_{\\\\pm 1.9}$|$58.4_{\\\\pm 5.0}$|\\nDFR|$27.2_{\\\\pm 2.2}$|$18.9_{\\\\pm 0.7}$|$12.3_{\\\\pm 1.6}$|\\nAFR+EIIL|$41.3_{\\\\pm 5.7}$|$36.3_{\\\\pm 4.5}$|$45.0_{\\\\pm 5.3}$|\\nAFR|$28.1_{\\\\pm0.4}$|$24.3_{\\\\pm 2.1}$|$15.7_{\\\\pm2.6}$|\\nEVaLS|$\\\\boldsymbol{45.2}_{\\\\pm 2.9}$|$\\\\boldsymbol{44.9}_{\\\\pm 3.1}$|$\\\\boldsymbol{45.7}_{\\\\pm2.2}$|\\nEVaLS-GL|$30.5_{\\\\pm5.2}$|$26.3_{\\\\pm 6.4}$|$19.3_{\\\\pm 3.2}$|\\nERM|$28.3_{\\\\pm 0.6}$|$23.9_{\\\\pm 1.5}$|$15.6_{\\\\pm 2.6}$|\\n\\nWe have addressed all your questions. 
If further clarification or information is needed, we are eager to address it promptly.\"}", "{\"summary\": \"Many studies that enhance robustness to spurious correlation require group annotations for training. This paper aims to enhance the robustness with minimal group annotation assumptions. Specifically, the losses from an ERM-trained model are used to construct a balanced dataset of high-loss and low-loss samples, mitigating group imbalance in data. Moreover, using environment inference methods to create diverse environments has been shown to potentially eliminate the need for group annotation in model selection. Experiments demonstrate the effectiveness of the proposed method.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is well-written and easy to follow.\", \"The paper proposes a practical method to mitigate the reliance on spurious correlations without any group annotations.\", \"A new dataset is constructed that demonstrates the effectiveness of the proposed method in mitigating unknown shortcuts.\"], \"weaknesses\": \"- The proposed method is incremental and has limited technical contributions. Retraining the last layer using group-balanced validation data to mitigate the reliance on spurious correlations has been used in [1,2]. Inferring environments is a direct follow-up of [3].\\n\\n- The theoretical analysis does not really explain why loss-based sampling within a class can be used to create a group-balanced dataset. The analysis assumes that the losses on the majority and minority samples follow Gaussian distributions. Under this assumption, it is obvious that the loss-based sampling could create two group-balanced sets of data. However, whether this assumption holds in practice is questionable. Moreover, a previous study [2] has found that model disagreement may effectively upsample worst-group data, or in other words, may create a more group-balanced dataset. 
Thus, the loss-based sampling may not be as effective as proved in the paper.\\n\\n- The comparison in Table 1 isn't fair for some methods. EVaLS uses new data, i.e., a part of validation data, for retraining, while methods including GDRO + EIIL, JTT, and ERM do not have access to the new data. Moreover, the existing work [4] also proposes a method that aims to mitigate spurious correlations without group annotations. It would be beneficial to compare with this method under the same setting. \\n\\n- There are some model selection methods that do not require group annotations, such as minimum class difference [4] and worst-class accuracy [5]. It would be helpful to analyze the effectiveness of the proposed worst environment accuracy in comparison with these techniques.\\n\\n[1] Kirichenko et al., Last layer re-training is sufficient for robustness to spurious correlations, ICLR 2023.\\\\\\n[2] LaBonte et al., Towards last-layer retraining for group robustness with fewer annotations, NIPS, 2023.\\\\\\n[3] Creager et al., Environment inference for invariant learning, ICML, 2021.\\\\\\n[4] Li et al., Bias Amplification Enhances Minority Group Performance, TMLR, 2024.\\\\\\n[5] Yang et al., Change is hard: A closer look at subpopulation shift, ICML, 2023.\", \"questions\": [\"See the weaknesses.\", \"In Table 1, why are the experiments on the CivilComments and MultiNLI datasets out of the scope of the method?\", \"In L189, the authors mention that they randomly divide the validation set into $\\\\mathcal{D}^{LL}$ and $\\\\mathcal{D}^{MS}$. 
What if the random division results in a poor set of $\\\\mathcal{D}^{MS}$ which does not have sufficient samples to represent a minority group of samples?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper works in the sub-population shift setting, in which the data consists of samples of different groups that share a property. The goal is to learn a model robust to sub-population shifts, such as class imbalance, attribute imbalance, and spurious correlations. They propose a method (EVaLS) that improves the robustness of models trained using ERM without using any group annotation data. The method is well motivated both empirically and theoretically, and they show that the model has competitive performance with and without using group annotation. Moreover, they show that EVaLS is robust to scenarios where there is an unknown spurious attribute in comparison to the state-of-the-art.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Simple method, that works in multiple scenarios (with and without group annotations).\", \"The method does not make strong assumptions about the trained model. The only requirement is a model trained by ERM (the training data or any other training information is not necessary), while the method acts in a post-training phase.\", \"It shows competitive performance in comparison to the literature, while it has a less strict set of requirements in comparison to most of them.\", \"EVaLS outperforms DFR (one of the state-of-the-art models) in cases with multiple spurious attributes\"], \"weaknesses\": [\"Make a comparison of EVaLS with more methods (e.g. AFR, since it also doesn't depend on ERM training) in the multiple spurious attributes scenario.\", \"Have more evidence that EVaLS outperforms the DFR/other methods in cases with multiple spurious attributes (e.g. 
using more datasets). I believe that this is the strongest part of the results, and it is a clear advantage for EVaLS (besides cases in which group annotation is not available).\"], \"questions\": \"Some questions:\\n1) Is it feasible to add AFR results to the multiple spurious attributes experiment? \\n2) Is it feasible to add an extra dataset to the multiple spurious attributes experiment?\\n\\nAdding these extra results will address the points mentioned in the weakness, showing stronger empirical evidence about EVaLS advantage in cases with multiple spurious attributes.\", \"minor_points\": [\"There are references to Figure 1 (Line 83) and Figure 3 (Lines 172-177) that come before Figure 2 and were a bit confusing to me while I was reading the paper for the first time. In my opinion, removing these references will improve the readability of the paper.\", \"Figure 2 instead of figure 2 (Line 205).\", \"Have the oracle results as a reference in Table 6 to follow Figure 4 (b).\", \"Figure 4 (b) could include the standard deviation (such as in Table 6).\", \"Sometimes you use sub-population (Line 131) or subpopulation (Line 139). Keep it consistent across the paper.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your answer.\\n\\nYes, Figure 4 (b) has too much information. Comparing the results using Table 6 only seems easier for me.\\n\\nRegarding the new dataset, that's indeed a very good result. I am increasing my score accordingly.\"}" ] }
8Ds99sdp3U
A Semantic Data Augmentation driven Nested Adapter for Video Moment Retrieval
[ "Arkaprabha Bhandari", "KAJAL KANSAL", "Yongkang Wong", "Jianquan Liu", "Mohan Kankanhalli" ]
Existing transformer-based video-moment retrieval models achieve sub-optimal performance when using the pretrain-finetuning learning paradigm – a pretrained multimodal encoder is finetuned using the target training data. While current work has explored different model architectures and training paradigms to address this problem, the data dilemma has been under-addressed. Specifically, there is high diversity in how semantics are captured in textual queries, and the training dataset consists of only limited moment-query pairs for the highly diverse moments. This work addresses this problem with a novel nested adapter and an LLM-driven semantic data generation pipeline. First, an LLM-driven data augmentation generates queries that are semantically similar to the ground truth, which enrich the semantic boundary captured by the textual query. We empirically analyze the effectiveness of data augmentation, and propose a simple yet effective quality measure to retain high-quality samples. Second, we propose a novel nested adapter that utilises both augmented queries and human-annotated queries for model coarse-tuning and fine-tuning, respectively. By combining semantic perturbation with domain adaptation, our approach addresses the variability in video content while capturing nuanced features more effectively. Experimental results on various baseline models show the efficacy of our proposed approach.
[ "Moment Retrieval", "Highlight Detection", "Adapter", "Data Augmentation" ]
https://openreview.net/pdf?id=8Ds99sdp3U
https://openreview.net/forum?id=8Ds99sdp3U
ICLR.cc/2025/Conference
2025
{ "note_id": [ "voMjAIq4lK", "rJ9Mo7p4zR", "rE2I6YydKT", "o6ZtSaEVSS", "NiaSqMu1v9", "G4Vyv9D4gp" ], "note_type": [ "official_review", "official_review", "official_review", "official_comment", "comment", "official_review" ], "note_created": [ 1730636774962, 1730380386567, 1729949576895, 1732521217567, 1732530936555, 1730686410008 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9631/Reviewer_twnk" ], [ "ICLR.cc/2025/Conference/Submission9631/Reviewer_F28H" ], [ "ICLR.cc/2025/Conference/Submission9631/Reviewer_PPHd" ], [ "ICLR.cc/2025/Conference/Submission9631/Authors" ], [ "ICLR.cc/2025/Conference/Submission9631/Authors" ], [ "ICLR.cc/2025/Conference/Submission9631/Reviewer_tqJW" ] ], "structured_content_str": [ "{\"summary\": \"This paper introduces a novel approach to enhance video-moment retrieval models that traditionally struggle with the pretrain-finetuning paradigm due to the data dilemma. The LLM-driven data augmentation enriches the semantic boundary, while the nested adapter leverages both augmented and human-annotated queries for effective model training. The approach addresses variability in video content and captures nuanced features more effectively.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"A. The text of this article is clearly written and easy to understand.\\nB. This paper provides a detailed description of the key benefits of using LLMs for query augmentation.\", \"weaknesses\": \"A. This paper mentions \\\"domain gap\\\" in Line 58, but the corresponding example does not adequately demonstrate this gap; a more detailed explanation should be provided.\\nB. This paper discusses the differences in data format and task requirements for LLMs in MR in the related work section, but it fails to address the relationship between MLLMs and MR.\\nC. 
In Line 211, this paper mentions the use of GPT-3.5-turbo and GPT-4-o mini, but it does not explain why these two LLMs were chosen, nor does it discuss more diverse evaluation methods to assess the effectiveness of query generation.\\nD. The comparative methods in this paper are insufficient; additional baselines should be included for comparison.\\nE. This paper does not provide information on the number of trainable parameters and the speed of operation for the proposed method, raising concerns about its feasibility in practical applications.\\nF. There is a lack of visual results, such as visualizations of data augmentation and retrieval results.\\nG. The meaning of the bolded data in Table 2 is unclear; for instance, \\\"65.6\\\" in \\\"CG DETR\\\" for R1@0.5 is bolded, but \\\"65.7\\\" is not (GPT-3.5-turbo, 8x Aug).\\nH. This paper should provide an explanation for the values of R1@0.7 and mAP (None, None) in \\\"CG DETR\\\" in Table 2, particularly why the performance declined after data augmentation was applied.\", \"questions\": \"see the above weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper targets the video moment retrieval (MR) task. In this paper, authors try to use the CLIP network to handle the task. LLMs are used for query augmentations, which can improve the query diversity. Two linear layers are used in the nested adapter module. T-SNE visualization and ablation study are designed to analyze the model performance. Some experiments based on some MR baselines are conducted.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The presented method is model-agnostic. It can be used in some baselines (e.g., Moment DETR, QD DETR, and CG DETR) to improve their performance.\\n\\nLLMs are utilized in the designed method to generate various queries for data augmentation. 
Moreover, a filter is adopted to select the generated queries. Different LLMs are used to evaluate the model performance.\\n\\nThe t-SNE visualization can help reader understand the feature distribution.\\n\\nAuthors provide some details about the model implementation in Section 4.2.\", \"weaknesses\": \"Figure 1 is wrong. The text encoder and the image encoder should swap positions. The video input contains multiple clips, not images. Why the image encoder can be used to obtain the clip features?\\n\\nWhy authors only consider the R@1 results and ignore the R@5 results?\\n\\nWhy not present the retrieval visualization results and the efficiency results? They are very important for the video moment retrieval task.\\n\\nThe number of baseline methods are not enough. Only three methods (Moment DeTR (Lei et al., 2021), CG DETR (Moon et al., 2023a) and QD DETR (Moon et al., 2023b)) are used in the experiment section. Authors should compare more baseline (e.g., MH-DETR (Xu et al., 2023)).\\n\\nThe nested adapter is not novel since it only contains two linear layers. The linear layer is very common in many networks.\\n\\nThe caption of Figure 1 needs to be revised since there are two \\u201cFigure 1:\\u201d.\\n\\nIn Section 3.1, why the query corresponds to two different symbols in Line 189 and Line 191?\\n\\nFigures 2-4 are not clear (especially lines) and the texts are two small. 
Authors should polish them carefully.\\n\\nSome grammatical errors, such as \\u201cWhile current work has explored different model architectures\\u201d and \\u201cthe training dataset only consist of \\u2026\\u201d in Abstract.\\n\\nSome punctuation marks are incorrectly used, such as \\u2019Very Good\\u2019 in Line 405 and \\u201c(see Fig: 2)\\u201d in Line 269.\", \"questions\": \"Please address the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents a nested adaptor and a LLM-driven semantic data generation pipeline to improve video moment retrieval. The pipeline generates semantically similar queries to enrich query diversity, while the nested adapter uses both augmented and human-annotated queries for coarse-tuning and fine-tuning.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1. Using LLM-generated captions to enhance video moment retrieval is reasonable.\\n2. Experiments demonstrate some effectiveness.\", \"weaknesses\": \"1. The writing and figures require improvement.\\n2. The contribution is limited, with minimal improvements. Using LLMs to enhance textual semantics has been explored with more substantial gains (e.g. [1-2]). The nested adaptor is also trivial.\\n\\n [1] ChatVTG: Video Temporal Grounding via Chat with Video Dialogue Large Language Models.\\n\\n [2] Context-Enhanced Video Moment Retrieval with Large Language Models.\\n\\n3. The experiments are insufficient, lacking results on popular datasets like Charades-STA and ActivityNet Captions.\\n4. 
Minor errors, such as \\\"Figure 1: Figure 1:\\\".\", \"questions\": \"See Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Withdrawal of Submission to ICLR\", \"comment\": \"We would like to extend our heartfelt gratitude to the reviewers for their time, effort, and valuable feedback on our submission. Your constructive comments and suggestions have been instrumental in helping us identify areas for improvement.\\nAfter careful consideration, we have decided to withdraw our paper from the ICLR review process. We acknowledge that the manuscript in its current form does not fully justify our claims. However, we have addressed the points raised during the review process, and our responses are now supported by additional empirical evidence.\\nWe are committed to refining our work based on your thoughtful feedback and will resubmit an improved version to another venue soon.\\nThank you again for your insightful contributions, which have greatly enhanced the quality of our research.\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"The manuscript proposes a data augmentation process driven by a Large Language Model (LLM) to enrich the semantic boundaries of textual query capture by generating queries semantically similar to real queries. Experimental results on several baseline models show the effectiveness of the proposed approach.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. The manuscript proposes a simple quality metric for retaining high-quality data-enhanced samples.\\n2. A new nested adapter is proposed, which utilizes augmented and manually annotated queries for coarse and fine-tuning the model, respectively. \\n3. 
The proposed data augmentation approach is model-independent and can be applied to different architectures, enhancing its adaptability\", \"weaknesses\": \"1. Explainability of Data Augmentation: while the manuscript proposes an LLM-driven approach to data enhancement, it may not adequately explain the selection criteria for the enhanced data and the specific impact on model performance. For example, it is too\\nsimplistic to do research only on the number and proportion of generated queries. Second, for queries that may have semantic annotation errors, does this practice of generating approximate semantic sentences amplify the errors and degrade performance, and how robust is the model, although the model allows for the presence of a certain amount of noise. \\n2. Figure content error: In Figure 1, the \\u2018Text Encoder\\u2019 and \\u2018Image Encoder\\u2019 of the CLIP feature extractor are incorrectly drawn. This visual error may mislead the reader and negatively affect the accuracy and professionalism of the article. \\n3. Irregularity in the title of the figure: the title of Figure 1 repeats the phrase \\u2018Figure 1: Figure 1\\u2019, which is unnecessary and should be simplified to \\u2018Figure 1\\u2019. Figure 4 has no full stop.\", \"questions\": \"My question has been shown in the weaknesses above. In addition, are there any clear conclusions from Figures 3 and 4?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
8Dj6OEMj6W
Embedding Self-Correction as an Inherent Ability in Large Language Models for Enhanced Mathematical Reasoning
[ "Kuofeng Gao", "Huanqia Cai", "Qingyao Shuai", "Dihong Gong", "Zhifeng Li" ]
Accurate mathematical reasoning with Large Language Models (LLMs) is crucial in revolutionizing domains that heavily rely on such reasoning. However, LLMs often encounter difficulties in certain aspects of mathematical reasoning, leading to flawed reasoning and erroneous results. To mitigate these issues, we introduce a novel mechanism, the Chain of Self-Correction (CoSC), specifically designed to embed self-correction as an inherent ability in LLMs, enabling them to validate and rectify their own results. The CoSC mechanism operates through a sequence of self-correction stages. In each stage, the LLMs generate a program to address a given problem, execute this program using program-based tools to obtain an output, and subsequently verify this output. Based on the verification, the LLMs either proceed to the next correction stage or finalize the answer. This iterative self-correction process allows the LLMs to refine their reasoning steps and improve the accuracy of their mathematical reasoning. To enable the CoSC mechanism at a low cost, we employ a two-phase finetuning approach. In the first phase, the LLMs are trained with a relatively small volume of seeding data generated from GPT-4, establishing an initial CoSC capability. In the second phase, the CoSC capability is further enhanced by training with a larger volume of self-generated data using the trained model in the first phase, without relying on the paid GPT-4. Our comprehensive experiments demonstrate that CoSC significantly improves performance on traditional mathematical datasets among existing open-source LLMs. Notably, our CoSC-Code-34B model achieved a 53.5\% score on MATH, the most challenging mathematical reasoning dataset in the public domain, surpassing the performance of well-established models such as ChatGPT, GPT-4, and even multi-modal LLMs like GPT-4V, Gemini-1.0 Pro, and Gemini-1.0 Ultra. 
It's important to note that, unlike these proprietary models, our CoSC performs inference in a zero-shot manner, without the need for demonstrations. The code and data for this work will be released once this paper is accepted.
[ "Large Language Models", "Mathematical Reasoning", "Self-correction" ]
Reject
https://openreview.net/pdf?id=8Dj6OEMj6W
https://openreview.net/forum?id=8Dj6OEMj6W
ICLR.cc/2025/Conference
2025
{ "note_id": [ "z85RVlGIJI", "ykiL9DCJVR", "xyY3uGp4YT", "wP15Wh1xic", "vVLFw4Z7OT", "qE4XAiiPSd", "pCZPb71bgK", "oTUCCGq6M6", "oAPR3AWILs", "nkNmf9N4Tr", "lLsyVkCJuv", "l9QRt98rXi", "i8G8NgRMMD", "gTVXpDzF5F", "eSvAtQRdFC", "cM81xner03", "ba2SVxt40d", "bRE0GiAYp0", "bKqwXSaGWT", "axANXjzcr8", "TmNTLQ9Mvh", "QvrdY7uURk", "PTU6eSPBHM", "MxcLlZMjHY", "LJjMrb6VGe", "LEOeYS3KSq", "I4xoeYIlzl", "GjsRHZJUFN", "2YQ9m2rR8p", "1L4agiWfrz", "115lrCmb8R" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "decision" ], "note_created": [ 1732515101800, 1733062342656, 1732515576987, 1732515054840, 1732649388588, 1733123680547, 1733009847571, 1732907735182, 1732515374402, 1732623924550, 1730494054398, 1732515425027, 1735275151468, 1732970516105, 1733089697929, 1732515817860, 1732782689833, 1732704348923, 1729779520008, 1730677768821, 1732795072865, 1733068996417, 1732518371445, 1733089847880, 1732515735170, 1730677644767, 1732634814858, 1732795339977, 1732514850048, 1733123598013, 1737523820279 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7154/Authors" ], [ "ICLR.cc/2025/Conference/Submission7154/Reviewer_gRDy" ], [ "ICLR.cc/2025/Conference/Submission7154/Authors" ], [ "ICLR.cc/2025/Conference/Submission7154/Authors" ], [ "ICLR.cc/2025/Conference/Submission7154/Reviewer_ediq" ], [ "ICLR.cc/2025/Conference/Submission7154/Authors" ], [ "ICLR.cc/2025/Conference/Submission7154/Reviewer_ediq" ], [ 
"ICLR.cc/2025/Conference/Submission7154/Authors" ], [ "ICLR.cc/2025/Conference/Submission7154/Authors" ], [ "ICLR.cc/2025/Conference/Submission7154/Authors" ], [ "ICLR.cc/2025/Conference/Submission7154/Reviewer_KVMJ" ], [ "ICLR.cc/2025/Conference/Submission7154/Authors" ], [ "ICLR.cc/2025/Conference/Submission7154/Area_Chair_BfjP" ], [ "ICLR.cc/2025/Conference/Submission7154/Authors" ], [ "ICLR.cc/2025/Conference/Submission7154/Authors" ], [ "ICLR.cc/2025/Conference/Submission7154/Authors" ], [ "ICLR.cc/2025/Conference/Submission7154/Reviewer_gRDy" ], [ "ICLR.cc/2025/Conference/Submission7154/Authors" ], [ "ICLR.cc/2025/Conference/Submission7154/Reviewer_gRDy" ], [ "ICLR.cc/2025/Conference/Submission7154/Reviewer_ediq" ], [ "ICLR.cc/2025/Conference/Submission7154/Authors" ], [ "ICLR.cc/2025/Conference/Submission7154/Authors" ], [ "ICLR.cc/2025/Conference/Submission7154/Reviewer_ediq" ], [ "ICLR.cc/2025/Conference/Submission7154/Authors" ], [ "ICLR.cc/2025/Conference/Submission7154/Authors" ], [ "ICLR.cc/2025/Conference/Submission7154/Reviewer_VQBN" ], [ "ICLR.cc/2025/Conference/Submission7154/Reviewer_KVMJ" ], [ "ICLR.cc/2025/Conference/Submission7154/Authors" ], [ "ICLR.cc/2025/Conference/Submission7154/Authors" ], [ "ICLR.cc/2025/Conference/Submission7154/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"title\": \"Rebuttal by Authors [2/2]\", \"comment\": \"***Q2: Unlike these proprietary models, our CoSC performs the inference in a zero-shot manner without demonstration - I don\\u2019t understand this. How do the proprietary models considered in the evaluation use demonstrations?***\\n\\nThank you for your feedback. 
**In the evaluation of proprietary models such as GPT-4 and Gemini, the results we report are based on their respective official technical reports [2,3].** These reports indicate that the proprietary models often utilize few-shot prompting, where a small set of examples is provided within the prompt to guide their reasoning processes.\\n\\u00a0\\nIn contrast, our method, CoSC, does not rely on such few-shot demonstrations during inference. Instead, CoSC leverages its intrinsic capabilities, enabled by the fine-tuning process, to perform zero-shot reasoning directly. This distinction highlights a fundamental difference in the inference paradigms: proprietary models benefit from carefully constructed few-shot examples to achieve optimal performance, whereas CoSC is designed to generalize robustly without such external demonstrations.\\n\\n[2] OpenAI. GPT-4 Technical Report. In 2023.\\\\\\n[3] Google Deepmind. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. In 2024.\"}", "{\"comment\": \"Thank you for the further details and providing the comparison table. The difference that ToRA only corrects when external tool errors are encountered is a good qualitative difference between the two approaches. However, overall, the notion of self-correction with respect to code and mathematical reasoning is becoming very prevalent in current research, so it is difficult to clearly separate the novelty of your technique. Perhaps you can also comment on the following related works in this area, which again use the LLM to critique and refine their solutions and also use code-based approaches in doing so, where I dont think correction is based only on runtime errors encountered (these would be relevant related works for you to discuss in the paper anyway I think).\\n\\n[1] Solving Challenging Math Word Problems Using GPT-4 Code Interpreter with Code-based Self-Verification. 
Zhou et al, ICLR 2024\\n\\n[2] Self-Refine: Iterative Refinement with Self-Feedback. Madaan et al, NeurIPS 2023. \\n\\nIn terms of significance of the experimental results, you claim that the marginal difference is due to prompting rather than fine-tuning. I think to make this claim it should be shown that prompting vs fine tuning on the same model would produce such a huge difference (e.g. the prompting approach on your open source models performs so much worse than your fine-tuned versions). However, the marginal improvements with respect to SoTA will still remain. \\n\\nOverall, my two concerns remain about a very distinct novelty in the technique and marginal empirical improvements over SoTA approaches. Hence I would prefer to keep my current rating of borderline accept.\"}", 
In ICML, 2018.\\n\\n---\\n\\n***W2: The comparison landscape is incomplete and potentially misleading: Notable omission of open-source models with strong mathematical performance (QWEN2, Orca-Math, ...) that match/surpass the achieved performance.***\\n\\nThank you for your valuable feedback. Qwen2 serves as a pretrained base LLM, whereas Orca-Math is fine-tuned on Mistral. In contrast, our CoSC model is fine-tuned on CodeLLaMA, highlighting the foundational differences in the base models. **These differences result in inherent disparities in model capabilities, which limit the direct comparison of the results in their current form.**\\n\\nOur primary objective in this work is not to achieve the best performance on mathematical dataset leaderboards but to evaluate the novel capabilities introduced by our method, such as self-correction and iterative reasoning. These features represent significant advancements in enhancing reasoning robustness and accuracy, which are independent of the choice of base model.\\n\\n**In this revised version, we have included a broader discussion of Qwen2 and Orca-Math in Section 2.1 to provide context for their performance.** However, given the limited time during the rebuttal phase, we plan to leverage our CoSC training dataset to fine-tune Qwen2 and Mistral in future work. This will enable a fairer, more direct, and comprehensive comparison of their reasoning capabilities under a unified framework.\\n\\n---\\n\\n***W3: The authors should indicate exactly which GPT-4 version was used for training data generation.***\\n\\nThank you for your valuable suggestion. We have clarified that `gpt-4-0613` was used to generate our training data and added this information in Line 368 of the revised paper. \\n\\n---\\n\\n***W4: The phrase \\\"for some complex solutions we can only get < 1\\\" is slightly awkward.***\\n\\nThanks for pointing that out. 
We have revised it accordingly in the updated version of the paper.\\n\\n---\\n\\n***Q1: Do the experiments for Table 4 use a fewshot prompt for the non-CoSC models to tell them how to utilize multiple rounds of reasoning? How do the authors explain that the fraction of \\\"more than one round of reasoning\\\" very close to 0% for those models?***\\n\\nThank you for your insightful question. **We would like to clarify that no few-shot prompts were used for any of the models presented in Table 4.** The observation that the fraction of instances involving \\\"more than one round of reasoning\\\" is nearly 0% for the non-CoSC models, i.e., CodeLlama and ToRA-Code, highlights their limited capability to perform multi-round reasoning effectively. It is worth noting that both CodeLlama and ToRA-Code used in this work are the official open-source inference models provided by their respective developers. This result underscores the distinct advantage of CoSC in enabling multi-round reasoning processes.\"}", "{\"title\": \"Rebuttal by Authors [1/2]\", \"comment\": \"Thank you for your valuable review and suggestions. Below we respond to the comments in **Weaknesses (W)** and **Questions (Q)**.\\n\\n---\\n\\n***W1 & Q1: How is ToRA prompted in this evaluation? Is it possible to simply modify the prompt to allow it to self-correct? Is fine-tuning in CoSC necessary?***\\n\\nThank you for your insightful comment. To clarify, **ToRA is a fine-tuning-based method trained on data generated by GPT-4, as described in Line 86-87 and Line 437-438 of our original paper.** Unlike prompting methods, ToRA does not rely on instruction-based input during inference. After fine-tuning, it operates in a zero-shot manner without requiring external prompts or examples for reasoning tasks.\\n\\nIn this evaluation, ToRA is directly applied to test tasks, without additional prompt modifications or few-shot examples. 
While it is theoretically possible to adapt ToRA by incorporating some prompts that simulates self-correction, this would fundamentally change its methodology and deviate from the original design.\\n\\nOur method, CoSC, is designed with self-correction as an intrinsic capability. Achieving this requires specific data collection and fine-tuning to enable robust iterative reasoning. This necessity arises from the fact that embedding self-correction as an inherent ability in LLMs **demands altering the model's underlying parameters, a process that extends beyond the scope of simple prompt adjustments.**\\n\\n---\\n\\n***W2: Consider moving the algorithm a little bit earlier in section 3.2.1.***\\n\\nThank you for your helpful suggestion! Based on your feedback, we have moved the reference to Algorithm 1 to Line 260 in Section 3.2.1 in our revised paper to make it more accessible and to improve the flow of the presentation.\\n\\n---\\n\\n***W3: From Figure 1, it seems that ToRA does not perform any self-correction, so what does the 0.1% cases for ToRA in Round=2 mean?***\\n\\nThank you for your valuable comment. According to the ToRA paper [1] and Figure 1 of our paper, ToRA generates a sequence consisting of a natural language guidance (r), a program (p), and an output (o) for a given question. This process is repeated until the model places its final answer within the \\u201c\\\\boxed{}\\u201d symbol. The resulting trajectory is denoted as r\\u2081p\\u2081o\\u2081...r\\u2099\\u208b\\u2081p\\u2099\\u208b\\u2081o\\u2099\\u208b\\u2081r\\u2099, where r\\u2099 contains the answer.\\n\\n**For the case of Round=2 in ToRA, it implies that after the first round of reasoning (r\\u2081p\\u2081o\\u2081), the answer generated by the model is insufficient.** As a result, the answer is not placed within the \\u201c\\\\boxed{}\\u201d symbol. 
Consequently, ToRA continues the reasoning in the second round (r\\u2082p\\u2082o\\u2082), and the final answer is placed within the \\u201c\\\\boxed{}\\u201d symbol in r\\u2083. For a detailed description of this generation process, please refer to Section 2.1 of the ToRA paper [1].\\n\\nOne thing to note is that although the design of ToRA may suggest the use of multi-round reasoning, it does not explicitly possess the capability for multi-round inference, as clearly demonstrated in Table 4. Instead, the reasoning process of ToRA typically involves a single round of reasoning, with additional iterations occurring only in rare cases when the initial response is insufficient.\\n\\n[1] Zhibin Gou, Zhihong Shao, Yeyun Gong, Yujiu Yang, Minlie Huang, Nan Duan, Weizhu Chen, et al. Tora: A tool-integrated reasoning agent for mathematical problem solving. In ICLR, 2024.\\n\\n---\\n\\n***W4: Line 142: \\u201crevolutionize the accuracy\\u201d - please remove revolutionize here.***\\n\\nThank you for your comment. We have revised the phrase in our updated version of the paper, removing the word \\\"revolutionize\\\" as suggested.\\n\\n---\\n\\n***W5: Line 215: what if the conclusion is never reached?***\\n\\nThank you for your feedback. As stated in Line 367 of our paper, we set a maximum limit of three self-correction stages if a conclusion is not reached. We have further clarified this point in Line 241 of our revised paper.\\n\\n---\\n\\n***W6: Grammar issues.***\\n\\nThank you for your careful reading. We have made the necessary revisions to address the grammar issues in the updated version of our paper.\\n\\n---\\n\\n***W7: Can you add more description to the appendix? Some parts of the appendix like Appendix B are quite unclear.***\\n\\nThank you for your valuable feedback. In Appendix B, we provide an example of how our CoSC model generates corresponding answers in response to a query. 
Specifically, Line 1147-1148 contain the question posed to the CoSC model, while Line 1149-1210 present the answer generated by the model. We have clarified these details in the revised version of our paper for better understanding.\"}", "{\"comment\": \"Thank you for the response.\\n\\nI would recommend the authors to include the key details on the prompting method in Table 2 if possible rather than in the Appendix. \\n\\n> we have conducted an additional experiment using the CoSC prompts for evaluation on ToRA-Code-7B. The results on the MATH and GSM8K datasets are shown in the table below.\\n\\nCan you include the details of the additional experiment and the exact prompt used in the Appendix?\"}", "{\"title\": \"A Kind Reminder of Further Discussion\", \"comment\": \"Dear Reviewer KVMJ,\\n\\nThank you once again for your valuable comments and suggestions, which have been extremely helpful to us. We have posted detailed responses to the concerns you raised and have included additional experimental results.\\n\\nWe fully understand that this is a particularly busy period, and **we deeply appreciate it if you could take some time to provide further feedback on whether our responses address your concerns.** If there are any additional comments, we will try our best to address them promptly.\\n\\nSincerely,\\\\\\nAuthors of Submission 7154\"}", "{\"comment\": \"Thank you for your response. I am updating my overall score to 6.\\n\\nI would encourage the authors to explore incorporating evaluation methods that incentivize \\\"self-correction\\\" through the use of a large number of in-context examples, rather than relying solely on fine-tuning. My understanding is that fine-tuning a model for self-correction might reduce its performance on other tasks. If this understanding is incorrect, I would appreciate clarification.\\nAdditionally, I would like to see a more detailed discussion on the future directions of this research. 
For instance, How does this research fit in the context of more recent models trained for reasoning such as GPT4-o1, DeepSeek R1, or Qwen QwQ?\"}", "{\"title\": \"Thank you for your feedback\", \"comment\": \"Thank you for your feedback! Below we respond to the remaining questions.\\n\\n---\\n\\n***Q1: A more detailed comparison table that clearly lists the concrete conceptual differences between CoSC and ToRA approaches (without any empirical differences).***\\n\\nThanks for your insightful comment. We provide a more detailed and clear comparison between ToRA and our CoSC as follows.\\n\\nThe main goal of ToRA is to integrate **external tools** into natural language reasoning to enhance its reasoning capabilities. ToRA generates a sequence consisting of an initial rationale, a program, an output, and a second rationale for a given question. Specifically, the initial rationale is used to analyze the problem before generating the program, while the second rationale is used to generate the final answer after executing the code, as described in Fig. 2 of the ToRA paper [1]. As shown in Table 4 of our paper, we acknowledge that ToRA can perform multi-round reasoning, but this happens only in extremely rare instances. Based on our analysis of all multi-round cases in ToRA, it is important to emphasize that ToRA only regenerates rationales in a new round when **external execution failures** occur, such as runtime errors. Consequently, ToRA is unable to generate a result to be placed within the `\\\\boxed{}` symbol, which serves as the stopping condition, and therefore proceeds to the next round, as stated in Lines 8-9 on Page 4 of the ToRA paper [1].\\n\\nThe main goal of our CoSC is to teach LLMs using **inherent ability** to achieve self-correction. Different from ToRA, our CoSC generates a sequence consisting of a program, an output, a detailed verification, and a conclusion in one round for a given question. 
CoSC proceeds to the next round when errors are detected through self-correction. Specifically, our CoSC designs **a detailed two-step self-correction format** in Lines 300-304 of our paper. It can teach LLMs how to perform self-correction by verifying the consistency among the question, the Python program, and the program outputs. Our self-correction allows the model to autonomously identify and correct errors, akin to the slow thinking process of humans, offering greater flexibility even without external feedback. \\n\\nIn short, CoSC leverages **inherent ability** for self-correction, reducing dependence on external tools, which enhances its autonomy and scalability. We highlight the differences between ToRA and our CoSC in the table below.\\n\\n| Difference | ToRA | CoSC (Ours) |\\n|-----------|-----------|-----------|\\n| **Principle** | integrating **external** tools into natural language reasoning | using **inherent ability** to achieve self-correction |\\n| **When to correct** | only **external** execution failures, i.e., runtime errors | errors checked by **inherent ability** |\\n\\n[1] Zhibin Gou, Zhihong Shao, Yeyun Gong, Yujiu Yang, Minlie Huang, Nan Duan, Weizhu Chen, et al. Tora: A tool-integrated reasoning agent for mathematical problem solving. In ICLR, 2024.\\n\\n---\\n\\n***Q2: Can you discuss the statistical significance of these 1-3% improvements (both over ToRA with your fine-tuned model and over proprietary models with your prompting-based approach).***\\n\\nThanks for your valuable feedback. Our experiments show that GPT-4o with our CoSC prompts performs slightly better than the original GPT-4o. On one hand, this demonstrates the broad effectiveness of the proposed method. On the other hand, the lack of a significant performance improvement reflects the limitations of the prompt-only version of our approach. 
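To make the significance question concrete, gaps of this size can be checked with a standard two-proportion z-test. The sketch below is our illustration, not an analysis from the paper; it plugs in the 7B MATH accuracies reported elsewhere in this thread (47.0% for CoSC vs. 44.6% for ToRA) and assumes both systems are scored on the full 5,000-problem MATH test set.

```python
import math

def two_prop_z(p1, p2, n1, n2):
    """z statistic for the difference between two independent proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Assumed setup: 47.0% vs 44.6% accuracy, 5,000 MATH test problems each.
z = two_prop_z(0.470, 0.446, 5000, 5000)
# z is roughly 2.4, above the 1.96 cutoff for significance at the 5% level.
```

Under these assumptions the 2.4-point MATH gap is unlikely to be sampling noise, while the same check on the smaller 1,319-problem GSM8K test set would be correspondingly weaker for gaps of similar size.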
This is one of the key reasons that we chose to focus on fine-tuning in this paper.\\n\\nIn contrast to the prompting approach, our fine-tuning method on the CodeLLaMA base model results in a significant performance improvement. Specifically, our fine-tuning-based CoSC boosts the average accuracy on both datasets (MATH and GSM8K) by 35.9%, 33.9%, and 29.3% over CodeLLaMA for the 7B, 13B, and 34B model sizes, respectively.\\n\\nWhen comparing our CoSC model to ToRA, our method consistently outperforms ToRA across all three model sizes (7B, 13B, and 34B) on both the MATH and GSM8K datasets. This demonstrates the generalizability and stability of our approach. Specifically, on the MATH dataset, the more challenging benchmark, CoSC achieves improvements of 3%, 2.2%, and 2.7% in accuracy for the 7B, 13B, and 34B model sizes, respectively. These gains are particularly significant, considering the difficulty of the MATH dataset and the fact that ToRA is already considered a top-performing method among open-source models.\"}", "{\"title\": \"Rebuttal by Authors [1/3]\", \"comment\": \"Thank you for your valuable review and suggestions. Below we respond to the comments in **Weaknesses (W)** and **Questions (Q)**.\\n\\n---\\n\\n***W1 & Q1: The comparison is not apples to apples. Did you try any normalizing experiment between CoSC and ToRA where both are trained on the same number of samples?***\\n\\nThank you for your insightful comment. Our method is not a continuation or follow-up to ToRA. Instead, it fundamentally differs from ToRA in both its underlying principles and functionality. ToRA integrates Chain-of-Thought (CoT) and Program-of-Thought (PoT) approaches to enhance the reasoning capabilities of LLMs in mathematical problem-solving. However, it is primarily restricted to single-round reasoning and lacks self-correction capabilities. 
In Table 4, we can see that the reasoning process in ToRA typically involves only a single round, with additional iterations occurring only in extremely rare cases. In contrast, our method incorporates self-correction as an intrinsic feature of LLMs and is explicitly designed to support multi-round reasoning. These advancements enable more robust and iterative problem-solving.\\n\\nTo address your concern about the fairness of the comparison, we conducted an experiment to directly compare our CoSC model with ToRA [1], the top-performing open-source method. For fairness, we used the same number of training samples (69K) as reported in the official ToRA paper, representing its best official result. The results, presented in the table below, clearly demonstrate that CoSC outperforms ToRA on both datasets.\\n\\nThis result underscores that the superior performance of CoSC is not merely due to a larger training dataset but is instead attributed to the effectiveness of its self-correction mechanism. This mechanism enables CoSC to iteratively refine its outputs, providing a significant advantage over ToRA's approach. We believe this focused evaluation highlights the robustness and efficiency of our method under comparable settings.\\n\\n| | MATH | GSM8K |\\n|-----------|------|------|\\n| ToRA-Code-7B [1] | 44.6 | 72.6 |\\n| CoSC-Code-7B (Ours) | 47.0 | 74.2 |\\n\\n[1] Zhibin Gou, Zhihong Shao, Yeyun Gong, Yujiu Yang, Minlie Huang, Nan Duan, Weizhu Chen, et al. Tora: A tool-integrated reasoning agent for mathematical problem solving. In ICLR, 2024.\"}", "{\"title\": \"Thank you for your feedback\", \"comment\": \"Thank you for your feedback! Below we respond to the remaining questions.\\n\\n---\\n\\n***Q1: Exact prompting method used for each evaluation.***\\n\\nWe appreciate the reviewer\\u2019s valuable feedback. 
Table 2 in the original paper already includes an identifier \"ZS\", which denotes whether the LLMs are evaluated in a zero-shot inference setting without demonstrations. To clarify further, we summarize below the prompting methods employed for each evaluation.

Proprietary models:
- GPT-4o [1]: Zero-shot CoT prompting for MATH; 8-shot CoT prompting for GSM8K.
- GPT-4V [2]: 4-shot prompting for MATH; 5-shot CoT prompting for GSM8K.
- GPT-4 and ChatGPT [3]: CoT prompting for MATH; 5-shot CoT prompting for GSM8K.
- Gemini family [4]: 4-shot Minerva prompting for MATH; 11-shot prompting for GSM8K.
- Claude family [5]: Zero-shot CoT prompting for both datasets.
- PaLM-2 [6]: 4-shot CoT prompting for MATH; 8-shot CoT prompting for GSM8K.

Open-source models:
- LLaMA-2 [7] and Platypus-2 [8]: CoT prompting for both datasets.
- CodeLLaMA [9]: Program-Aided Language (PAL) model prompting for both datasets.
- LLaMA-2 SFT [10], LLaMA-2 RFT [10], WizardMath [11], MetaMath [12], ToRA [13], and our CoSC method: Fully zero-shot, requiring no demonstrations.

We have included this information in Appendix C.2 of the revised paper for reference.

[1] https://openai.com/index/hello-gpt-4o/. \\
[2] https://openai.com/index/gpt-4v-system-card/. \\
[3] GPT-4 Technical Report.\\
[4] Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context.\\
[5] https://www.anthropic.com/news/claude-3-family. 
\\
[6] PaLM 2 Technical Report.\\
[7] Llama 2: Open Foundation and Fine-Tuned Chat Models.\\
[8] Platypus: Quick, cheap, and powerful refinement of llms.\\
[9] Code Llama: Open Foundation Models for Code.\\
[10] Scaling relationship on learning mathematical reasoning with large language models.\\
[11] Wizardmath: Empowering mathematical reasoning for large language models via reinforced evol-instruct.\\
[12] Metamath: Bootstrap your own mathematical questions for large language models.\\
[13] Tora: A tool-integrated reasoning agent for mathematical problem solving.

---

***Q2: Why is a restriction of inherently embedding self-correction justified if one could easily achieve the same level of accuracy with few in-context examples.***

Thanks for your insightful comments. As suggested by the reviewer, we have conducted an additional experiment using the CoSC prompts for evaluation on ToRA-Code-7B. The results on the MATH and GSM8K datasets are shown in the table below. 

| | MATH | GSM8K |
|-----------|------|------|
| ToRA-Code-7B | 44.6 | 72.6 |
| ToRA-Code-7B with CoSC prompt | 42.8 | 68.0 |
| CoSC-Code-7B (ours) | 47.6 | 74.7 |

As shown in the table, applying CoSC prompting to ToRA not only fails to outperform the original ToRA model but also results in a decline in performance. As clearly demonstrated in Table 4 of the original paper, ToRA inherently lacks the robust multi-round reasoning capabilities needed for effective self-correction. When CoSC prompting is applied, it introduces complexity that the model is ill-equipped to handle, leading to confusion and errors in the iterative process. Similarly, during the development of the CoSC algorithm, we also attempted to apply self-correction prompts to the base CodeLLaMA model. However, this approach did not yield good performance; its accuracy was significantly lower than the previous state-of-the-art results among open-source models. 
**This led us to adopt a fine-tuning strategy instead.**\\n\\nIn contrast, our CoSC model, which integrates self-correction as an inherent capability via fine-tuning, achieves superior results on both datasets. These findings suggest that for open-source LLMs, few-shot prompting alone is insufficient to effectively enable self-correction. The lack of significant gains from prompting further underscores the limitations of relying solely on in-context examples. **Therefore, we argue that embedding self-correction as an inherent capability through fine-tuning is essential for truly endowing LLMs with robust self-correction abilities.**\\n\\nMoreover, by integrating self-correction directly into the training process, our approach allows models to perform self-correction autonomously in a zero-shot setting during inference, eliminating the need for external feedback or few-shot demonstrations. **This self-correction mechanism enables even weaker LLMs to achieve significant improvements in mathematical reasoning, enhancements that are unattainable through prompting methods alone.** Additionally, our CoSC framework is **open-source**, making these advancements accessible to the broader research community. We believe this represents a pivotal step toward democratizing advanced reasoning capabilities and fostering further innovation.\"}", "{\"summary\": \"Authors are studying whether an LLM that performs poorly on mathematical reasoning tasks can achieve a much stronger performance through a combination of fine-tuning and structured sampling with code execution. 
They find that their method produces very strong performance compared to proprietary and OS models and they analyze how their different interventions contribute to this performance increase.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The authors demonstrate that an initial seed model trained on GPT-4 generated data (distillation) can then be used to generate its own training data through self-correction (expert iteration), which\", \"reduces dependency on expensive API calls to GPT-4 after the initial seeding phase\", \"shows that a model can effectively act as its own teacher/critic through the Chain of Self-Correction mechanism\", \"demonstrates that relatively small amounts of high-quality seed data can go some distance towards bootstrapping a more extensive self-improvement process\", \"This methodology is powerful and the authors execute well on it.\"], \"weaknesses\": \"The authors train a small model on the output of a large model and find that it improves performance a lot (compare \"model distillation\" - https://arxiv.org/abs/2305.02301) and then train the resulting model on its own output and find that it improves performance further (compare born again networks https://arxiv.org/abs/1805.04770 or more generally \"expert iteration\"). The authors should mention these highly related fields of research and contextualize their research better, especially since a big step of the performance increase comes from the distillation step.

The comparison landscape is incomplete and potentially misleading: notable omission of open-source models with strong mathematical performance (QWEN2, Orca-Math, ...) that match/surpass the achieved performance.

The authors should indicate exactly which GPT-4 version was used for training data generation (presumably gpt-4o-2024-08-06 or gpt-4o-mini-2024-07-18?). 
This is not only important for reproducibility, but also for understanding how much of the gap between the weak model and GPT-4 the models were able to close via distillation.

The phrase \"for some complex solutions we can only get < 1\" is slightly awkward.\", \"questions\": \"Do the experiments for Table 4 use a few-shot prompt for the non-CoSC models to tell them how to utilize multiple rounds of reasoning? How do the authors explain that the fraction of \"more than one round of reasoning\" is very close to 0% for those models?\", \"flag_for_ethics_review\": ['No ethics review needed.'], \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal by Authors [2/3]\", \"comment\": \"***W2: There is no ablation to clarify what capabilities are gained at a more granular level. CoSC has a generation and a verification step, unlike previous methods. The paper does not analyze these separately.***\\n\\nThank you for your valuable comment. **An ablative analysis addressing this point can be found in Appendix D.1 of the original paper.** As indicated in Table 6, the verification module achieves a classification accuracy of approximately 70%, demonstrating the model's ability to identify erroneous answers. On the other hand, the correction module effectively reduces the errors identified during the verification stage by about 25%, confirming its effectiveness. These results are based on experiments conducted with the CoSC-Code model at different sizes, specifically 7B and 13B, to ensure the reliability of our findings.\\n\\n---\\n\\n***W3: The main novelty of this paper is the verification/self-correction step. Without an extensive evaluation showing these capabilities have improved, it is hard to assess the effectiveness of the proposed method.***\\n\\nThank you for your valuable comment. 
In fact, we have conducted several experiments to assess the effectiveness of the verification/self-correction step of our method, and we reported **the results in Table 5 and Table 6** of the original paper. In Table 5, we observe that a single round of reasoning without self-correction leads to an approximately 7% decrease in accuracy. Furthermore, in Table 6, the verification module successfully identifies around 70% of erroneous answers, and the correction module reduces these errors by approximately 25%.\\n\\nWe appreciate any specific suggestions you may have for further evaluating the self-correction capabilities and are open to exploring additional validation steps.\"}", "{\"metareview\": \"This paper introduces Chain of Self-Correction (CoSC), a fine-tuning mechanism enabling LLMs to iteratively self-correct by generating programs, executing them, verifying outputs, and refining or finalizing answers. It uses a two-stage data synthesis approach: GPT-4 generates initial data to train a seed LLM, which then generates additional data to fine-tune itself. Experiments show that CoSC outperforms existing prompting and fine-tuning methods on MATH and GSM8K across proprietary and open-source LLMs.\\n\\nMost reviewers and the AC generally acknowledge that the proposed method is well-motivated, with comprehensive experiments to support its claims. However, concerns are raised regarding the novelty of the approach. The method integrates several well-established ideas, such as tool use, verification, and iterative reasoning, without introducing significant conceptual innovation. Similarly, the data synthesis strategy combines well-known techniques like data distillation and expert iteration. 
While the improved performance on math datasets is notable, the gains over prior work, such as ToRA, appear marginal.\\n\\nOverall, while the paper presents a promising approach and demonstrates incremental improvements, the lack of substantial novelty and limited performance gains lead me to consider this a borderline submission.\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal period, reviewers raised several concerns regarding certain claims and the experimental setup presented in the paper, most of which were adequately addressed by the authors. However, concerns regarding the limited novelty and marginal performance improvements remain.\"}", "{\"title\": \"A Kind Reminder of Further Discussion\", \"comment\": \"Dear Reviewers,\\n\\nWe would like to express our sincere gratitude for taking the time to review our paper. We greatly appreciate your insightful comments and are truly thankful for your efforts. We have worked diligently to address all of the concerns you raised, and we hope that our responses have sufficiently resolved any remaining issues. **If you have any further questions or require clarification, we would be more than happy to discuss them with you.** Your feedback is invaluable to us.\\n\\nSincerely,\\\\\\nAuthors of Submission 7154\"}", "{\"title\": \"Author Response [1/2]\", \"comment\": \"Thank you for your feedback! Below we respond to the follow-up questions.\\n\\n---\\n\\n***Q1: The notion of self-correction with respect to code and mathematical reasoning is becoming very prevalent in current research, so it is difficult to clearly separate the novelty of your technique.***\\n\\nThank you for your insightful comment! We would like to highlight that the key differences between our CoSC and CSV [1] as well as Self-Refine [2] are outlined as follows.\\n\\n- **Difference from CSV [1].** The main difference between CSV [1] and our CoSC lies in the approach to verification. 
**CSV relies on external tools, specifically code, for verification, whereas our CoSC approach performs verification entirely in natural language without using tools,** which relies on the model's inherent capabilities for this process. CSV [1] argues in Table 4 of their original paper that relying solely on natural language verification can compromise accuracy and negatively impact performance. In contrast, our CoSC approach challenges this view and successfully achieves self-correction using only natural language verification, facilitated by a fine-tuning-based method. Specifically, our CoSC verifies the consistency between the question, the Python program, and the program's outputs using only natural language, which has proven effective.\\n\\n- **Difference from Self-Refine [2].** The primary difference between Self-Refine [2] and our CoSC is that **Self-Refine is unable to effectively identify mathematical errors, whereas our CoSC can do so.** As stated in Paragraph 4 of Section 3.3 on Page 5 of the Self-Refine paper [2], the modest performance improvements in mathematical reasoning stem from the inability of Self-Refine to accurately identify errors. Self-Refine cannot be applied effectively to weaker models, as stated on Page 7 of their paper. Despite leveraging powerful LLMs such as GPT-3, ChatGPT, and GPT-4, Self-Refine only achieves **a minimal accuracy improvement of 0%-0.2%** on mathematical reasoning tasks, as shown in Table 1 of their paper. In contrast, our CoSC approach excels in error identification during mathematical reasoning. As demonstrated in Table 6 of our paper, the verification module in our CoSC achieves **a classification accuracy of approximately 70%, showcasing its ability to effectively identify erroneous answers.** Moreover, our CoSC is capable of significantly improving the performance of weaker LLMs compared to GPT-3, ChatGPT, or GPT-4 used in Self-Refine. 
Specifically, our fine-tuning-based CoSC yields significant improvements in mathematical reasoning, with average accuracy boosts of 35.9%, 33.9%, and 29.3% over CodeLLaMA on the MATH and GSM8K datasets for the 7B, 13B, and 34B model sizes, respectively.\\n\\nWe acknowledge the valuable contributions of CSV [1] and Self-Refine [2]. Both approaches offer important insights, and we will ensure they are properly referenced and discussed in the next version of our paper. While our CoSC approach presents key distinctions, we believe that ongoing dialogue with these works is crucial for advancing the understanding and capabilities of self-correction methods in LLMs.\\n\\n[1] Solving Challenging Math Word Problems Using GPT-4 Code Interpreter with Code-based Self-Verification. In ICLR, 2024.\\\\\\n[2] Self-Refine: Iterative Refinement with Self-Feedback. In NeurIPS, 2023.\"}", "{\"title\": \"Rebuttal by Authors [2/2]\", \"comment\": \"***W3: I'm not sure why it is stressed that CoSC can work with zero shot as opposed to proprietary models that require few shots.***\\n\\nThanks for your careful reading and bringing this to our attention. Our emphasis on CoSC\\u2019s zero-shot capability was intended to highlight the practical usability of the model in real-world applications, where providing task-specific examples might not always be feasible.\\n\\nWe agree that proprietary models, being more general-purpose and not fine-tuned for these specific tasks, require few-shot examples to adapt to this reasoning paradigm. This distinction underscores the trade-off between model generality and task-specific optimization.\\n\\n---\\n\\n***W4: It can be misleading to say it outperforms \\\"all the advanced proprietrary LLMs\\\" as that covers ALL proprietary LLMs including the multimodal ones. It should be \\\"all non-multi-modal proprietrary LLMs\\\". (BTW, I am not sure why multi-modality distinction actually matters here?)***\\n\\nThank you for your thoughtful feedback. 
**We have updated the wording in the revised paper from \"proprietary LLMs\" to \"non-multi-modal proprietary LLMs\".**\\n\\nRegarding the distinction, while multi-modal LLMs process questions through language, **their visual capabilities can enhance understanding of spatial or geometric concepts, which may improve mathematical reasoning in tasks like Geometry.** Despite this, our CoSC models, trained solely on language, outperform some multi-modal models in specific mathematical benchmarks, demonstrating the effectiveness of our chain of self-correction approach in improving mathematical reasoning.\\n\\n---\\n\\n***W5: Can you highlight in bold the best performing proprietary models in Table 2 (which I think is GPT4o and Claude 3.5 Sonnet).***\\n\\nThank you for your valuable suggestion. We have updated Table 2 to highlight the best-performing proprietary models in bold in our revised paper. For the MATH dataset, GPT-4o is the best-performing proprietary model, while for the GSM8K dataset, the top-performing proprietary model is Claude-3.5 Sonnet.\\n\\n---\\n\\n***W6: Typos.***\\n\\nThank you for your careful review. We have corrected these typos in the revised version of our paper.\"}", "{\"comment\": \"Thank you to the authors for their detailed response to my comments.\\n\\nI am still not clear on the significant differences in the technique with ToRA. This statement is confusing: \\n\\\"while the design of ToRA may imply multi-round reasoning, it does not explicitly support multi-round inference\\\". They have a notion of \\\"rationales\\\" that are meant to check the results in each round and can make corrections that can go into the next round to refine the solution further right? Also, the fact that it does empirically go into multi-round reasoning (even if it may be less often than CoSC) shows that by design it does support it right? Isn't their support for multiple rounds designed to allow self-correction? 
It may help to have a more detailed comparison table that clearly lists the concrete conceptual differences between CoSC and ToRA approaches (without any empirical differences). \\n\\nOn the significance of the evaluation results, thank you very much for performing the additional evaluation to show the value of self-correction based solely on prompting. It is good that these show a consistent increase above the baseline, even though again the increase seems a bit incremental. Can you discuss the statistical significance of these 1-3% improvements (both over ToRA with your fine-tuned model and over proprietary models with your prompting-based approach).\"}", "{\"title\": \"Thank you for your feedback\", \"comment\": \"Thank you for your feedback! Below we respond to the remaining questions.\\n\\n---\\n\\n***Q1: I would recommend the authors to include the key details on the prompting method in Table 2 if possible rather than in the Appendix.***\\n\\nThank you for your valuable suggestion. We have included the key details of the prompting methods for each evaluation **in Table 2 of the revised paper.**\\n\\n---\\n\\n***Q2: Can you include the details of the additional experiment and the exact prompt used in the Appendix?***\\n\\nThank you for your valuable suggestion. We have included the details of the additional experiment and the corresponding prompts **in Appendix D.4 of the revised paper.** The prompts used are the same as those used for CoSC seeding data generation in Appendix A.\"}", "{\"summary\": \"This paper proposes a technique for improving mathematical reasoning with large language models which the authors call the \\\"Chain of Self-Correction (CoSC)\\\". CoSC is designed to incorporate self-correction as an inherent ability in LLMs to iteratively refine their reasoning through multiple rounds of program generation, execution, and verification. 
The approach uses a two-phase fine-tuning process: first, training with a small dataset generated by GPT-4, and then self-enhancing with self-generated data, thereby reducing reliance on expensive models. On standard math reasoning datasets MATH and GSM8K, the method outperforms a wide range of open source techniques and models that are tested in this work, and also performs better than some proprietary LLMs.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Interesting and cost-efficient two-phase fine-tuning approach. Using a two-phase approach (starting with seeding data from GPT-4 and then using self-generated data) is an interesting and effective way to reduce reliance on expensive models. The authors also include an ablation study that shows the significant improvements with both initial GPT-4 and self-enhanced fine-tuning.
2. Evaluation is very broad and shows core value. The paper shows a very extensive evaluation with many proprietary and open source models as baselines, which is admirable. Their CoSC approach is also evaluated on three different sizes of LLMs up to 34B, where the approach consistently performs better than all other open source models on all sizes. Ablation studies also show the gains from multiple rounds of correction to show the value of the concept of chain of self correction, and further ablation studies are included in appendices. 
3. Presentation and details. Well-structured and well-written paper. The problem is motivated well, and the technique, implementation and evaluation are clearly explained with details, and prompts and more details are provided in extensive appendices.\", \"weaknesses\": \"1. Novelty is not clear. In particular, I am not sure how exactly the CoSC technique is different from ToRA (Gou et al., 2023b). ToRA seems to also use the interleaved approach of program-generation, execution and rationales in multiple rounds (similar to the chain of correction you have here). 
One thing that seems different with CoSC is that you have explicit verification and conclusion steps and verification checks separately for question and output correctness. But those seem a bit like relatively smaller optimizations to guide the model to do more explicit reasoning. You also classify ToRA as under prompting approaches as opposed to fine-tuning in the related work discussion, but from reading their work it seems they are also fine tuning the models? (and in fact using GPT4 for initial data generation and also using two steps in training their models). But I am not completely sure of this - please clarify what exactly the significant differences in technique there are with ToRA.\\n\\n2. Evaluation results seem not to show very drastic gains. The overall improvement over the best open source model (ToRA) is 1.8% - 2.6% for all sizes of models, which seems pretty incremental. On the other hand, the differences are pretty big with the best proprietary models (e.g. GPT4o has 76.6% on MATH vs CoSC's 53.5% and Claude 3.5 Sonnet has 96.4% on GSM8K vs CoSC's 82.3%), which indicates a much bigger scale of potential (attainable) improvement that could have been made. Also, I am curious if you can show the core value of the technique with a prompting-based approach over any proprietery model? You are already using prompts to train the model with GPT-4 - what if you used those same prompts to generate CoSC type trajectories on top of the best performing proprietary models like GPT4o and Sonnet - will it improve upon their results further? 
That will also help establish the core value of the technique apart from the fine-tuning gains you see over open source models.

Other comments:
- I'm not sure why it is stressed that CoSC can work with zero shot as opposed to proprietary models that require few shots - since it has been already fine tuned heavily on this task it is not surprising it does not need additional examples - whereas the proprietary models are very generic powerful LLMs so they need a few examples to orient them towards this specific form of reasoning tasks. So in a sense the heavy fine-tuning is already supposed to replace the few shot training right?
- This wording is pretty convoluted and confusing: \"The results reveal that our CoSC-Code-34B can outperform all the advanced proprietary LLMs, as well as most advanced proprietary multi-modal LLMs, such as GPT-4V, Gemini-1.0 Pro, and Gemini-1.0 Ultra\". It can be misleading to say it outperforms \"all the advanced proprietary LLMs\" as that covers ALL proprietary LLMs including the multimodal ones. It should be \"all non-multi-modal proprietary LLMs\". (BTW, I am not sure why multi-modality distinction actually matters here?)
- Can you highlight in bold the best performing proprietary models in Table 2 (which I think is GPT4o and Claude 3.5 Sonnet).

Typos:
- \"There are some recent studies (Chen et al., 2023b; Gou et al., 2023a; Lightman et al., 2023; Huang et al., 2024a; Chen et al., 2024b) attempt to enable large language models to perform self-correction by either prompting methods or or fine-tuning methods.\"
- \"rewrited\" should be \"rewrote\" or \"rewritten\" in multiple places
- \"to obtain the final our CoSC model.\"
- \"CoSC consisits of\"\", \"questions\": \"1. Please explain in detail what the important differences in technique are between your approach and ToRA.
2. 
Can you compare a prompt-only version of your approach on the best proprietary models and show improvements over them?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents a fine-tuning technique called Chain of Self-correction (CoSC) that embeds self-correction as an inherent ability of LLM. The method is specifically developed for LLM mathematics benchmarks - MATH and GSM8K. The paper proposes a specific format (program+output+verification+conclusion) for fine-tuning data. GPT4 is used first to get part of the fine-tuning data. This initial data is used to fine-tune a smaller model. After initial fine-tuning, the rest of the data is self-generated by a fine-tuned small model, and the model is further fine-tuned using the new data.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is well-written and easy to follow. I appreciate Figure 1, which summarizes prompts used in some recent related works.\", \"Compared to the prior work ToRA, the main difference is that the model is made to verify and confirm the answer. If the model deems that the output is incorrect then the model is made to repeat the whole procedure in multiple rounds. This improves the model\\u2019s accuracy on the benchmark by around 2%.\", \"Evaluation is quite comprehensive and considers many open-source and closed-source models\"], \"weaknesses\": \"I can\\u2019t understand some of the main contributions of CoSC without some further clarification:\\n\\nFrom what I can get from the evaluation in the paper - the second-best method in the evaluation is ToRA which converts a mathematical problem into Python code, evaluates it, and then uses CoT to infer the final solution. And the CoSC model improves on the ToRA model by around 2%. \\n\\nHow is ToRA prompted in this evaluation? 
If the ToRA models were not allowed to self-correct, is it possible to simply modify the prompt and use a similar prompt as CoSC (and possibly use some few-shot examples instead of fine-tuning) to allow it to self-correct? If not, then it would answer my main concern about whether collecting large amounts of data in two phases and fine-tuning in CoSC is actually necessary. \\n\\nThe algorithm 1 is referenced in line 310. I think this place is quite inappropriate for referring to the main algorithm. Consider moving the algorithm a little bit earlier in section 3.2.1.\\n\\nI don\\u2019t understand what Table 4 for the ToRA code means without much information about the evaluation setup. From Figure 1, it seems that ToRA does not perform any self-correction, so what does the 0.1% cases for ToRA in Round=2 mean?\", \"line_142\": \"\\u201crevolutionize the accuracy\\u201d - please remove revolutionize here\", \"line_215\": \"what if the conclusion is never reached?\", \"grammar_issues\": \"Line 342, 344, 351, 363\\n\\nCan you add more description to the appendix? Some parts of the appendix like Appendix B are quite unclear.\", \"questions\": \"As pointed out in weakness, my main question is about the prompt used for ToRA in Table 2.\", \"line_452\": \"\\u201cUnlike these proprietary models, our CoSC performs the inference in a zero-shot manner without demonstration\\u201d -\\nI don\\u2019t understand this. How do the proprietary models considered in the evaluation use demonstrations?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you for your feedback\", \"comment\": \"Thank you for your feedback! Below we respond to the remaining questions.\\n\\n---\\n\\n***Q1: I remain unsure about how fair the comparison to existing methods is.***\\n\\nThank you for your insightful comment. 
To address your concern about the fairness of the comparison, we conducted an experiment to directly compare our CoSC model with ToRA [1], the top-performing open-source method. For fairness, we used the same number of training samples (69K) as reported in the official ToRA paper, representing its best official result. The results, presented in the table below, clearly demonstrate that CoSC outperforms ToRA on both datasets.\\n\\nThis result underscores that the superior performance of CoSC is not merely due to a larger training dataset but is instead attributed to the effectiveness of its self-correction mechanism. This mechanism enables CoSC to iteratively refine its outputs, providing a significant advantage over ToRA's approach. We believe this focused evaluation highlights the robustness and efficiency of our method under comparable settings.\\n\\nIf you have additional suggestions or require further clarifications, we would be happy to address them. \\n\\n| | MATH | GSM8K |\\n|-----------|------|------|\\n| ToRA-Code-7B [1] | 44.6 | 72.6 |\\n| CoSC-Code-7B (Ours) | 47.0 | 74.2 |\\n\\n[1] Zhibin Gou, Zhihong Shao, Yeyun Gong, Yujiu Yang, Minlie Huang, Nan Duan, Weizhu Chen, et al. Tora: A tool-integrated reasoning agent for mathematical problem solving. In ICLR, 2024.\\n\\n---\\n\\n***Q2: I remain unsure about whether it can justify e.g. the following in the abstract: \\\"surpassing the performance of well-established models such as ChatGPT, GPT-4, and even multi-modal LLMs like GPT-4V, Gemini-1.0 Pro, and Gemini-1.0 Ultra.\\\"***\\n\\nThank you for your thoughtful feedback. As outlined in Lines 31-34 of our paper, our claim is specifically based on the performance of our CoSC-Code-34B model **on the MATH dataset,** where it achieves an accuracy of 53.5%. 
This surpasses the performance of several well-established models, including ChatGPT, GPT-4, and multi-modal models like GPT-4V, Gemini-1.0 Pro, and Gemini-1.0 Ultra, as reported in Table 2.\\n\\nTo ensure fairness and validity, the results were obtained using the same evaluation protocol across all models, focusing on the MATH dataset\\u2014a well-recognized benchmark for mathematical reasoning tasks. By emphasizing this context, we aim to clarify that our claim is dataset-specific and grounded in empirical evidence, rather than a general statement about model superiority across all tasks or domains.\\n\\nWe hope this clarification addresses your concerns and are happy to provide additional context or analysis if needed.\"}", "{\"title\": \"Thank you for your support and raising the score\", \"comment\": \"Thank you very much for your thoughtful feedback and for updating your overall score. We greatly appreciate your valuable suggestions and would like to address your points as follows.\\n\\n---\\n\\n***Q1: I would encourage the authors to explore incorporating evaluation methods that incentivize \\\"self-correction\\\" through the use of a large number of in-context examples, rather than relying solely on fine-tuning. My understanding is that fine-tuning a model for self-correction might reduce its performance on unrelated tasks. If this understanding is incorrect, I would appreciate clarification.***\\n\\nThank you for your insightful suggestion. We agree that prompting, using a large number of in-context examples, can be a promising approach to complement our fine-tuning-based CoSC method. Besides, we also agree that fine-tuning (SFT) a model for a specific task can lead to catastrophic forgetting, where the model\\u2019s performance on unrelated tasks may deteriorate. 
However, it is important to emphasize that the goal of our CoSC approach is to embed self-correction as an inherent capability in LLMs, **specifically to enhance mathematical reasoning.** For an LLM specialized in mathematical reasoning, we think that the performance improvement on mathematical tasks outweighs the potential drawbacks on unrelated tasks. Furthermore, we posit that, with careful dataset design, such as including datasets from other tasks, the impact of catastrophic forgetting can be mitigated, allowing the model to maintain performance across a broader range of tasks. However, we recognize the value of incorporating in-context examples to incentivize self-correction, in addition to fine-tuning. We view this as an important direction for future research and look forward to exploring it in our ongoing work.\\n\\n---\\n\\n***Q2: Additionally, I would like to see a more detailed discussion on the future directions of this research. For instance, how does this research fit in the context of more recent models trained for reasoning such as GPT4-o1, DeepSeek R1, or Qwen QwQ?***\\n\\nThank you for your insightful suggestion! We appreciate your interest in the future directions of our research. We plan to explore several key areas to further advance our work:\\n\\n- **Extending to Broader Domains.** Our CoSC approach introduces a structured, step-by-step framework that enables LLMs to identify errors, generate corrections, and iteratively refine their outputs with greater accuracy. While we have demonstrated its effectiveness in mathematical reasoning tasks, we believe that the self-correction mechanisms can be extended to a broader range of domains. In future work, we aim to adapt CoSC for tasks such as general reasoning, code generation, and multimodal applications. 
This expansion will allow us to explore new areas where self-correction can further enhance LLM performance.\\n- **Advancing Prompting Techniques.** In addition to fine-tuning, we will investigate methods to prompt LLMs for self-correction, leveraging their inherent abilities. This includes manually designing effective prompts, automatically selecting high-quality few-shot examples for various tasks, and crafting concise prompts to facilitate self-correction. We believe that refining in-context examples to prompt LLMs for self-correction will be a valuable avenue for future research. Furthermore, with the emergence of state-of-the-art models such as GPT-4-o1, DeepSeek R1, and Qwen QwQ, there is an increasing need to develop adaptive, context-aware prompting techniques, which can better align with the advanced capabilities of these models and enhance their self-correction potential.\\n- **Open-Sourcing Code, Data, and Models.** In our commitment to advancing research in the field, we will open-source the code, data, and models related to this work. Once the paper is accepted, all associated resources will be made publicly available. We believe that this openness will foster engagement from the broader research community and support the development of more flexible and versatile reasoning systems, allowing CoSC to be applied across a wider range of applications.\\n\\nWe will incorporate these discussions into the next version of our paper and continue to explore these exciting future directions in our ongoing research.\"}", "{\"title\": \"Follow-up questions\", \"comment\": \"> In the evaluation of proprietary models such as GPT-4 and Gemini, the results we report are based on their respective official technical reports [2,3].\\u00a0These reports indicate that the proprietary models often utilize few-shot prompting\\n\\nIn your Table can you add exact prompting method used for each evaluation? This is a key information for such an evaluation. 
\\n\\n\\n> In this evaluation, ToRA is directly applied to test tasks, without additional prompt modifications or few-shot examples. While it is theoretically possible to adapt ToRA by incorporating some prompts that simulates self-correction, this would fundamentally change its methodology and deviate from the original design.\\n\\nThis makes me question the motivation of whole setup of this paper. Why is a restriction of inherently embedding self-correction justified if one could easily achieve the same level of accuracy with few in-context examples.\"}", "{\"title\": \"Author Response [2/2]\", \"comment\": \"***Q2: I think to make this clasim it should be shown that prompting vs fine tuning on the same model would produce such a huge difference (e.g. the prompting approach on your open source models performs so much worse than your fine-tuned versions). However, the marginal improvements with respect to SoTA will still remain.***\\n\\nThank you for your valuable feedback. We conducted experiments comparing the performance of the prompt-based version and the fine-tuning version of our CoSC approach on the same base LLM, CodeLLaMA. The results are shown in the table below. We observed that the fine-tuning version of CoSC significantly outperforms the prompting version by 28.9% and 37.9% in accuracy on the MATH and GSM8K datasets, respectively. These results clearly demonstrate that fine-tuning is far more effective than prompting when embedding self-correction capabilities into LLMs.\\n\\nFor context, Self-Refine [2] achieves only a 0%-0.2% accuracy improvement on the GSM8K dataset over the baseline, as shown in Table 1 of their original paper. CSV [1], which uses natural language-based self-verification without relying on code, even results in a 0.4% accuracy decrease on the MATH dataset compared to the baseline, as shown in Table 4 of the original CSV paper [1]. It is worth noting that the MATH dataset is a more challenging benchmark compared to GSM8K. 
However, our CoSC approach can achieve inherent self-correction capabilities without relying on any external tools and improve accuracy by 3%, 2.2%, and 2.7% over ToRA for the 7B, 13B, and 34B model sizes on the MATH dataset, respectively. Compared to the minimal or negligible improvements seen in CSV [1] and Self-Refine [2], we argue that the performance gains achieved by our CoSC approach are particularly significant, especially considering the challenging nature of the MATH dataset.\\n\\n| | MATH | GSM8K |\\n|-------|------|------|\\n| CodeLLaMA-7B with CoSC prompt | 18.7 | 36.8 |\\n| CodeLLaMA-7B with CoSC finetuning (Ours) | 47.6 | 74.7 |\"}", "{\"title\": \"Rebuttal by Authors [1/2]\", \"comment\": \"Thank you for your valuable review and suggestions. Below we respond to the comments in **Weaknesses (W)** and **Questions (Q)**.\\n\\n---\\n\\n***W1 & Q1: Novelty is not clear. Please explain in detail what are the important differences in technique of your approach and ToRA.***\\n\\nThank you for your valuable comment. **We would like to clarify that ToRA is also fine-tuned on data generated by GPT-4,** rather than utilizing a prompting method, as mentioned in Line 86-87 and Line 437-438 of the original paper.\\n\\nAlthough both our method and ToRA are fine-tuning approaches, there are fundamental differences in their underlying principles and functionalities. ToRA integrates Chain-of-Thought (CoT) and Program-of-Thought (PoT) strategies to enhance reasoning capabilities in mathematical problem-solving. **However, it is primarily limited to single-round reasoning and lacks self-correction capabilities.** Notably, while the design of ToRA may imply multi-round reasoning, it does not explicitly support multi-round inference. 
The reasoning process in ToRA typically involves only a single round, with additional iterations occurring only in extremely rare cases, as demonstrated in Table 4, where the number of such iterations is nearly zero.\\n\\n**In contrast, our method incorporates self-correction as an intrinsic feature of LLMs and is explicitly designed to support multi-round reasoning.** This enables the model to iteratively refine its responses, correcting errors and improving accuracy over several rounds of reasoning. These advancements contribute to a more robust, iterative approach to problem-solving. Experimental results clearly show that our method outperforms ToRA by a clear margin across all three model sizes (7B, 13B, and 34B) on both the MATH and GSM8K mathematical benchmark datasets. This underscores the superior effectiveness of our approach in enhancing mathematical reasoning capabilities.\\n\\n---\\n\\n***W2 & Q2: Evaluation results seem not to show very drastic gains. Can you compare a prompt-only version of your approach on the best proprietary models and show improvements over them? Will it improve upon their results further if you use GPT-4o to generate training data?*** \\n\\nThank you for your insightful comments. Regarding the performance gap between our method and proprietary models such as GPT-4o and Claude 3.5 Sonnet, we acknowledge that there is notable room for improvement. **However, it is important to note that proprietary models operate on an entirely different scale in terms of training data volume, computational resources, and architectural design.** This makes direct comparisons inherently challenging. 
Our primary goal in this paper has been to advance the capabilities of open-source models using feasible computational resources, rather than directly competing with state-of-the-art proprietary systems.\\n\\nAs suggested by the reviewer, we conducted an additional experiment comparing a prompt-only version of our CoSC approach on GPT-4o (one of the top proprietary models) with the original GPT-4o on the MATH and GSM8K datasets. The table below shows our results alongside the official GPT-4o benchmarks on these datasets. **Our results surpass the official GPT-4o performance on both datasets, suggesting that self-correction-style reasoning is a broadly effective strategy for enhancing the reasoning capabilities of LLMs.**\\n\\n| | MATH | GSM8K |\\n|-----------|------|------|\\n| GPT-4o with CoSC prompt (Ours) | 77.0 | 97.1 |\\n| GPT-4o [1,2] | 76.6 | 96.1 |\\n\\n[1] https://openai.com/index/hello-gpt-4o/. \\\\\\n[2] https://ai.meta.com/blog/meta-llama-3-1/.\\n\\nFinally, while CoSC prompts can enhance proprietary models, this approach relies on closed systems **that are inaccessible to the broader research community.** In contrast, **our CoSC method is an open-source framework,** providing self-correction capabilities that are publicly available. We believe this represents a critical step toward democratizing advanced reasoning capabilities and fostering further innovation within the research community.\"}", "{\"summary\": \"The paper describes a method called Chain of Self-Correction (CoSC). The idea is to generate a large set of synthetic data that includes stages of self-correction in order to fine-tune a model to learn self-correction capabilities. Then, at inference time, they employ this verification step to enhance performance on mathematical reasoning.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper provides a framework for improving the self-correction abilities of models. 
They propose a framework to synthetically generate data for improving self correction.\", \"This synthetic data could be a useful resource for training models in the future.\", \"The performance seems good, and the method beats prior work fine-tuning on synthetically generated data.\"], \"weaknesses\": [\"Compared to ToRA, the method only has about 3% improvement. However, while ToRA was only trained on 16k annotated samples, this method was trained on 37k, so the comparison is not apples to apples.\", \"There is no ablation to clarify what capabilities are gained at a more granular level. CoSC has a generation and a verification step, unlike previous methods. The paper does not analyze these separately. For example, what is the precision/recall of the verification step on the programs (how often does the verification step accidentally classify a correct program as wrong, or vice versa)?\", \"The main novelty of this paper is the verification/self-correction step. Without an extensive evaluation showing these capabilities have improved, it is hard to assess the effectiveness of the proposed method.\"], \"questions\": [\"Did you try any normalizing experiment between CoSC and ToRA where both are trained on the same number of samples?\", \"Compared to ToRA and looking at Table 5, it seems like the accuracy when using just one round of reasoning is lower than ToRA. Does that mean the data generated is worse than ToRA's when it comes to reasoning without any self-correction steps?\", \"Because the models were trained on code, do they have an improved sense of code understanding?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your response, I am updating my score upwards. Similar to the other reviewers I remain unsure about how fair the comparison to existing methods is and whether it can justify e.g. 
the following in the abstract: \\\"surpassing the performance of well-established models such as ChatGPT, GPT-4, and even multi-modal LLMs like GPT-4V, Gemini-1.0 Pro, and Gemini-1.0 Ultra.\\\"\"}", "{\"title\": \"Rebuttal by Authors [3/3]\", \"comment\": \"***Q2: Compared to ToRA and looking at Table 5, it seems like the accuracy when using just one round of reasoning is lower than ToRA. Does that mean the data generated is worse than ToRA's when it comes to reasoning without any self-correction steps?***\\n\\nThank you for your insightful observation regarding Table 5. The lower accuracy of our method compared to ToRA in a single round of reasoning does not necessarily indicate that the data generated by our method is inferior. Instead, it reflects the design focus of our approach, which emphasizes multi-round reasoning and self-correction.\\n\\n**Our method is explicitly optimized to leverage iterative reasoning, where the self-correction mechanism refines intermediate outputs over multiple rounds.** As a result, while the single-round performance may appear lower, the overall performance in multi-round scenarios demonstrates significant improvements. This trade-off highlights the unique strengths of our method in handling complex reasoning tasks, which rely on iterative refinement rather than single-pass outputs.\\n\\nWe appreciate your point and believe it underscores the complementary nature of our approach to ToRA. As clearly shown in Table 4, the reasoning process of ToRA typically involves a single round of reasoning, with additional iterations occurring only in very rare cases. \\n\\n---\\n\\n***Q3: Because the models were trained on code, do they have an improved sense of code understanding?***\\n\\nThank you for your thoughtful feedback. To address your question, we evaluate ToRA-Code and our CoSC-Code (both 7B models) on the MBPP dataset [2], which measures code understanding and generation. 
Both models are trained on mathematical datasets without code instructions.\\n\\nThe results show that CoSC-Code achieves a pass@1 score of 37.6%, outperforming ToRA-Code, which achieves 30.8%. This indicates that **our CoSC approach enhances out-of-distribution generalization on code-related tasks compared to ToRA.** We attribute this improvement to the iterative reasoning and self-correction mechanisms embedded in our CoSC framework, which likely contribute to a stronger capacity for structured problem-solving and logical reasoning, even in domains like coding.\\n\\n[2] Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, Charles Sutton. Program Synthesis with Large Language Models. In 2021.\"}", "{\"comment\": \"We would like to thank the AC for organizing the review of our paper. We found all the reviews and comments extremely helpful. We have addressed all the concerns raised by the reviewers and have revised the paper accordingly to properly address any questions raised.\", \"we_would_also_like_to_thank_all_the_reviewers_for_taking_the_time_to_review_our_paper_and_provide_valuable_feedback\": [\"We would like to thank Reviewer ediq for recognizing the clarity of our paper, the usefulness of Figure 1 in summarizing related prompts, and for highlighting the improvement in accuracy enabled by our CoSC mechanism.\", \"We are grateful to Reviewer VQBN for appreciating our framework for enhancing self-correction abilities and for acknowledging the potential future value of this resource for training models.\", \"We sincerely thank Reviewer KVMJ for commending our two-phase fine-tuning approach, which leverages self-correction to reduce reliance on expensive API calls, and for highlighting the strength of our methodology in enabling models to act as their own teacher.\", \"We also thank Reviewer gRDy for recognizing the cost-efficiency and effectiveness of our two-phase 
fine-tuning approach, the breadth of our evaluation across various model sizes, and the clarity and structure of our paper.\", \"We address each reviewer\\u2019s comments individually below. We have worked hard to address your concerns and hope you find our responses informative. If you feel our comments have not sufficiently addressed your concerns, we would love to discuss them with you further. We have also uploaded a **Paper Revision** for your consideration.\"], \"title\": \"General Response\"}", "{\"title\": \"A Kind Reminder of Further Discussion\", \"comment\": \"Dear Reviewer VQBN,\\n\\nThank you once again for your valuable comments and suggestions, which have been extremely helpful to us. We have posted detailed responses to the concerns you raised and have included additional experimental results.\\n\\nWe fully understand that this is a particularly busy period, and **we deeply appreciate it if you could take some time to provide further feedback on whether our responses address your concerns.** If there are any additional comments, we will try our best to address them promptly.\\n\\nSincerely,\\\\\\nAuthors of Submission 7154\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}" ] }
8DBTq09LgN
Synthesizing Programmatic Reinforcement Learning Policies with Large Language Model Guided Search
[ "Max Liu", "Chan-Hung Yu", "Wei-Hsu Lee", "Cheng-Wei Hung", "Yen-Chun Chen", "Shao-Hua Sun" ]
Programmatic reinforcement learning (PRL) has been explored for representing policies through programs as a means to achieve interpretability and generalization. Despite promising outcomes, current state-of-the-art PRL methods are hindered by sample inefficiency, necessitating tens of millions of program-environment interactions. To tackle this challenge, we introduce a novel LLM-guided search framework (LLM-GS). Our key insight is to leverage the programming expertise and common sense reasoning of LLMs to enhance the efficiency of assumption-free, random-guessing search methods. We address the challenge of LLMs' inability to generate precise and grammatically correct programs in domain-specific languages (DSLs) by proposing a Pythonic-DSL strategy — an LLM is instructed to initially generate Python codes and then convert them into DSL programs. To further optimize the LLM-generated programs, we develop a search algorithm named Scheduled Hill Climbing, designed to efficiently explore the programmatic search space to improve the programs consistently. Experimental results in the Karel domain demonstrate our LLM-GS framework's superior effectiveness and efficiency. Extensive ablation studies further verify the critical role of our Pythonic-DSL strategy and Scheduled Hill Climbing algorithm. Moreover, we conduct experiments with two novel tasks, showing that LLM-GS enables users without programming skills and knowledge of the domain or DSL to describe the tasks in natural language to obtain performant programs.
[ "Large Language Model", "Programmatic Reinforcement Learning" ]
Accept (Poster)
https://openreview.net/pdf?id=8DBTq09LgN
https://openreview.net/forum?id=8DBTq09LgN
ICLR.cc/2025/Conference
2025
{ "note_id": [ "tOkH8v3SgB", "rfvXEJ9XgZ", "oVL64WjO9g", "n1eelu9uMi", "klq6zUdVGR", "h3gdivdvbU", "ga3v7iKqes", "eLTkU6mjNO", "dKT6XXU8Y4", "XIbu4XeTyQ", "OL3tDP6qhj", "MDdi8JXhan", "L2PlPJv3uK", "JNSW88NhKN", "Etd39FE2lC", "DhTa7TBxRg", "AMUP0cC1BJ", "9uFoYHSBEh", "7zygvo43W8", "5V58qJFKAp", "0mHEkObcWJ" ], "note_type": [ "decision", "official_comment", "official_review", "official_comment", "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1737523981781, 1732699009319, 1730652317260, 1732699209228, 1734748891881, 1730584218165, 1733219241478, 1733152064667, 1733103688175, 1732699033555, 1732699293420, 1733191959995, 1732870733932, 1733236957781, 1732698947868, 1733192010010, 1732871002483, 1733236852622, 1732699389630, 1730877198528, 1732870407486 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission9415/Authors" ], [ "ICLR.cc/2025/Conference/Submission9415/Reviewer_C8r6" ], [ "ICLR.cc/2025/Conference/Submission9415/Authors" ], [ "ICLR.cc/2025/Conference/Submission9415/Area_Chair_F9VF" ], [ "ICLR.cc/2025/Conference/Submission9415/Reviewer_CapU" ], [ "ICLR.cc/2025/Conference/Submission9415/Reviewer_C8r6" ], [ "ICLR.cc/2025/Conference/Submission9415/Authors" ], [ "ICLR.cc/2025/Conference/Submission9415/Reviewer_CapU" ], [ "ICLR.cc/2025/Conference/Submission9415/Authors" ], [ "ICLR.cc/2025/Conference/Submission9415/Authors" ], [ "ICLR.cc/2025/Conference/Submission9415/Authors" ], [ "ICLR.cc/2025/Conference/Submission9415/Authors" ], [ "ICLR.cc/2025/Conference/Submission9415/Authors" ], [ "ICLR.cc/2025/Conference/Submission9415/Authors" ], [ "ICLR.cc/2025/Conference/Submission9415/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission9415/Authors" ], [ "ICLR.cc/2025/Conference/Submission9415/Authors" ], [ "ICLR.cc/2025/Conference/Submission9415/Authors" ], [ "ICLR.cc/2025/Conference/Submission9415/Reviewer_st6i" ], [ "ICLR.cc/2025/Conference/Submission9415/Authors" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Response to Reviewer st6i (Part 2/3)\", \"comment\": \"> Related to (2), HC learning curve is steeper than yours. Can you explain why?\\n\\nOur motivation is to improve the **sample efficiency** of programmatic RL, i.e., maximizing the expected return using as few program evaluations as possible, which aligns with the standard RL objective that aims to minimize the number of environment interactions. That said, we analyze the plots from the following aspects:\\n- **Reaching a target average return with varying numbers of program evaluations** (drawing a horizontal line in the plots): Table 2 below shows the number of program evaluations required for each method to reach an average return of 0.5. The result shows that LLM-GS requires fewer program evaluations to achieve an average return of 0.5 in all the tasks compared to HC.\\n- **Average return given the same number of program executions** (drawing a vertical line in the plots): Table 1 below shows the average return when each method uses $k \\\\in \\\\{100, 1000\\\\}$ program evaluations. LLM-GS outperforms HC in all the tasks using 100 programs. When using 1k programs, LLM-GS achieves a better average return in six Karel tasks compared to HC, and the two methods perform comparably in the rest of the four tasks. 
That said, LLM-GS is more sample-efficient than HC after 1,000 program evaluations have elapsed.\\n\\n| Table 2 | CleanHouse | DoorKey | FourCorners | Harvester | Maze | OneStroke | Seeder | Snake | StairClimber | TopOff |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| LLM-GS | **189** | **6** | **5** | **3** | **1** | **2** | **3** | **73521** | **3** | **7** |\\n| HC | 261 | 10454 | 1443 | 136 | 44 | 34 | 976 | 132989 | 323 | 1365 |\\n\\n| Table 1 | CleanHouse | DoorKey | FourCorners | Harvester | Maze | OneStroke | Seeder | Snake | StairClimber | TopOff |\\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\\n| LLM-GS (100 program evaluations) | **0.40** | **0.69** | **1.00** | **0.94** | **1.00** | **0.82** | **0.92** | **0.09** | **1.00** | **0.99** |\\n| HC (100 program evaluations) | 0.25 | 0.17 | 0.09 | 0.41 | 0.74 | 0.73 | 0.15 | 0.05 | 0.25 | 0.04 |\\n| LLM-GS (1000 program evaluations) | **0.96** | **0.86** | **1.00** | **0.97** | **1.00** | **0.90** | **0.97** | 0.13 | **1.00** | **1.00** |\\n| HC (1000 program evaluations) | 0.90 | 0.40 | 0.35 | 0.88 | 1.00 | **0.90** | 0.51 | **0.14** | **1.00** | 0.43 |\\n\\nFinally, we would like to clarify that steepness is not often considered an important factor when analyzing sample-efficiency plots. We believe that **HC curves look steeper because their initial performance (random initializations) is much worse than that of LLM-GS with LLM-initialized programs**. As a result, there is more room for improvement for HC. Therefore, we believe **the steeper curves of HC should not be considered an advantage**.\\n\\n> While the complexity focus is mainly on number of programs, the cost of running a LLM is ignored. 
Can you add the cost of initialization to your plots to understand if spending more \\u201cflops\\u201d is in general better?\\n\\nThe standard reinforcement learning (RL) problem setting often considers interacting with environments to be expensive, dangerous (e.g., robotics), or even impossible (e.g., offline RL). Therefore, the development of RL algorithms focuses on minimizing the number of interactions required to get reasonable performance. In the case of programmatic RL, we consider the number of programs executed to measure the sample efficiency.\\n\\nAs requested by the reviewer, we have conducted additional investigations on the cost of LLMs. Since we use a proprietary language model (GPT-4) in our experiments, we cannot calculate the exact FLOPs used. Hence, **we estimate the cost of running the LLM in the following two aspects**:\\n- **Time**: We provide a plot in Appendix I (Figure 18) illustrating the wall-clock time evaluation for the DoorKey task. The result shows that although the LLM takes some time to generate the initial programs, it quickly surpasses other baselines within a few seconds, highlighting the efficiency of our proposed method in terms of wall-clock time elapsed.\\n- **Money**: We use 48 API calls to initialize the search population of 32 programs, which costs approximately USD 0.50 when using GPT-4 (gpt-4-turbo-2024-04-09).\"}", "{\"summary\": \"In this paper, the authors propose a framework for PRL (Programmatic Reinforcement Learning) based on a new search algorithm in program space (SHC), whose initial population of programs is generated from a capable LLM rather than obtained via random generation.\\nThe authors' main contributions are a prompting strategy which correctly outlines each PRL task to the LLM, coupled with the idea of having the LLM generate both Python code and the DSL in which the solution to a given PRL task is supposed to be expressed; this allows the authors to get the LLM to generate programs with the correct DSL syntax, 
despite the DSL presumably not having been experienced by the LLM during training.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper's idea is quite simple but effective, and one can see that it significantly improves over SOTA methods when it comes to sample efficiency/number of program evaluations during search, and for some tasks, also in terms of final average return. The paper is written clearly and features comprehensive ablation studies.\", \"weaknesses\": \"It is not quite clear which one of the two contributions (SHC and its initialisation with LLM-generated programs) is the one which contributes the most to the final result. From figure 4, it does not appear that the SHC algorithm is really that much more efficient than HC. Plus, the role of the Python -> DSL parser is not quite clear.\", \"questions\": [\"I have two main points relating to the weaknesses mentioned above, which I would like to see addressed before possibly raising my score:\", \"The authors should run two more ablation studies of their LLM-GS method; one in which they remove the LLM-generated initialisation for SHC, and one in which they run SHC with the initialisation strategy used by the SOTA HC. This would help address the weakness mentioned above.\", \"What is exactly the role of the parser? Does the LLM generate both Python and DSL code natively, or does it only generate Python code to be processed by the parser? Did the authors implement the parser themselves, or did they use an off-the-shelf parser?\"], \"more_but_less_pressing_questions\": [\"The advantage of SHC in terms of # of program evaluations is manifest. However, how large is its advantage in terms of simple walltime/wait time?\", \"HC (and therefore SHC as well) is basically a greedy search algorithm. Isn't there a strong possibility of it getting stuck in local maxima? 
To me this seems to be a bigger problem of HC compared to its sample inefficiency.\", \"Most of the discussion in section 3 (aside from the outline of the Karel domain) feels more like a Related Work section. I suggest that the authors move it to Related Work or to the Appendix.\", \"Which initialisation do the authors use for the ablation study in figure 6? Do they use LLM-generated programs or randomly generated ones?\", \"How is the experiment in section 5.5 conducted? Is the LLM asked to revise programs, and these are then used as initialisation for SHC? Or is SHC completely absent from the experiment?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer C8r6\", \"comment\": \"We sincerely thank the reviewer for the thorough and constructive comments. Please find the response to your questions below.\\n\\n> The authors should run two more ablation studies of their LLM-GS method; one in which they remove the LLM-generated initialisation for SHC, and one in which they run SHC with the initialisation strategy used by the SOTA HC. This would help address the weakness mentioned above.\\n\\nWe completely agree with the reviewer that comparing different initializations to the SHC search is essential. Hence, we have shown this ablation study in **Figure 7 in the original paper**, where we compared two initialization strategies: (1) **LLM initialization**, which uses LLM-generated programs as proposed in this work, and (2) **random initialization**, used by SOTA HC. The result shows that our proposed LLM initialization significantly outperforms random initialization in terms of sample efficiency.\\n\\n> What is exactly the role of the parser? Does the LLM generate both Python and DSL code natively, or does it only generate Python code to be processed by the parser? 
Did the authors implement the parser themselves, or did they use an off-the-shelf parser?\\n\\nTo generate a program, we prompt the LLM to write the program in Python and then translate it into DSL, all in one response. We have included a sample output from the LLM in Appendix D.3.\\n\\nTo translate an LLM-generated program to an abstract syntax tree (AST), we use an off-the-shelf Karel parser which can convert a string in DSL into an AST. We implemented a translator to convert a Python program into Karel DSL and fix some minor errors made by the LLM (rules listed in Table 6). We have updated Appendix B to make it clear.\\n\\n> The advantage of SHC in terms of # of program evaluations is manifest. However, how large is its advantage in terms of simple walltime/wait time?\\n\\nAs suggested by the reviewer, we have additionally conducted **a wall time evaluation comparing different methods (HC, CEBS, CEM, and LLM-GS)** in DoorKey. The result is presented in Appendix I (Figure 18), showing that our proposed **LLM-GS framework is significantly more runtime efficient compared to all the baselines**, surpassing the baselines in less than 50 seconds.\\n\\n> HC (and therefore SHC as well) is basically a greedy search algorithm. Isn't there a strong possibility of it getting stuck in local maxima? To me this seems to be a bigger problem of HC compared to its sample inefficiency.\\n\\nAs pointed out by the reviewer, greedy search algorithms can indeed get stuck in local maxima. Thus, these algorithms often have a restart strategy - to search programs from another initial point. In the cases of HC and SHC, after searching the neighborhood programs k times without improvement, the algorithm switches to a new initial point and restarts the search process. In HC, the neighborhood size k is fixed, whereas in SHC, it can grow over time. 
**Hence the main contribution of our work is to utilize LLM-generated programs as restart initial points, given the LLM\\u2019s common sense and programming skills, to potentially escape local maxima**.\\n\\n> Most of the discussion in section 3 (aside from the outline of the Karel domain) feels more like a Related Work section. I suggest that the authors move it to Related Work or to the Appendix.\\n\\nAs suggested by the reviewer, we have moved the discussion of CEM and CEBS to Appendix F, while keeping the search space and HC in Section 3 since they are essential for understanding our proposed framework.\\n\\n> Which initialisation do the authors use for the ablation study in figure 6? Do they use LLM-generated programs or randomly generated ones?\\n\\nThe ablation study in Figure 6 uses LLM-generated programs. To make this clear, we have revised the caption of Figure 6 by adding \\u201cusing LLM-initialized programs.\\u201d\\n\\n> How is the experiment in section 5.5 conducted? Is the LLM asked to revise programs, and these are then used as initialisation for SHC? Or is SHC completely absent from the experiment?\\n\\nWe conduct this experiment to test whether LLMs can progressively revise and improve their generated programs, **so SHC and any other search methods are completely absent from this experiment**.\\n\\nWe ask the LLM to repeatedly generate new programs given feedback from previously produced ones and observe that the performance saturates within a few rounds. This indicates the necessity of combining the advantages of the LLM and the search method to achieve better performance. We have clarified this in Section 5.6 in the revised paper by adding \\u201cLLM itself without the help of search algorithms.\\u201d\"}", "{\"metareview\": \"The paper presents a mechanism to use LLMs to find policies in a Domain Specific Language (DSL) for the Programmatic Reinforcement Learning paradigm. 
The technique builds on the Hill Climbing (HC) algorithm that effectively searches for programmatic policies directly in the program space. The proposed technique provides a mechanism to initialize the HC algorithm with promising candidates instead of with random programs. The experiments show the efficacy of the proposed technique in a toy RL navigation benchmark.\", \"additional_comments_on_reviewer_discussion\": \"The authors provided detailed responses to the concerns raised by the reviewers. There was a consensus among the reviewers that the paper should be accepted.\"}", "{\"summary\": \"Programmatic policies for reinforcement learning have gained popularity in recent years, with state-of-the-art methods often employing search techniques that utilize various heuristics and initialization strategies. Given that these search methods are typically localized and proximity-based, the choice of initialization point becomes crucial. The authors present a novel LLM-based search algorithm, LLM-GS, which uses large language models (LLMs) for initializing the search, capitalizing on their coding capabilities. They acknowledge that LLMs often struggle to generate effective initializations within the domain-specific language (DSL) of the problem and introduce a Pythonic-DSL approach to address this challenge. Additionally, they propose a new search algorithm called scheduled hill climbing, which optimizes the search budget by starting with a smaller allocation and gradually increasing it as the search progresses. 
Through experiments in the Karel the Robot domain, they demonstrate the superiority of their algorithm compared to several search baselines and conduct thorough ablation studies to assess the impact of each contribution on performance.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The authors have identified a significant limitation in existing search algorithms for programmatic reinforcement learning: their localized nature and the crucial role of initialization. They effectively combine this insight with the proven capabilities of LLMs in coding tasks to develop a straightforward yet powerful algorithm, LLM-GS. Furthermore, they propose a search strategy that aligns with their initialization approach, recognizing that they are likely starting from a strong position and can save their search budget for later stages.\", \"They demonstrate the superior performance of LLM-GS compared to several search baselines, instilling confidence in its effectiveness, particularly in the Karel domain.\", \"The ablation studies are well-structured and thorough, effectively investigating each component to clarify the sources of performance improvements.\", \"The authors address potential concerns regarding LLM memorization and data leakage, which adds to the credibility of their results.\", \"I found the LLM Revision experiment in Section 5.5 particularly fascinating, as it explores whether LLMs can be leveraged more effectively in the search process to refine the initially proposed program. This experiment highlights that existing simple techniques may not perform as well as anticipated.\"], \"weaknesses\": [\"I have two main concerns with the paper that prevent me from giving it a higher score:\", \"The authors have conducted experiments solely in the Karel the Robot domain. Testing the LLM-GS algorithm in another domain, such as MicroRTS, would enhance confidence in its effectiveness across different problem domains and DSLs. 
**Addressing this issue could lead me to reconsider my score.**\", \"The ablation study comparing search algorithms is quite limited, especially given the close results shown in Figure 6. It appears that the LLM provides such a strong starting point that the specific search algorithm has minimal impact. All search algorithms perform reasonably well on CleanHouse, and the results are very similar among the top 2-3 algorithms on DoorKey. Including results from additional domains would help establish the significance of the search algorithm. Furthermore, the scheduled hill climbing algorithm involves several design choices, such as the sinusoidal schedule and logarithmic interpolation; conducting separate ablation studies on the importance of each of these choices could be beneficial.\"], \"questions\": [\"Do the equations for the scheduled hill climbing algorithm derive from prior work, or is there additional intuition apart from the decision to increase the search budget as the algorithm progresses? There are several key choices here\\u2014such as the logarithmic interpolation and sinusoidal schedule\\u2014that don\\u2019t seem entirely intuitive and are introduced without sufficient justification. Could you provide more details on these decisions?\", \"Are the program embeddings used for latent space search generated by the LLM?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I thank the authors for their diligent and detailed reply to my comments. 
Since all of my concerns have been addressed, I will raise my score to a straight accept.\"}", "{\"comment\": \"We sincerely thank the reviewer for acknowledging our rebuttal and for providing insightful feedback, which has significantly strengthened our submission.\"}", "{\"title\": \"Thank you for the responses (score increased)\", \"comment\": \"Thank you for your thorough responses to my questions regarding the scheduled HC algorithm and the ablation studies, as well as for running experiments in an additional domain. I believe incorporating MicroRTS before the camera-ready version would significantly strengthen your paper, but I am already convinced by the results and methodology. As such, I\\u2019m happy to increase my score now.\"}", "{\"title\": \"Response to Reviewer st6i (Part 3/3)\", \"comment\": \"> What is task variance? Is it random initialization in environment state or policy pool?\\n\\n**Each task's variance arises from the randomness in the environment's initial states**, including the agent's position and direction and the placement of markers and walls. We thank the reviewer for raising this question, and we have updated Section 5.1 (the Metric paragraph) to make this clear.\\n\\n> What happens if you initialize other heuristics with LLM-based programs? Do they compare favorably compared to your Scheduling-HC?\\n\\nWe completely agree with the reviewer that comparing different search methods with the same set of LLM-initialized programs is essential. Therefore, **Section 5.3 (Figure 6) in the original paper exactly presents this comparison, where we initialize the CEM, CEBS, HC, and Scheduled HC heuristics with LLM-generated programs** in two tasks: DoorKey and CleanHouse. The result shows that our proposed scheduled HC is among the best-performing heuristics in both tasks. 
We have revised the paper to clarify this in the caption of Figure 6 by adding \\u201cusing LLM-initialized programs.\\u201d\\n\\n> In line 425, it should be 500k.\\n\\nWe thank the reviewer for pointing this out, and we have fixed this in our revised paper.\\n\\n**References**\\n\\n[1] Dweep Trivedi, Jesse Zhang, Shao-Hua Sun, and Joseph J Lim. Learning to synthesize programs as interpretable and generalizable policies. In Neural Information Processing Systems, 2021.\\n\\n[2] Guan-Ting Liu, En-Pei Hu, Pu-Jen Cheng, Hung-Yi Lee, and Shao-Hua Sun. Hierarchical programmatic reinforcement learning via learning to compose programs. In International Conference on Machine Learning, 2023.\\n\\n[3] Tales Henrique Carvalho, Kenneth Tjhia, and Levi Lelis. Reclaiming the source of programmatic policies: Programmatic versus latent spaces. In International Conference on Learning Representations, 2024.\\n\\n[4] Maxime Chevalier-Boisvert, Bolun Dai, Mark Towers, Rodrigo Perez-Vicente, Lucas Willems, Salem Lahlou, Suman Pal, Pablo Samuel Castro, and Jordan Terry. Minigrid & miniworld: Modular & customizable reinforcement learning environments for goal-oriented tasks. In Neural Information Processing Systems, 2023\\n\\n[5] Student. The probable error of a mean. Biometrika, 6(1):1\\u201325, 03 1908. ISSN 0006-3444\"}", "{\"title\": \"Response to Reviewer CapU (Part 1/2)\", \"comment\": \"We sincerely thank the reviewer for the thorough and constructive comments. Please find the response to your questions below.\\n\\n> The authors have conducted experiments solely in the Karel the Robot domain. Testing the LLM-GS algorithm in another domain, such as MicroRTS, would enhance confidence in its effectiveness across different problem domains and DSLs. 
Addressing this issue could lead me to reconsider my score.\\n\\nDue to the limited time we have during the rebuttal period, we could not conduct experiments in the MicroRTS domain: running each method against CoacAI on map basesWorkers8x8A, one of the simplest tasks in MicroRTS, could take up to 16 hours per seed for search-based methods like HC, amounting to more than 10 days in total, and it takes even longer for RL-based methods like HPRL.\\n\\nTherefore, to include an additional domain, we adopt **the Minigrid domain** [1]. We designed the DSL for Minigrid to be similar to the MicroRTS DSL used by Moraes et al. [2]. Specifically, both the perception functions in our Minigrid DSL and the MicroRTS DSL are parameterized, i.e., require an input parameter such as an object, since there are multiple object types in the two domains, unlike the case in Karel. \\n\\nWe present the results of three Minigrid tasks in Section 5.5 in the revised paper. As shown in Figure 9, **LLM-GS demonstrates better sample efficiency compared to HC, the best-performing baseline in the Karel domain**. For further details of the Minigrid experiments, please refer to Section 5.5 and Appendix L.\\n\\n> The ablation study comparing search algorithms is quite limited, especially given the close results shown in Figure 6. It appears that the LLM provides such a strong starting point that the specific search algorithm has minimal impact. All search algorithms perform reasonably well on CleanHouse, and the results are very similar among the top 2-3 algorithms on DoorKey. Including results from additional domains would help establish the significance of the search algorithm. \\n\\nIn Figure 6, it is evident that HC-32 and HC-2048 excel only in specific tasks: HC-32 performs exceptionally well in CleanHouse, while HC-2048 stands out in DoorKey. In contrast, our proposed scheduled HC ranks among the best-performing methods in both DoorKey and CleanHouse, showcasing its robustness and efficiency. 
**That said, given a novel task of interest, we can confidently run our method LLM-GS instead of trying HC with different hyperparameters**.\\n\\n> Furthermore, the scheduled hill climbing algorithm involves several design choices, such as the sinusoidal schedule and logarithmic interpolation; conducting separate ablation studies on the importance of each of these choices could be beneficial.\\n\\nWe thank the reviewer for the suggestion. There are three components in the proposed scheduled hill climbing (SHC): logarithmic interpolation, sinusoidal schedule, and logarithmic ratio. As suggested by the reviewer, **we have conducted additional experiments ablating each component**. We substitute each component with its linear counterpart and evaluate all 8 variants in DoorKey and CleanHouse. The results shown in Appendix E in the revised paper indicate no significant differences among all the SHC variants, showcasing the robustness of the design.\\n\\n> Do the equations for the scheduled hill climbing algorithm derive from prior work, or is there additional intuition apart from the decision to increase the search budget as the algorithm progresses? There are several key choices here\\u2014such as the logarithmic interpolation and sinusoidal schedule\\u2014that don\\u2019t seem entirely intuitive and are introduced without sufficient justification. Could you provide more details on these decisions?\\n\\nOur intuition behind the logarithmic interpolation and the logarithmic ratio is to allocate the most appropriate neighborhood size, k, to tasks of varying difficulties. The logarithm of the number of executed programs provides an indication of task difficulty, with the intuition that the optimal k should increase exponentially according to the structure of the AST. For the sinusoidal schedule, we prioritize maintaining a stable neighborhood size k during both the early and final stages of the training process. 
We thank the reviewer for raising this question, and we have revised the paper to include these design intuitions in Appendix E.\\n\\n> Are the program embeddings used for latent space search generated by the LLM?\\n\\nNo. We do not use LLMs to generate the program embeddings; instead, **the program embeddings are derived from Trivedi et al. [3]**. Their method trains a Variational Autoencoder (VAE) to produce a program embedding space from programs and their execution traces.\"}", "{\"title\": \"Reminder: The reviewer-author discussion period ends in 10 hours\", \"comment\": \"The deadline for reviewers to post a message to the authors is in 10 hours (Dec 2 23:59 AoE). We look forward to hearing from the reviewer.\"}", "{\"title\": \"Reminder: The reviewer-author discussion period ends in four days\", \"comment\": [\"We would like to express our sincere gratitude to the reviewer for the thorough and constructive feedback. We are confident that our responses adequately address the concerns and questions raised by the reviewer, including the following points:\", \"**A clarification of Figure 7 caption**\", \"**An explanation of the parser**: Appendix D.3\", \"**An evaluation of wall time**: Appendix I (Figure 18)\", \"**A discussion of escaping local maxima with LLM-initialized restart programs**\", \"**A rearrangement of Section 3**: the discussion of CEM and CEBS has been moved to Appendix F\", \"**A clarification of Figure 6 caption**\", \"**A clarification of LLM revision**: Section 5.6\", \"Please kindly let us know if the reviewer has any additional concerns or if further experimental results are required. We are fully committed to resolving any potential issues, should time permit. 
Again, we thank the reviewer for the detailed review and the time the reviewer put into helping us improve our submission.\"]}", "{\"comment\": \"We are very grateful to the reviewer for acknowledging our rebuttal and for the effort the reviewer put into helping us improve our submission.\"}", "{\"title\": \"Response to Reviewer st6i (Part 1/3)\", \"comment\": \"We sincerely thank the reviewer for the thorough and constructive comments. Please find the response to your questions below.\\n\\n> The proposed idea is only tested on a toy RL benchmark which only adds marginal practical value. Are there no other environments where programs can be actions and your idea can be applied?\\n\\nWe follow existing programmatic RL works [1-3] and use the Karel domain as our testbed for evaluation. The Karel tasks characterize diverse aspects that are essential in solving various RL problems, as discussed in [1-2], including:\\n- **Exploration**: to navigate the agent through maps to novel states that may be far from initial locations, as in Maze and CleanHouse.\\n- **Complexity**: to perform specific actions at specific locations, i.e., put or pick markers on marker grids, as in TopOff and Harvester.\\n- **Multi-stage exploration**: to exhibit specific behaviors depending on the current stages, as in DoorKey.\\n- **Additional Constraints**: to perform specific actions under restrictions, e.g., traverse the environment without revisiting the same position, as in OneStroke, and place exactly one marker on all grids, as in Seeder. \\n\\nAs suggested by the reviewer, **we have additionally explored adapting our framework to a new domain, the Minigrid domain [4], to showcase the adaptability of our proposed framework**, which simply requires defining sets of perceptions and actions for the domain of interest. We utilized the same control flows (FOR, WHILE\\u2026) as in the Karel DSL and used the Minigrid built-in actions. 
Unlike the Karel environment, Minigrid has multiple objects and colors, and it is necessary to identify them to solve the tasks. Therefore, we designed our perceptions to be parameterized, i.e., the perceptions may require input parameters, such as an object or a color.\\n\\nWe present the results of three Minigrid tasks in Section 5.5. As shown in Figure 9, LLM-GS demonstrates better sample efficiency compared to HC, the best-performing baseline in the Karel domain. For further details of the Minigrid experiments, please refer to Section 5.5 and Appendix L.\\n\\n> In Figure-4, HC and your method compare very similarly. In fact, confidence intervals overlap in many of the tasks. It is not clear if your method is significantly better. Can you conduct a statistical test to better compare these two methods?\\n\\nWe thank the reviewer for the suggestion. To provide a statistical test and better compare the two methods, we use **the Student\\u2019s t-test [5]** to evaluate the statistical significance of our proposed LLM-GS compared to HC in terms of sample efficiency. We present this additional result in Appendix J (Figure 19) in the revised paper. **The result shows that LLM-GS outperforms HC in sample efficiency with statistical significance on 8 out of 10 Karel tasks, and the two methods perform comparably on the other two tasks.**\"}", "{\"title\": \"Reminder: The reviewer-author discussion period ends in 10 hours\", \"comment\": \"The deadline for reviewers to post a message to the authors is in 10 hours (Dec 2 23:59 AoE). We look forward to hearing from the reviewer.\"}", "{\"title\": \"Reminder: The reviewer-author discussion period ends in four days\", \"comment\": [\"We would like to express our sincere gratitude to the reviewer for the thorough and constructive feedback. 
We are confident that our responses adequately address the concerns and questions raised by the reviewer, including the following points:\", \"**Additional results in a new domain (Minigrid with three tasks)**: Section 5.5 and Appendix L\", \"**An explanation of the result in Figure 6**\", \"**An additional ablation study and an explanation of the design intuitions of SHC**: Appendix E\", \"**A clarification of the program embedding space**\", \"Please kindly let us know if the reviewer has any additional concerns or if further experimental results are required. We are fully committed to resolving any potential issues, should time permit. Again, we thank the reviewer for the detailed review and the time the reviewer put into helping us improve our submission.\"]}", "{\"title\": \"Still looking forward to the reviewer's feedback\", \"comment\": \"Since the deadline for reviewers to post a message has just passed, the reviewer won't be able to post an official comment anymore. **We would greatly appreciate it if the reviewer could edit the original review to let us know if our rebuttal and the revised paper sufficiently address the questions and concerns raised by the reviewer**. This would be extremely helpful for us in improving our submission.\"}", "{\"title\": \"Response to Reviewer CapU (Part 2/2)\", \"comment\": \"**References**\\n\\n[1] Maxime Chevalier-Boisvert, Bolun Dai, Mark Towers, Rodrigo Perez-Vicente, Lucas Willems, Salem Lahlou, Suman Pal, Pablo Samuel Castro, and Jordan Terry. Minigrid & miniworld: Modular & customizable reinforcement learning environments for goal-oriented tasks. In Neural Information Processing Systems, 2023.\\n\\n[2] Rubens O. Moraes, David S. Aleixo, Lucas N. Ferreira, and Levi H. S. Lelis. Choosing well your opponents: How to guide the synthesis of programmatic strategies. 
In Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, 2023.\\n\\n[3] Dweep Trivedi, Jesse Zhang, Shao-Hua Sun, and Joseph J Lim. Learning to synthesize programs as interpretable and generalizable policies. In Neural Information Processing Systems, 2021.\"}", "{\"summary\": \"This paper studies learning programmatic actions for RL through heuristic search methods such as hill climbing. The paper focuses on the initialization problem. Rather than randomly initializing programs, the authors propose using LLM-guided programs, where the environment is described in natural language and a GPT-based model is prompted to bootstrap a set of programs. These programs are further improved using heuristic methods based on environment feedback. The authors also propose a scheduler for the hill climbing that allocates budget based on a sinusoidal scheduling. Finally, they find that generating DSLs directly is challenging due to the domain gap; instead, they propose Pythonic-DSL, where a Python program is generated by the LLM and converted into DSL using rules. The resulting initialization and heuristic lead to improvement in sample complexity of the heuristics in a toy RL navigation benchmark.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Update: Authors addressed my concerns with many additional experiments. I increased my score accordingly.\\n\\n1. The paper proposes LLM-based initialization for heuristic search methods. This is applicable to a broader range of domains.\\n\\n2. Experimental results indicate sample efficiency.\", \"weaknesses\": \"My main concerns are lack of realistic domains, novelty of the proposed idea, and significance of the experimental comparison.\\n\\n1. The proposed idea is only tested on a toy RL benchmark which only adds marginal practical value. 
Are there no other environments where programs can be actions and your idea can be applied?\\n\\n2. In Figure-4, HC and your method compare very similarly. In fact, confidence intervals overlap in many of the tasks. It is not clear if your method is significantly better. Can you conduct a statistical test to better compare these two methods?\\n\\n3. Related to (2), HC's learning curve is steeper than yours. Can you explain why?\\n\\n4. While the complexity focus is mainly on number of programs, the cost of running an LLM is ignored. Can you add the cost of initialization to your plots to understand if spending more \\u201cflops\\u201d is in general better?\\n\\n5. What is task variance? Is it random initialization in environment state or policy pool?\\n\\n6. What happens if you initialize other heuristics with LLM-based programs? Do they compare favorably compared to your Scheduling-HC?\\n\\n7. In line 425, it should be 500k.\", \"questions\": \"Please see above for specific questions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reminder: The reviewer-author discussion period ends in four days\", \"comment\": [\"We would like to express our sincere gratitude to the reviewer for the thorough and constructive feedback. 
We are confident that our responses adequately address the concerns and questions raised by the reviewer, including the following points:\", \"**Additional results in a new domain (Minigrid with three tasks)**: Section 5.5 and Appendix L\", \"**A statistical test of our main result**: Appendix J (Figure 19)\", \"**A discussion of HC's learning curves' steepness**\", \"**An estimation of the cost of running an LLM**\", \"**A clarification of task variance**: Section 5.1 (the Metric paragraph)\", \"**A clarification of Figure 6 caption**\", \"Please kindly let us know if the reviewer has any additional concerns or if further experimental results are required. We are fully committed to resolving any potential issues, should time permit. Again, we thank the reviewer for the detailed review and the time the reviewer put into helping us improve our submission.\"]}"
8CKgS18uWx
Structure-Enhanced Protein Instruction Tuning: Towards General-Purpose Protein Understanding
[ "Wei Wu", "Chao Wang", "Liyi Chen", "Mingze Yin", "Yiheng Zhu", "Kun Fu", "Jieping Ye", "Hui Xiong", "Zheng Wang" ]
Proteins, as essential biomolecules, play a central role in biological processes, including metabolic reactions and DNA replication. Accurate prediction of their properties and functions is crucial in biological applications. The recent development of protein language models (pLMs) with supervised fine-tuning provides a promising solution to this problem. However, a fine-tuned model is tailored to a particular downstream prediction task, and achieving general-purpose protein understanding remains a challenge. In this paper, we introduce the Structure-Enhanced Protein Instruction Tuning (SEPIT) framework to bridge this gap. Our approach integrates a novel structure-aware module into pLMs to inform them with structural knowledge, and then connects these enhanced pLMs to large language models (LLMs) to generate understanding of proteins. In this framework, we propose a novel two-stage instruction tuning pipeline that first establishes a basic understanding of proteins through caption-based instructions and then refines this understanding using a mixture of experts (MoEs) to learn more complex properties and functional information with the same amount of activated parameters. Moreover, we construct the largest and most comprehensive protein instruction dataset to date, which allows us to train and evaluate a general-purpose protein understanding model. Extensive experimental results on open-ended generation and closed-set answer tasks demonstrate the superior performance of SEPIT over both closed-source general LLMs and open-source LLMs trained with protein knowledge.
[ "Large Language Models", "Instruction Tuning", "Multi-modal Learning", "Mixture of Experts", "Protein" ]
Reject
https://openreview.net/pdf?id=8CKgS18uWx
https://openreview.net/forum?id=8CKgS18uWx
ICLR.cc/2025/Conference
2025
{ "note_id": [ "t2nRHtww5u", "rzBNrEOCtp", "r5hnoBJNdp", "kSt05hjtGS", "Ui6bpSeGSP", "FEIUnb6KAH" ], "note_type": [ "official_review", "decision", "meta_review", "official_review", "official_review", "official_review" ], "note_created": [ 1730608390823, 1737523769163, 1734915914880, 1731157070690, 1730608169695, 1730813123986 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6434/Reviewer_K4CN" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6434/Area_Chair_UyMJ" ], [ "ICLR.cc/2025/Conference/Submission6434/Reviewer_UgeA" ], [ "ICLR.cc/2025/Conference/Submission6434/Reviewer_yPkE" ], [ "ICLR.cc/2025/Conference/Submission6434/Reviewer_jsaQ" ] ], "structured_content_str": [ "{\"summary\": \"This is an excellent work! The authors propose a novel instruction tuning method to understand the protein structure and connect the PLM with LLM to provide the natural language explanation and human interaction. Meanwhile, a very large scale dataset is curated by them which is also a very significant contribution.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The method is very novel and bridges the gap in this domain.\\nThe motivation for this work is very strong, which bridges the gap between protein and language. They also propose a two-stage method to achieve this.\\n2. Large-scale dataset is curated.\\nHigh-quality, large-scale, and AI-ready data are always needed in this field.\\n3. Extensive experiments and competitive performance.\\nFrom Tab 1, we can find the performance is very competitive compared to SOTA models.\", \"weaknesses\": \"1. Maybe add more case studies, (only 2 seems too few).\\nMy suggestion is to categorize the question type (currently only two types of questions are made). If the author can summarize and categorize most questions and do the case studies for each of them. That would be great.\\n2. The in-depth analysis is not enough. 
Add more analysis in the experiments part.\\nThere is a lack of error analysis. Can you also add error analysis in the case studies? So that we know when the model might make mistakes.\", \"questions\": \"See details in weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"metareview\": \"The paper introduces the Structure-Enhanced Protein Instruction Tuning (SEPIT) framework, which aims to achieve general-purpose protein understanding by integrating structural knowledge into protein language models and connecting them to large language models. The authors propose a two-stage instruction tuning pipeline and construct a novel comprehensive protein instruction dataset. Experimental results demonstrate that SEPIT outperforms existing models in both open-ended generation and closed-set answer tasks.\", \"strengths\": [\"**Novel, Compelling Framework**. The proposed SEPIT framework effectively combines structural information (scarce) with sequence protein data (plentiful), enhancing protein understanding.\", \"**Comprehensive Experimental Validation**. Extensive experiments show that the proposed method works well across various task.\", \"**Clear Presentation**. The paper is well-written and clearly explains the methodology and experimental results.\", \"**A novel dataset**. The proposed protein instruction dataset containing open-ended generation and a closed-set answer task might be of independent interest.\"], \"weaknesses\": [\"**Unclear Generalization**. The paper focuses most experiments on its own dataset. Only during the rebuttal phase were experiments on external out-of-distribution datasets reported, but these were reported with baseline comparison and not included in the revised version, despite the request from reviewer UgeA . 
This leaves the question of generalization mostly unanswered.\", \"**Complexity of Structural Modeling**. As pointed out by jsaQ, the structural components might be too simplistic to capture detailed structural information, potentially introducing errors.\", \"**Two-Stage Training Justification**: The proposed two-stage training process is not sufficiently motivated and justified.\", \"**Limited / no error analysis**\"], \"reason_to_reject\": \"The lack of comprehensive experiments on external datasets, which renders the performance of the method in practice uncertain.\", \"additional_comments_on_reviewer_discussion\": [\"There was limited engagement from the reviewers, despite repeated pleas from the authors. The main points raised in the reviews/rebuttal were:\", \"Data Leakage Concerns: Reviewer yPkE raised concerns about potential data leakage due to redundancy and similarity in protein data. The authors clarified that they followed standard practices to avoid data leakage by ensuring there are no identical proteins between training/test sets.\", \"Baseline Models: Reviewer yPkE noted the lack of comparisons with certain baselines. The authors responded by highlighting additional baseline comparisons in the appendix and provided results on updated test sets.\", \"Two-Stage Training Process: Reviewer yPkE questioned the necessity of the two-stage training process. The authors explained that this approach is common in MoE-related literature and provided additional justification for its use.\", \"Generalization Beyond Training Data: Reviewer UgeA and Reviewer jsaQ emphasized the need for experiments on external datasets. The authors added results on new test sets and out-of-distribution datasets to address this concern.\", \"Structural Modeling: Reviewer jsaQ and Reviewer K4CN raised concerns about the simplicity of the structural modeling approach. 
The authors acknowledged this and suggested future work to explore more advanced structural modeling methods.\", \"For reviewers that did not engage with the authors' rebuttal and whose concerns appear to have been resolved, those concerns were downweighed.\"]}", "{\"summary\": \"The paper introduces structure-enhanced protein instruction tuning (SEPIT) framework to learn a general-purpose protein understanding model. The paper combines a structure-aware protein encoder with a pretrained large language model, then trains a mixture of experts. To train the new model, they create a new large-scale protein instruction tuning dataset from existing resources. The experiments on the test split of their dataset shows that the structure aware training improves model performance. The paper includes comprehension ablation to show the importance of each component in the model architecture. Finally, they include qualitative analysis regarding how the mixture of experts is utilized at inference time.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"The paper is very well written.\", \"The architecture is well-motivated. The paper clearly explains each component, making it easy to understand and reproduce.\", \"The experiment section is comprehensive and up to date. The paper compares SEPIT with recent large language models and PIT models. In all cases, they show that SEPIT achieves the highest performance.\", \"The paper includes key ablations that test the robustness of the SEPIT model. For example, Table 4 shows that SEPIT does not significantly drop performance even when 3D structure information is unavailable.\"], \"weaknesses\": \"**Generalization beyond the training data.**\\nThe paper does not experiment on datasets other than its dataset. In Table 1, the paper includes results on the test set of its newly constructed dataset. 
This experiment does not test the generalization of the model to new datasets, a key practical requirement. It would increase the impact of the paper if experiments with more datasets were included. This is the main weakness of the paper. \\n\\n**A need for warm up stage.**\\nThe paper includes a warm up for the protein encoder in Stage 0. In this stage, the paper pretrains the encoder with a self supervised learning objective, i.e., denoising objective. However, it is unclear from the experiments if this stage is necessary. The motivation of the pretrained encoder will be randomly initialized is not a strong justification for this additional step. Can the protein encoder be jointly trained with the language model? \\n\\n**Combination of existing components.**\\nThis is a very minor weakness. The paper combines components from existing literature to create a hybrid language model. The architectural innovation is limited. This is not a deal breaker, but it would be great if any existing modules could be simplified. \\n\\nTypos\\n- Line 126: Incorrect citation for ALBEF\\n- Line 442: Remove full stop after GO annotation tasks\", \"questions\": \"Please see the weaknesses in the paper.\", \"additional_questions\": [\"Nit: In equation 11, what is $W_{m}$?\", \"How are the instructions provided to the zero-shot models?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The excerpt describes a framework called Structure-Enhanced Protein Instruction Tuning (SEPIT) aimed at enabling large language models (LLMs) to achieve general-purpose protein understanding. The authors argue that existing protein language models (pLMs), even when fine-tuned, are often task-specific and struggle to provide a holistic understanding of protein properties and functions. 
By integrating structural information, employing a novel instruction tuning pipeline, and utilizing a comprehensive dataset,\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"i) Incorporates Structural Information: this work includes a structure-aware module that lets it use both sequence data (which is plentiful) and structural data (which is scarcer) to understand proteins. The idea that structural information could help understanding is intuitive and sound.\", \"ii) Positive Performance on Function Prediction Tasks: Experimental results show that SEPIT performs better than other methods on tasks involving both open-ended questions (like explaining the function of a protein) and closed-set questions (like determining if a protein has a specific function)\"], \"weaknesses\": \"i) Data Leakage Concerns:\\n* Considering the redundancy and similarity of protein data, a large amount of work indicates the importance of data cleaning. I believe the current experiment fails to address my concerns about data leakage. For instance, AlphaFold2, AlphaFold3, and Prot2Text propose methods to remove 40% sequence similarity. Such an approach would likely result in lower (and more reasonable) BLEU/ROUGE scores. It is recommended that the authors further refine the data splitting to mitigate potential data leakage issues.\\n\\nii) In the current experimental design, many of the studies mentioned by the authors were not included, such as Prot2Text and ProteinChat. Moreover, including some updated data (updated validation sets) could further enhance the model's credibility. The authors argue that these methods are limited to specific tasks like prediction and retrieval and don't offer the open-ended generation capabilities required for comprehensive protein understanding. 
However, a quantitative comparison would be appreciated.\", \"questions\": \"i) The necessity and advantages of the proposed two-stage training process require further clarification. What are the potential drawbacks of a single-stage approach? How does the two-stage process specifically address these limitations? What empirical evidence supports this design choice?\\n\\nii) While the evaluation metrics derived from LLM contexts provide valuable insights, the paper would benefit from a more comprehensive analysis of model performance using domain-specific evaluation criteria. I suggest the authors consider:\\nStratifying the evaluation set based on biological classification hierarchies and analyzing performance patterns across different biological categories\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents a novel method for general protein understanding, termed SEPIT. The approach integrates a structure-aware module into protein language models (pLMs) and subsequently connects these structure-enriched pLMs to large language models (LLMs) to enhance the understanding of proteins. Building on this model framework, the authors propose a two-stage instruction tuning process. Initially, a foundational understanding of proteins is established through caption-based instructions, which is then refined using a mixture of experts (MoEs) to capture more complex attributes and functional information. The authors also constructed a comprehensive dataset for both open-ended generation and closed-ended answering, based on Swiss-Prot and RCSB PDB. 
Extensive experimental results demonstrate that SEPIT significantly outperforms state-of-the-art models, and ablation studies provide structural validation of the effectiveness of each component within SEPIT.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. I appreciate the clear and well-structured presentation of the paper. Their writing style is fluid, effectively conveying the motivations behind the research and the logical progression of the methodology.\\n2. The experiments are comprehensive, featuring insightful ablation studies and case studies. The authors articulate the effectiveness of various techniques clearly, ensuring that these methods remain accessible and not overly complex.\\n3. The authors have successfully achieved the objectives outlined in the paper, and the figures are exceptionally clear. I appreciate their thoughtful summary of their contributions.\", \"weaknesses\": \"I mainly have three concerns:\\n1. Although the structural components serve their purpose, is such a design too simplistic to capture more detailed structural information? As is well known, the structure of proteins is highly complex, containing numerous amino acids. Could this approach become unreliable and introduce some errors?\\n2. The experimental results do not provide sufficient evidence to ascertain whether the mixture of experts (MoEs) module is functioning as intended. Further clarity on its performance would strengthen the findings.\\n3. What are the innovative aspects of the newly constructed dataset compared to the previous datasets? 
Are there any other models that have utilized this new dataset?\", \"questions\": \"1. What are the inference time and complexity of this framework?\\n2. Is the model capable of explaining interactions between two proteins or other more complex scenarios?\\n3. Is the dataset constructed by the authors representative in this field?\\n4. Not a question: what do the authors suggest for addressing the resource consumption caused by excessively long protein sequences?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
8CJDYx8GwF
Gradient Flow Provably Learns Robust Classifiers for Data from Orthonormal Clusters
[ "Hancheng Min", "Rene Vidal" ]
Deep learning-based classifiers are known to be vulnerable to adversarial attacks. Existing methods for defending against such attacks require adding a defense mechanism or modifying the learning procedure (e.g., by adding adversarial examples). This paper shows that for certain data distributions one can learn a provably robust classifier using standard learning methods and without adding a defense mechanism. More specifically, this paper addresses the problem of finding a robust classifier for a binary classification problem in which the data comes from a mixture of Gaussian clusters with orthonormal cluster centers. First, we characterize the largest $\ell_2$-attack any classifier can defend against while maintaining high accuracy, and show the existence of optimal robust classifiers achieving this maximum $\ell_2$-robustness. Next, we show that given data sampled from the orthonormal cluster model, gradient flow on a two-layer network with a polynomial ReLU activation and without adversarial examples provably finds an optimal robust classifier.
[ "Orthonormal Clusters", "Robust classifier", "Two-layer Network", "Gradient Flow" ]
Reject
https://openreview.net/pdf?id=8CJDYx8GwF
https://openreview.net/forum?id=8CJDYx8GwF
ICLR.cc/2025/Conference
2025
{ "note_id": [ "pDPB7YxgfO", "oiE0n1FuGQ", "jmKgQdk6kr", "hi9spv8Lne", "fBOWtPcOm2", "SqrSExDhbv", "Shumuip8BV", "RQ56ZRXd9c", "O7zhGVfewM", "N2lSEIdZMM", "MiHfXsujFH", "K3h6xCUozZ", "GRFpvzd9l6", "Cjkifc56fk", "9njbtXPeqs", "84vL2Fj2Xj", "6QCwWeYGaX", "51AQr3cmXL", "2Mh7EwLwJD", "11AujP90JD" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1732050224803, 1732770772913, 1730475646232, 1732553557212, 1737523614960, 1732049483836, 1734870676675, 1732109975604, 1732126723249, 1729235626747, 1732259043124, 1732450448288, 1730578787453, 1732049641671, 1732050638472, 1732049845739, 1732324481642, 1732068026328, 1729327748115, 1732324600858 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4030/Authors" ], [ "ICLR.cc/2025/Conference/Submission4030/Reviewer_MqXy" ], [ "ICLR.cc/2025/Conference/Submission4030/Reviewer_MqXy" ], [ "ICLR.cc/2025/Conference/Submission4030/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4030/Authors" ], [ "ICLR.cc/2025/Conference/Submission4030/Area_Chair_phn8" ], [ "ICLR.cc/2025/Conference/Submission4030/Authors" ], [ "ICLR.cc/2025/Conference/Submission4030/Reviewer_sKL6" ], [ "ICLR.cc/2025/Conference/Submission4030/Reviewer_HTHc" ], [ "ICLR.cc/2025/Conference/Submission4030/Reviewer_b91S" ], [ "ICLR.cc/2025/Conference/Submission4030/Reviewer_sKL6" ], [ "ICLR.cc/2025/Conference/Submission4030/Reviewer_sKL6" ], [ "ICLR.cc/2025/Conference/Submission4030/Authors" ], [ "ICLR.cc/2025/Conference/Submission4030/Authors" ], [ "ICLR.cc/2025/Conference/Submission4030/Authors" ], [ "ICLR.cc/2025/Conference/Submission4030/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission4030/Reviewer_b91S" ], [ "ICLR.cc/2025/Conference/Submission4030/Reviewer_b91S" ], [ "ICLR.cc/2025/Conference/Submission4030/Authors" ] ], "structured_content_str": [ "{\"comment\": \"We thank the reviewer for the review. We address your concerns below:\\n\\n1. **Non-degeneracy gap assumption**: The reviewer is right that the non-degeneracy gap is very small under random initialization when dimension $D$ is large, which we also acknowledged in our remark *Requirement on the initialization* in lines 506-516 as one limitation of our current results. In that remark, we noted that the required non-degeneracy gap of $\\\\Theta(1)$ generally cannot be achieved by random initialization. However, this does not imply that GF cannot converge to a robust classifier from random initialization. From a random initialization, the GF dynamics should be split into three phases (\\\"burn-in\\\" phase -> alignment phase -> convergence phase). The \\\"burn-in\\\" phase is the initial chaotic phase when each neuron moves towards the interior of one of the Voronoi regions (since each neuron is initialized close to the boundaries of these Voronoi regions, we have no control over which Voronoi region it \\\"selects\\\" as it highly depends on the sampled training data points). Once every neuron has reached the interior of one of the Voronoi regions with a non-degeneracy gap $\\\\Theta(1)$, our results apply afterward. We hope the reviewer can see the challenges in analyzing this \\\"burn-in\\\" phase, which we plan to tackle in future research. We also note that special but practically viable initializations, such as initializing each neuron as one of the training data points, satisfy our non-degeneracy gap assumption, which might be a good alternative to random initialization. \\n\\n2. **Dependence on variance $\\\\alpha^2$**: In our response, we will first point out that the dependence on $\\\\alpha$ is also empirically observed in Min and Vidal, 2024. 
Then, we will discuss our view on such dependence. \\n\\n* In Figure 2(a) of Min and Vidal, 2024, they plot the distance between the trained network and $F^{(p)}$ against different choices of $\\\\alpha$, and the distance increases as $\\\\alpha$ increases. Therefore, empirical results have shown the dependence on $\\\\alpha$, and our results agree with such observations. \\n\\n* Why does the distance have to depend on $\\\\alpha$? Our explanation is as follows: To make sure the trained network $f$ is close to $F^{(p)}$, one requires each neuron $w_j$ to be aligned with one of the cluster centers $\\\\mu_k$, and any misalignment $\\\\cos(w_j,\\\\mu_k)$ will be reflected in the distance between $f$ and $F^{(p)}$. However, since the training data only contains noisy samples around cluster centers, the GF dynamics can only guide the neurons' directions within a neighborhood of the true cluster center, and the size of this neighborhood necessarily depends on the variance. Indeed, in our proof sketch for the alignment phase (lines 445-476), we show that the misalignment $\\\\cos(w_j,\\\\mu_k)$ is $\\\\mathcal{O}(\\\\alpha^2)$, which eventually enters our upper bound on the distance between $f$ and $F^{(p)}$. \\n\\n3. **Empirical validation**: In Min and Vidal, 2024, the authors conjectured that pReLU networks converge to the robust classifier $F^{(p)}$ and offered empirical validations on synthetic data (a mixture of Gaussian data) and on a real image dataset (MNIST). We feel their numerical validations are sufficient to show the effectiveness of pReLU in finding robust classifiers. Thus, we focus on formally proving their conjecture with a detailed proof sketch and technical discussion. Nonetheless, to address the practical relevance of our theorems, we added numerical experiments to our revised manuscript; we invite the reviewer to check the new Appendix A and our global response.\\n\\nWe hope our response addresses your concerns. 
If you have additional questions/concerns, please feel free to post comments during the discussion phase. \\n\\n\\n**References**:\\n\\nMin and Vidal, Can implicit bias imply adversarial robustness? ICML, 2024\"}", "{\"title\": \"Response to Update from Authors\", \"comment\": \"Thanks for your update! I appreciate the efforts of authors for updating the revision. Thus, I will keep my scores.\"}", "{\"summary\": \"This paper addresses the vulnerability of deep learning-based classifiers to adversarial attacks, which typically require defense mechanisms or modified learning procedures, such as adding adversarial examples. The authors demonstrate that for certain data distributions, it is possible to train a provably robust classifier using standard learning methods without any additional defenses. Focusing on a binary classification problem where the data is generated from a mixture of Gaussian clusters with orthonormal centers, the paper first characterizes the largest $\\\\ell_2$-attack that a classifier can defend against while maintaining high accuracy. It then proves the existence of optimal robust classifiers achieving this maximum $\\\\ell_2$-robustness. Furthermore, the authors show that for data sampled from the orthonormal cluster model, gradient flow on a two-layer network with a polynomial ReLU activation, even without adversarial examples, can provably yield an optimal robust classifier.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This paper provides solid theoretical analysis, proving that under the multi-cluster data assumption, a two-layer pReLU neural network with certain initialization conditions can converge to a robust solution.\\n\\n2. The paper mainly utilizes the techniques from [1] and offers a two-phase training dynamics analysis based on early stopping.\\n\\n3. Overall, the paper is written quite smoothly.\\n\\n**Reference**\\n\\n[1] Boursier, E., Pillaud-Vivien, L., & Flammarion, N. 
(2022). Gradient flow dynamics of shallow relu networks for square loss and orthogonal inputs. Advances in Neural Information Processing Systems, 35, 20105-20118.\", \"weaknesses\": \"1. The Non-degenerate initialization shape assumption used in the paper seems overly strong, as it requires that each Voronoi region contains at least one initialized weight, which may not be natural. Specifically, when considering a random initialization setup, if the dimension $ D $ is much larger than the number of clusters $ K $, the randomly initialized weights should be approximately orthogonal to the $ K $-dimensional subspace formed by the cluster features with high probability. This appears to suggest that the non-degeneracy gap might be very small.\\n\\n2. The paper considers a setup with sufficiently small data variance $ \\\\alpha $. However, the empirical phenomena observed in [2] seem to be independent of the variance. Thus, the paper only partially addresses the conjecture in [2] by resolving a special case for finite orthogonal data.\\n\\n3. The paper lacks effective experimental validation of its theoretical analysis and conclusions, such as numerical simulations on synthetic data and observations on real image classification datasets.\\n\\n**Reference**\\n\\n[2] Min, H., & Vidal, R. (2024). Can Implicit Bias Imply Adversarial Robustness?. arXiv preprint arXiv:2405.15942.\", \"questions\": \"1. Could the authors provide an analysis and verification of whether their proposed non-degeneracy gap assumption holds for general small random initialization?\\n\\n2. The main text does not seem to clearly explain why the small quantities in the conclusions are related to the variance $ \\\\alpha $. 
Could the authors offer a more intuitive explanation?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewer for the response and willingness to reassess our manuscript. Since it is close to the end of the discussion phase, we will keep our response short as we have discussed our viewpoints in previous responses.\\n\\n>In other words, as far as learning theory is concerned, I believe that the proposition of the paper regarding GD is true for every ERM learning rule out there and the specific analysis of gradient flow seems a little superfluous.\\n\\nThe reviewer's comment about hypothesis space is interesting, but we think our additional experiment in the newest revision is at odds with this conjecture. In this new experiment (please see Appendix A.1), we run SGD on a pReLU network and on a regular polynomial ReLU network (they induce the same function space for the same $p$), and the results are as follows (we invite the reviewer to check the details in our new revision):\\n* SGD on a regular polynomial ReLU network cannot find a robust classifier. Therefore, selecting the function space is important, but the way the function space is parametrized is also important.\\n* SGD on a pReLU network with a large initialization scale cannot find a robust classifier. Therefore, selecting the correct hyperparameter of the training algorithm is also important.\\n\\nIn summary, many important aspects of GD training affect the robustness of the trained network; our theoretical analysis, alongside many previous works on the implicit bias of GD, precisely shows why certain choices make sense, which is a non-trivial problem, as we can see in the experiment. 
We hope our new experiment can aid the reviewer's assessment of our manuscript.\\n\\n(Note: We slightly edited our response to improve its clarity)\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Global response to reviewers' concern\", \"comment\": \"Several reviewers (reviewer sKL6, b91S, and HTHc) have raised their concerns about the practical relevance of our data assumption. We thank them for this feedback, which has helped us improve our manuscript. We have added one numerical experiment section to Appendix A of our revised manuscript (text highlighted in blue), and we explain our revision in this global response.\\n\\nIn the newly added section, we solve the task of classifying cats and dogs via transfer learning using extracted features from a ResNet152 trained on ImageNet. We conjecture that the extracted features of the dog (or cat) class may naturally have many clusters: when the feature extractor is trained on ImageNet, dogs are further labeled by their breeds. Thus, the extracted features of dogs of the same breed should be sufficiently close, and features of dogs of different breeds should be sufficiently far apart, based on the well-known neural collapse phenomenon (Papyan et al., 2020; Galanti et al., 2021). If such a multi-cluster structure exists in the extracted features, then we expect that training a pReLU classification head can achieve better robust accuracy than training its ReLU counterpart.\\n\\nIn the experiments, we show that, indeed, in this transfer learning scenario, the multi-cluster structure arises due to the distinguishing power of the feature extractor trained on large datasets with finer labels, and we show that in this case, pReLU networks with larger $p$ achieve better robustness compared to their ReLU counterparts. 
Admittedly, our current Theorems cannot fully explain the observed experimental results since the extracted features form clusters with large variances, and there are some correlations among these clusters, which does not follow our data assumption. Relaxing our data assumption to large variance and allowing inter-cluster correlation is an important future research direction.\\n\\nWe hope that our newly added experiments address reviewers' concerns about the practical relevance of our theorems, and we are happy to further improve them based on reviewers' feedback.\\n\\n**References**:\\n\\nPapyan et al., Prevalence of neural collapse during the terminal phase of deep learning training. PNAS, 2020\\n\\nGalanti et al., On the role of neural collapse in transfer learning. ICLR, 2021\"}", "{\"metareview\": \"This paper shows that gradient descent can provably learn a robust classifier for binary classification data drawn from clusters with orthonormal means, under a specific initialization condition. The reviewers highlight the theoretical rigor, novelty of the results, technical contributions, and high writing quality of the paper. They also raise concerns about the limited practical implications and the stringent assumptions on the data distribution and the initialization condition. The AC agrees that this paper makes solid technical contributions to a very challenging problem, but thinks that the paper could be strengthened by a better argument on its practical relevance (e.g., what is the consequence of this result given that in practice adversarial training is typically required to achieve robustness?) and weaker requirements on the initialization.\", \"additional_comments_on_reviewer_discussion\": \"The authors added a numerical experiment during the discussion phase, aiming to address concerns about the practical relevance of their data assumption. 
This new experiment partially, but not completely, addressed the reviewers' concerns.\"}", "{\"comment\": [\"We thank the reviewer for the question. Our response to your question will be in two parts:\", \"_Can this conclusion be generalized to other distributions?_ Yes. **Empirically**, Min and Vidal, 2024 have shown that pReLU (p>2) is more robust than ReLU when trained on the MNIST dataset. In our newly added numerical section, we compared pReLU (p>2) with ReLU on the transfer learning task of classifying cats and dogs with a pretrained feature extractor, and pReLU is more robust. These experiments provide empirical evidence that pReLU (p>2) can be more robust than ReLU for some real-world datasets. **Theoretically**, we conjecture that pReLU is suitable for learning data distributions concentrated within a finite number of sufficiently separated low-dimensional regions, for example, the multi-cluster data model we assume in the paper or some union of low-dimensional subspaces. Extending our theorem to these data models is our ongoing work.\", \"_Can the author's article guide the search for network structures that are conducive to robustness in more situations?_ Not directly. We think searching for robust network structures is possible for complicated data models (other than what we mentioned in the previous point) only when we understand their geometric structure better, and we try to convey this message in the introduction. Since this requires a joint research effort to understand both data structure and the inductive bias of network architecture during training, there is a significant amount of work to be done before we can do the search. Nonetheless, by showcasing a successful example with a Gaussian mixture model, we hope our work can motivate more work studying data structure and the implicit bias of neural network training for adversarial robustness.\"]}", "{\"comment\": \"Thank you for the response. 
I don't want to discourage the authors from pursuing this research; rather, I think that the paper is asking the wrong questions. Even though the analysis is done rigorously, and the quality of the paper is very good, I'm afraid that answers to irrelevant questions are irrelevant.\\n\\n- Data assumption:\\n\\nMy understanding of the analysis is that the paper is basically assuming that the training samples are sampled from a simplex, i.e. sampled from a Dirichlet distribution, and that the samples are separable, i.e. the parameters of the Dirichlet distribution are equal and less than one. The paper does not realize this simple characterization of data and spends a lot of energy in describing it through orthogonality, etc. Nevertheless, this does not change the fact that the assumption is not real or helpful. For example, it is completely useless for any domain that is a subset of $\\\\mathbb{R}$ because it is not even possible to define a classification problem that respects this assumption in 1D; the simplex in this case is a point. Considering this fact, I cannot see what good an analysis with this assumption can do. Previous papers might be onto something here, but I don't think that this paper is furthering the discussion in a significant direction. I can imagine that relaxing/changing this assumption would be a step in the right direction.\\n\\n- Relevance in real-world scenarios:\\n\\nI think the new experiments are fine; however, the experiments should also contain the end-to-end robust accuracy before we can take them as evidence for real-world relevance.\\n\\n- Why ReLU fails to be robust:\\n\\nFrom what I understand, $\\\\sigma^p\\\\in C^p$ in which $C^p$ is the set of $p-1$ times differentiable functions. For example, $\\\\sigma$ is continuous but not differentiable, and $\\\\sigma^2$ is both continuous and differentiable but not twice differentiable. Consequently, a network that is activated with pReLU is also in $C^p$. 
We know that $C^p\\\\subset C^q$ for $q<p$, i.e. networks that are activated with ReLU are more expressive than those that are activated with pReLU with $p>1$. So, it makes a lot of sense, even a little obvious if I may, that GD performs better for pReLU **when the target decision boundary can be represented with a $C^q$ function in which $q>p$**; there are simply fewer hypotheses in the hypothesis space. This further shows why the assumptions on data distribution are very important in making the analysis relevant. My take is that the paper has set the learning problem up to succeed.\\n\\n- Accuracy-robustness tradeoff:\\n\\nI think you are assuming that a tradeoff between accuracy and robustness is only possible when training samples overlap. While there are such proposals in the literature[A], there are also counterarguments[B]. I think the safest bet for now is to say that the trade-off is due to the nonexistence of a representation for the robust decision boundary in the hypothesis space. The analysis does not take into account this possibility, which further exacerbates the issues with the data assumption and the presented analysis that I have mentioned. The paper needs to make these issues explicit in the text.\\n\\n- Obvious claims:\\n\\n> Our theorem 1 does not assume an unlimited budget.\\n\\nThe theorem gives an upper bound for the budget, so I am interpreting that it is rejecting the possibility of an unlimited budget. The exact value of the bound is dependent on the data distribution and is irrelevant to the broader problem IMHO.\\n\\n> We do not see why the Bayes classifier is robust by definition if its definition has nothing to do with adversarial attacks.\\n\\nAFAIK, the Bayes classifier is:\\n\\n$$\\nf_D(x)=\\n\\\\\\\\begin{cases}\\n1\\\\\\\\quad P_D(c=1|x)>\\\\\\\\frac{1}{2}\\\\\\\\\\\\\\\\\\n0\\\\\\\\quad \\\\\\\\mathrm{o.w.}\\n\\\\\\\\end{cases}\\n$$\\n\\nin which $D$ is the distribution of test samples. 
So, as long as we are choosing the right distribution, there is no need to mention any robustness condition. As an analogy, consider a human subject as the optimal Bayes classifier. If we find a perturbation for an image of a dog which changes the opinion of the human into believing that this is an image of a cat, then **that image is indeed of a cat**, i.e. the perturbation has moved the sample past the true decision boundary. What you are referring to, I think, is the application of attacks in adversarial training of the hypothesis, which always limits the budget so that a human is never convinced that the label of the image has truly changed, e.g., that the perturbation is imperceptible. As you can see, we are always training a classifier for which the corresponding optimal Bayes classifier exists in practice.\\n\\n[A] D. Tsipras, S. Santurkar, L. Engstrom, A. Turner, and A. Madry, \\u201cRobustness may be at odds with accuracy,\\u201d in International Conference on Learning Representations, 2019.\\n[B] Y.-Y. Yang, C. Rashtchian, H. Zhang, R. R. Salakhutdinov, and K. Chaudhuri, \\u201cA closer look at accuracy vs. robustness,\\u201d in Advances in Neural Information Processing Systems, 2020\"}", "{\"summary\": \"This paper analyses the robustness of networks in a very fine-grained way. Specifically, it assumes the data distribution is a mixture of K Gaussian clusters, and the cluster centers are orthonormal. In this way, the paper proves that the robustness of the optimal robust classifier under this distribution approaches $\\\\sqrt{2}/2$ if the dimension is sufficiently large or the intra-class variance is small. Furthermore, beyond existence, this paper uses gradient flows to prove that, with some assumptions on the initial points, pReLU networks can converge to a nearly optimal robust classifier if the intra-cluster variance is small.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"This paper is well-written and easy-to-follow. 
I appreciate the authors\\u2019 effort to make a strongly technical paper easy for folks to read, which will be beneficial for the community. For example, the authors introduce many intuitions for the assumptions and the results of the theorems, and provide detailed proof sketches for readers to grasp.\\n\\nThis paper develops a full convergence analysis for gradient flow, proving the conjecture in (Min & Vidal, 2024), which is a significant contribution.\\n\\nThe technical contribution is solid. Although I did not check every step of the proofs, I believe they are correct after reading the proof sketch and some important parts of the proof in the appendix.\\n\\nThe authors have discussed their limitations concretely, which will help readers understand their work comprehensively.\", \"weaknesses\": \"The assumption on the data distribution seems too strong. The impressive results may be unable to guide learning in real-world scenarios.\", \"typo\": \"In Line 061 it should be \\u201corthonormal\\u201d.\", \"questions\": \"Is your assumption reasonable for real-world datasets? For instance, the $1/D$ variance assumption?\\n\\n(Min & Vidal, 2024) has proved the (almost) $\\\\sqrt{2}/2$ robustness for the $F^{(p)}$ classifier, and your Theorem 2 proves a similar result for the Bayes optimal classifier. Is this progress? I think the Bayes optimal classifier probably should be more robust than $F^{(p)}$. Maybe \\u201coptimal\\u201d does not mean \\u201crobust optimal\\u201d in some cases. Can you provide more discussion on that?\", \"flag_for_ethics_review\": ['No ethics review needed.'], \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"My question has been resolved. 
This article has made theoretical contributions, but there are also some weaknesses, such as the simple data structure (although the authors have provided some experimental evidence for the practicality of this kind of data, the data structure is admittedly quite simple), so I believe 6 is a reasonable score.\"}", "{\"comment\": \"> We find it unfair for the reviewer to criticize our work from a pure learning perspective for having a simple data model and disregard all our main contributions in optimization.\\n\\nI understand your frustration, but my job is to be objective, not to be fair. If the main contribution of the paper is in optimization, then it should not have been filed under \\\"learning theory\\\" as the primary area. That is the main reason that the paper is being reviewed by someone like me. Having said that, since the paper's analysis is not taking into account the issues of hypothesis spaces and sample complexities of those spaces, from the perspective of \\\"learning theory\\\" it is not a strong analysis. In other words, as far as learning theory is concerned, I believe that the proposition of the paper regarding GD is true for every ERM learning rule out there, and the specific analysis of gradient flow seems a little superfluous. Nevertheless, I might change my mind after discussing the contributions with other reviewers and the AC in the discussion period. Hence, I will increase the score of the paper to 5 to show that I am willing to discuss.\\n\\n> The reviewer criticizes our data assumptions based on an incorrect understanding of our data assumptions. Specifically, the reviewer believes \\u201ctraining samples are sampled from a simplex, i.e. sampled from a Dirichlet distribution.\\u201d As we write in Line 56, \\u201cWe consider a balanced mixture of K-Gaussians.\\u201d\\n\\nFigure 1 of the paper explicitly shows the simplex: the dashed line connecting the centers of the Gaussians is exactly the simplex. 
More formally, the cluster centers in $\\\\mathbb{R}^D$ could be stacked in a $D\\\\times K$ matrix $M$ which has a QR decomposition $M=QR$ in which $R$ is a rectangular diagonal matrix with its diagonal entries being 1. As you can see, when $K\\\\leq D$, the cluster centers fall on the corners of the $K-1$-simplex and when $K>D$ it is not possible to satisfy the orthogonality condition. Furthermore, since only support vectors are necessary to define the decision boundary, the training samples that are sampled from the simplex are sufficient for classification since they are the samples that fall closest to any other cluster center.\\n\\n> Simply put, this is where the state of the art is.\\n\\nThe SOTA argument is not effective when it comes to theoretical papers. For an experimental paper, it is acceptable to be similar to SOTA, but for a theoretical paper we either need a new theory or an improvement over SOTA. That is what I meant when I said that the paper is asking the wrong questions, this particular configuration of data is already analyzed and now we need to move beyond it. The particularities of this data assumption are not significant enough to be discussed further IMHO. For example, I consider dropping the orthogonality condition or the Normality condition on the distribution of clusters as significant.\\n\\n> we are very puzzled by the reviewer's comments.\\n\\nIn my comment I am referring to Theorem 2 of the paper. The text following the theorem reads:\\n\\n>> Therefore, $f^\\u2217$ is nearly optimal robust when $\\\\frac{\\\\alpha^2}{D}=o(1)$, i.e. the ambient dimension is large or the intra-class variance is small.\\n\\nI find the conclusion of this theorem obvious. 
This problem would be alleviated if the paper drew a stronger connection between the theorems and the main goal of the paper.\"}", "{\"summary\": \"The paper studies the binary classification problem in which the data comes from a mixture of Gaussian clusters with orthonormal cluster centers. The paper first shows that an attack with unlimited budget is not defendable and establishes a maximum budget for a plausible attack. Then, it shows that some classifier can achieve this robustness while maintaining accuracy. Finally, the paper shows that a neural network with a single hidden layer and polynomial ReLU activation can be trained using gradient descent to approximate this optimal classifier when the initialization of the network parameters is favorable.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"- Originality:\\n\\nThe paper is original in its focus on GMM-distributed data.\\n\\n- Quality:\\n\\nThe paper is rigorous in its presentation.\\n\\n- Clarity:\\n\\nThe paper is clear for the most part.\\n\\n- Significance:\\n\\nThe issue of finding plausible classification problems that are amenable to analysis is important considering the current stage in understanding the adversarial examples phenomenon.\\n\\n---------Edited-----------\\n\\nI increased my original score of 3 to 5 since the overall quality of the paper is very good.\", \"weaknesses\": \"I believe that the paper is not ready for publication based on four key observations.\\n\\nFirst, the assumptions of the analysis are very restrictive; the orthonormality condition on the cluster centers, for example, excludes even the simplest classification problems such as XOR.\\n\\nSecond, the paper makes no attempt to show that the analysis bears any relevance in real-world scenarios.\\n\\nThird, the analysis does not connect to or explain any issue in the current paradigm of training robust ANNs. 
While the main theorem asserts that the degree of polynomial ReLUs should be at least 3, it does not explain what makes first-degree ReLUs unsuitable. Furthermore, while the paper claims training robust classifiers is possible with simple gradient descent, it does not explain why we observe a trade-off between accuracy and robustness in practice.\\n\\nLast but not least, some of the claims of the paper are obvious and have no significance. For example, we don't need a mathematical analysis to figure out the reason behind the fact that attacks with unlimited budget are not defendable. Moreover, we don't need a reason to believe that a robust classifier exists since every optimal Bayes classifier is accurate and robust by definition.\", \"questions\": \"See the Weaknesses section.\", \"flag_for_ethics_review\": ['No ethics review needed.'], \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewer for the review and for acknowledging the strengths of our paper. We address your concerns below:\\n\\n1. **Is the data assumption reasonable**: We have added a new experiment section addressing your concern about the practical relevance. We refer the reviewer to our revised Appendix A and our global response.\\n\\n2. **Bayes classifier vs. $F^{(p)}$ classifier**: We have shown the Bayes classifier is nearly optimally robust and explained it by interpreting it as a nearest-cluster rule. When Min and Vidal, 2024 proposed the $F^{(p)}$ classifier, they did not state whether its $\\\\sqrt{2}/2$ robustness is optimal or not. The reviewer's question of why the $F^{(p)}$ classifier is also optimally robust is valid, and we have some explanation: the $F^{(p)}$ classifier is also approximately a nearest-cluster rule when $p$ is large. 
To see this, notice that\\n$\\\\mathrm{sign}(F^{(p)}(x))=\\\\mathrm{sign}\\\\left(\\\\sum_{k=1}^{K_1}\\\\sigma^p(\\\\left\\\\langle x,\\\\mu_k\\\\right\\\\rangle)-\\\\sum_{k=K_1+1}^{K}\\\\sigma^p(\\\\left\\\\langle x,\\\\mu_k\\\\right\\\\rangle)\\\\right)$\\n$=\\\\mathrm{sign}\\\\left(\\\\Big(\\\\sum_{k=1}^{K_1}\\\\sigma^p(\\\\left\\\\langle x,\\\\mu_k\\\\right\\\\rangle)\\\\Big)^{\\\\frac{1}{p}}-\\\\Big(\\\\sum_{k=K_1+1}^{K}\\\\sigma^p(\\\\left\\\\langle x,\\\\mu_k\\\\right\\\\rangle)\\\\Big)^{\\\\frac{1}{p}}\\\\right)$\\n$\\\\approx \\\\mathrm{sign}\\\\left(\\\\max_{1\\\\leq k\\\\leq K_1}\\\\sigma(\\\\left\\\\langle x,\\\\mu_k\\\\right\\\\rangle)-\\\\max_{K_1+1\\\\leq k\\\\leq K}\\\\sigma(\\\\left\\\\langle x,\\\\mu_k\\\\right\\\\rangle)\\\\right)$\\nbecause the $p$-norm approaches the $\\\\infty$-norm as $p$ increases.\\n\\nWe hope our response addresses your concerns. If you have additional questions/concerns, please feel free to post comments during the discussion phase.\\n\\n**References**:\\n\\nMin and Vidal, Can implicit bias imply adversarial robustness? ICML, 2024\"}", "{\"comment\": \"We respectfully disagree with the reviewer's assessment of our manuscript:\\n\\n1. **Data assumption**: Our data assumption is not restrictive when compared to other theoretical works using similar analyses. We have compared our data assumption in lines 397-409 to those in prior work that also studied the convergence of two-layer networks; ours is the least restrictive. Moreover, our theoretical results can characterize exactly what classifier the two-layer network learns via gradient flow, which cannot be done without some assumption, either on the data or the network width. Having said this, we are working on relaxing these assumptions.\\n\\n2. **Relevance in real-world scenarios**: We have added a new experiment section addressing your concern about practical relevance. We refer the reviewer to our revised Appendix A and our global response.\\n\\n3. 
**Why ReLU fails to be robust**: The results for ReLU (the case when p=1) have been shown in prior works, and we mentioned them in multiple sections (lines 97-107 and lines 384-395): Frei et al. 2023 showed that GF on two-layer ReLU networks provably converges to networks that are non-robust against adversarial attacks of radius $\\\\mathcal{O}(1/\\\\sqrt{K})$ and Min and Vidal, 2024 further explained this phenomenon.\\n\\n4. **Accuracy-robustness tradeoff**: We have clearly stated what is considered to be robust classifiers in our paper in lines 142-149: those that can defend against attacks of radius $r$ while maintaining a robust accuracy almost as high as the clean accuracy, which has no accuracy-robustness trade-off against attacks of radius smaller than $r$. Our contribution is to show that robust classifiers in such a strong sense exist for our orthonormal cluster model with $r\\\\approx \\\\frac{\\\\sqrt{2}}{2}$ and can be found by the gradient flow. Moreover, our results still agree with the empirical observation that the accuracy-robustness tradeoff exists: the robust classifier we find can achieve high accuracy under any attack of radius smaller than $\\\\sqrt{2}/2$, but has near zero accuracy under attacks of radius larger than $\\\\sqrt{2}/2$, as shown in Figure 2. In order to achieve more robustness against an attack of radius larger than $\\\\sqrt{2}/2$, one must trade off the clean accuracy, for example, by using a constant classifier that always outputs the majority class. The same thing can be said for real datasets: robust classifiers in our strong sense exist for some unknown $r^*$ that might be small, and in practice, the algorithm might be looking for robust networks against attacks of radius much larger than $r^*$, which necessarily has the accuracy-robustness tradeoff.\\n\\n5. 
**Obvious claims**: The reviewer is misinterpreting our theorems.\\n\\n > For example, we don't need a mathematical analysis to figure out the reason behind the fact that attacks with unlimited budget are not defendable\\n\\n Our Theorem 1 does not assume an unlimited budget. For our orthonormal clusters model, we are showing the critical attack budget $\\\\sqrt{2}/2$: if the attack radius is smaller than this critical value, a robust classifier achieving high accuracy exists; and if the attack radius is larger, no classifier can achieve high accuracy.\\n\\n> Moreover, we don't need a reason to believe that a robust classifier exists since every optimal Bayes classifier is accurate and robust by definition\\n\\nWe do not see why the Bayes classifier is robust by definition if its definition has nothing to do with adversarial attacks. Our Theorem 2 shows that the Bayes classifier for our orthonormal clusters model is nearly optimally robust because it can maintain high robust accuracy under any attack of radius smaller than the critical attack budget $\\\\sqrt{2}/2$. We would like the reviewer to clarify why this is an obvious result that does not require proof.\\n\\nWe hope our response addresses your concerns. If you have additional questions/concerns, please feel free to post comments during the discussion phase.\\n\\n**References**:\\n\\nMin and Vidal, Can implicit bias imply adversarial robustness? ICML, 2024\\n\\nFrei et al., The double-edged sword of implicit bias: Generalization vs. robustness in ReLU networks. NeurIPS, 2023\"}", "{\"comment\": \"We thank the reviewer for the review. We address your concerns below:\\n\\n1. **Orthonormal cluster data**: We have added a new experiment section addressing your concern about practical relevance. We refer the reviewer to our revised Appendix A and our global response.\\n\\n2. **Notation in Theorem 1 and 2**: Thank you. We have fixed the notation in Theorems 1 and 2.\\n\\n3. 
**What happens if one uses ReLU**: First of all, we wrote footnote 1 only to justify the use of gradient flow: If we are to study ReLU networks, then we have to write equation (7) as the differential inclusion (and the reviewer is right that many analyses exist), which is unnecessary in our manuscript since our main results are about pReLU (p>2). Secondly, the results for ReLU (the case when p=1) have been shown in prior works, and we mentioned them in multiple sections (lines 97-107, and lines 384-395): Frei et al. 2023 showed that GF on two-layer ReLU networks provably converges to networks that are non-robust against adversarial attacks of radius $\\\\mathcal{O}(1/\\\\sqrt{K})$ and Min and Vidal, 2024 further explained this phenomenon.\\n\\n4. **upper bound on the amount of data**: The upper bound on sample size $N$ is due to the current proof techniques used in this paper, and we have discussed this limitation in lines 517-527. We invite the reviewer to read lines 517-527; if they do not clarify, please feel free to ask further questions.\\n\\n5. **what $\\\\theta(t)$ represents**: Since this paper considers the gradient flow dynamics, the continuous-time limit of gradient descent when the step size is infinitesimal, $\\\\theta(t)$ represents the solution to the ordinary differential equation in (7), and $t\\\\in[0,\\\\infty)$ denote the time. Nonetheless, there is some connection between the gradient flow solution $\\\\theta(t)$ with the iterates from gradient descent with sufficiently small step size, for which we refer the reviewer to Elkabetz and Cohen, 2021.\\n\\n6. **robustness without adversarial training**: This is the point we highlight in our manuscript: the correct choice of network architecture matters in determining the adversarial robustness of the trained network. 
For this mixture of Gaussian data, we show that, as long as the activation function is chosen carefully (pReLU, p>2), the gradient flow training without adversarial examples can find the optimal robust classifier; Notice that in our Figure 2., if the network is NOT chosen carefully, the trained network is not robust, reminiscent of what reviewer pointed out, \\\"normal training makes the network non-robust\\\". \\n\\nWe hope our response addresses your concerns. If you have additional questions/concerns, please feel free to post comments during the discussion phase. \\n\\n**References**:\\n\\nMin and Vidal, Can implicit bias imply adversarial robustness? ICML, 2024\\n\\nFrei et al., The double-edged sword of implicit bias: Generalization vs. robustness in reLU networks. NeurIPS, 2023\\n\\nElkabetz and Cohen, Continuous vs. discrete optimization of deep neural networks. NeurIPS, 2021\"}", "{\"comment\": \"We thank the reviewer for clarifying your comments. We disagree with the conclusion the reviewer arrived at:\\n\\n> I think that the paper is asking the wrong questions. Even though the analysis is done rigorously, and the quality of the paper is very good, I'm afraid that answers to irrelevant questions are irrelevant. \\n\\nAlthough it is hard to know precisely what the reviewer thinks are the \\\"wrong\\\"/\\\"irrelevant\\\" questions, we are afraid that the reviewer is misunderstanding the scope and contribution of this paper. We will explain next.\\n\\n* **Contribution of this paper**: As our paper title, \\\"gradient flow provably learns robust classifiers for data from orthonormal clusters,\\\" clearly suggested, our **main contribution** is to show **how gradient descent algorithm on some networks provably converges to a robust classifier** for orthonormal data model; and is NOT what is a robust classifier for orthonormal data model. 
One can easily be convinced that a robust classifier exists for the orthonormal data model, either through our Theorems 1 and 2 (we state them as theorems for mathematical rigor), or as the reviewer explained, but **how can we train a neural network to find the robust classifier?** Since neural networks are overparametrized in width, there are infinitely many networks that interpolate the training data, so which classifier gradient descent/flow learns and whether the trained network is robust become important and nontrivial research questions. The mathematical way of answering these questions is through the rigorous convergence analysis of the training dynamics of gradient flow and its implicit bias (we have referenced many works in lines 118-120). Notably, **the convergence analysis is nontrivial, even for simple data distributions** (lines 397-409). In Theorem 3, we show that pReLU converges to the robust classifier via gradient flow (Reviewers HTHc and MqXy consider these solid theoretical contributions). We carefully compared our results with prior work, explained the proof sketch, and pointed out some limitations of our current analysis (Reviewer HTHc found that they \\\"will help readers understand their work comprehensively\\\"). We find it unfair for the reviewer to criticize our work from a pure learning perspective for having a simple data model and disregard all our main contributions in optimization.\"}", "{\"title\": \"Another question\", \"comment\": \"According to the author's response to question 6, when training on the data defined in the paper, pReLU (p>2) seems to be a better network structure than ReLU for achieving robustness. Can this conclusion be generalized to other distributions? Can the author's article guide the search for network structures that are conducive to robustness in more situations?\"}", "{\"summary\": \"This paper studies the problem 'what is the maximum adversarial perturbation a neural network can tolerate?' from a theoretical perspective. 
The article contains three main theorems. Theorem 1 shows that for orthonormal cluster data, no Lebesgue measurable function can maintain robustness under an $L_2$ attack budget of 1 on such a distribution with a certain probability. Theorem 2 gives an analysis of the robustness of the Bayes optimal classifier. Theorem 3 shows that, under some assumptions, a two-layer pReLU network converges to the optimal robust classifier after training.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. In my opinion, the conclusion is new and reasonable.\\n2. The motivation is reasonable.\\n3. The article is well written.\", \"weaknesses\": \"1. The proof is a bit long, and I didn't take a closer look. I hope the authors can write the proofs of the theorems more concisely.\\n\\n2. Overall, I maintain a positive attitude towards this article, but I still have many questions that I hope the authors can answer seriously.\", \"questions\": \"1. This paper mainly focuses on orthonormal cluster data, and to ensure that $\\\\left\\\\langle\\\\mu_i,\\\\mu_j\\\\right\\\\rangle=I(i=j)$, there are at most $D$ (the dimension of $x$) clusters in the data distribution; this type of data appears to be a combination of several normal distributions that are relatively far apart. So may I ask why you consider this type of data? What are the practical applications of this kind of data in reality?\\n\\n2. Theorem 1 says: 'Given a sample $(x,y)\\\\sim D_{X,Y}$, we have equation (3)', but I think the probability in (3) should be over $(x,y)\\\\sim D_{X,Y}$? So it should not be written 'Given a sample $(x,y)\\\\sim D_{X,Y}$' here. The same for Theorem 2.\\n\\n3. For the network structure, the authors do not choose the ReLU network because 'ReLU is non-differentiable', as said in footnote 1. But in my opinion, this is not important. ReLU is differentiable almost everywhere, which seems sufficient, and much work has been done on ReLU networks. Moreover, ReLU is frequently used in the real world. So, what would happen if we take p=1? 
Is the authors' main conclusion (convergence to the optimal robust classifier) still correct?\\n\\n4. Why is there an upper bound on the amount of data in Theorem 3? More data should lead to better training, so why do the authors need an upper bound on the data here?\\n\\n5. In Theorem 3, I think $\\\\theta(t)$ represents the parameters obtained after $t$ steps of training; is that right? And what is the learning rate?\\n\\n6. According to real-world experience, normal training makes the network non-robust. In the paper, as seen in equation (7) and the definition (6) of the dataset, the authors also do not consider robust training, but according to Theorem 3, training leads to the optimal robust classifier. Is there a contradiction between these?\", \"flag_for_ethics_review\": ['No ethics review needed.'], \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": [\"**Data assumptions**: The reviewer criticizes our data assumptions based on an incorrect understanding of them. Specifically, the reviewer believes \\u201ctraining samples are sampled from a simplex, i.e. sampled from a Dirichlet distribution.\\u201d As we write in Line 56, \\u201cWe consider a balanced mixture of K-Gaussians.\\u201d By and large, the entire deep learning theory community acknowledges that there is a big gap between theory and practice. Existing theory is all under very restrictive settings in terms of architectures (e.g., two-layer ReLU networks or multi-layer networks with infinite width) or data assumptions (mixtures of Gaussians). Although we acknowledge that our approach involves a restrictive data assumption, we simply want to point out that it is less restrictive than the state of the art for the convergence analysis of learning dynamics (Lines 398-409), which is the topic of this paper. 
Simply put, this is where the state of the art is in terms of assumptions for which one can prove theorems about precise characterizations of trained neural networks (Theorem 3) in deep learning theory.\", \"**Obvious claims**: we are very puzzled by the reviewer's comments, such as \\u201cunlimited budget are not defendable\\u201d or \\u201cthe exact value of the bound is dependent on the data distribution and is irrelevant to the broader problem IMHO.\\u201d These comments do not make sense within the context of adversarial robustness. Specifically, all the literature on $\\\\ell_p$-bounded adversarial robustness is predicated on the idea of small, imperceptible adversarial perturbations. So, by definition of the problem, the budget is never unlimited: attacks are always bounded in the $\\\\ell_p$ norm. Moreover, all the literature on adversarial robustness assesses performance as a function of the attack budget. In addition, all of the literature on certified robustness is predicated on computing the largest budget, for which one can guarantee that the classifier does not change its predictions for all inputs. The whole point of our results is that for a ReLU network, the maximum allowable attack budget is $\\\\mathcal{O}(1/\\\\sqrt{K})$, while for a pReLU network, the maximum allowable budget is $\\\\mathcal{O}(1)$. 
Anyone working on adversarial learning would appreciate that this is a huge improvement, and the claim is far from obvious, given that proving it involves the convergence analysis on gradient flow on neural networks.\", \"Nonetheless, we agree with multiple points the reviewer stated in the previous response, for example, the discussion about hypothesis classes being different when using pReLU and ReLU networks and the discussion about accuracy robustness tradeoff; they are interesting and insightful, and we will include them as additional remarks to the manuscript and we thank the reviewer for the references (Since they involve debatable issues like accuracy-robustness tradeoff, we will take caution in making these remarks, thus they, unfortunately, cannot be made happen before the discussion phase ends).\"]}" ] }
8BJl6LQgW5
Visual Representation Learning for World Models by Predicting Fine-Grained Motion
[ "Zhao-Han Peng", "Shaohui Li", "Zhi Li", "Yu LIU", "You He" ]
Originating from model-based reinforcement learning (MBRL) methods, algorithms based on world models have been widely applied to boost sample efficiency in visual environments. However, existing world models often struggle with irrelevant background information and omit moving tiny objects that can be essential to tasks. To solve this problem, we introduce the Motion-Aware World Model (MAWM), which incorporates a fine-grained motion predictor and entails action-conditional video prediction with a motion-aware mechanism. The mechanism yields compact and robust representations of environments, filters out extraneous backgrounds, and keeps track of the pixel-level motion of objects. Moreover, we demonstrate that a world model with action-conditional video prediction can be interpreted as a variational autoencoder (VAE) for the whole video. Experiments on the Atari 100k benchmark show that the proposed MAWM outperforms current prevailing MBRL methods. We further show its state-of-the-art performance across challenging tasks from the DeepMind Control Suite.
[ "world models", "model-based reinforcement learning", "visual representation learning" ]
Reject
https://openreview.net/pdf?id=8BJl6LQgW5
https://openreview.net/forum?id=8BJl6LQgW5
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yFOjaSMOyw", "vEG8rfUL2u", "srjUH9a48m", "sqISwzHENv", "snUvzjSHNH", "cpp6IwW8zF", "auLCZJGGDe", "a0uGAbiisc", "XxLtbtSrTy", "TmEaSkEOEz", "QEROm4D8ap", "NLqzzeB6PC", "LKr56goOPB", "I6tC9bTyLW", "EFEqOopLNj", "8OR43OZPbp", "4dLWWd9qsd", "1DnkwinaMo", "0lQaXdoI2u" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_review", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "meta_review", "official_review", "official_comment", "comment" ], "note_created": [ 1733142495043, 1732810012413, 1732810000556, 1730692901051, 1730699437797, 1737524071265, 1732810400390, 1733222062829, 1732809818019, 1732809827557, 1732810076451, 1732809969260, 1730610691019, 1732810127437, 1732809956976, 1735139724036, 1730620256174, 1732969784693, 1744028389519 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10693/Authors" ], [ "ICLR.cc/2025/Conference/Submission10693/Authors" ], [ "ICLR.cc/2025/Conference/Submission10693/Authors" ], [ "ICLR.cc/2025/Conference/Submission10693/Reviewer_qY1E" ], [ "ICLR.cc/2025/Conference/Submission10693/Reviewer_vTbC" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10693/Authors" ], [ "ICLR.cc/2025/Conference/Submission10693/Authors" ], [ "ICLR.cc/2025/Conference/Submission10693/Authors" ], [ "ICLR.cc/2025/Conference/Submission10693/Authors" ], [ "ICLR.cc/2025/Conference/Submission10693/Authors" ], [ "ICLR.cc/2025/Conference/Submission10693/Authors" ], [ "ICLR.cc/2025/Conference/Submission10693/Reviewer_KLRo" ], [ "ICLR.cc/2025/Conference/Submission10693/Authors" ], [ "ICLR.cc/2025/Conference/Submission10693/Authors" ], [ "ICLR.cc/2025/Conference/Submission10693/Area_Chair_9eEA" ], [ "ICLR.cc/2025/Conference/Submission10693/Reviewer_pZ6p" ], [ "ICLR.cc/2025/Conference/Submission10693/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission10693/Authors" ] ], "structured_content_str": [ "{\"comment\": \"**References**:\\n\\n[1]Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollar, and Ross Girshick. Masked autoencoders are scalable vision learners. In *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition*, pp. 16000\\u201316009, 2022.\\n\\n[2]Younggyo Seo, Danijar Hafner, Hao Liu, Fangchen Liu, Stephen James, Kimin Lee, and Pieter Abbeel. Masked world models for visual control. In *Conference on Robot Learning*, pp. 1332\\u20131344. PMLR, 2023.\\n\\n---\\nWe hope we have addressed your concerns regarding our contributions, the relationship between fine-grained prediction and performance, and the additional benchmark.\", \"title\": \"Official Comment by Authors (4/4)\"}", "{\"comment\": \">**Q3**: I wonder if the same pipeline could be applied to real-world videos, as there is currently an increasing trend in leveraging video generative models as \\\"world models\\\" to facilitate various tasks. It would be better if the authors could find a proper way to compare with or address such a line of methods.\\n\\n**A3**: This is an interesting suggestion. Currently, there is a gap between world models in the realm of MBRL and \\\"World Models\\\" built on video generative models like Sora, in that the ontology of relationships among images, actions, and rewards is different. A world model in MBRL settings is a generative model that produces future states and rewards, *i.e.*, a model of $p(s_{t+1}, r_{t} | s_t, a_t)$[1]. Diamond[2] trains a diffusion model and a reward model separately for RL agents; its training pipeline is de facto the same as that of \\\"World Models\\\" based on video generative models. Combining reward prediction and future-state generation with representations extracted from video generative models[3] is non-trivial and left for future work. Nevertheless, computation resources should be taken into consideration for academic research. 
Please refer to Appendix I for a comparison of computation resources. Under comparable computation time with DIAMOND on an NVIDIA GeForce RTX 4090, we have obtained results on the following ten games so far:\\n| Game |DIAMOND(100k)|MAWM(580k)|\\n| --- | --- | --- |\\n| Boxing |86.9 |**95.0**|\\n| Breakout | 132.5 |**414** |\\n|CrazyClimber| 99167.8 |**114529.0** |\\n| DemonAttack | 288.1|**2225.3** |\\n|Frostbite|274.1 |**3507.1** |\\n| Gopher|5897.9|**25104.2** |\\n|KungFuMaster|23523|**28730.0**|\\n|PrivateEye|114.3 |**5412.1** |\\n|Seaquest|551.2 |**1381.4** |\\n|MsPacman|1958.2|**2699.5**|\\n\\nWe will run more experiments in the near future if you are interested in our results.\\n\\n**References**:\\n\\n[1]David Ha and Jurgen Schmidhuber. Recurrent world models facilitate policy evolution. Advances in neural information processing systems, 31, 2018.\\n\\n[2]Eloi Alonso, Adam Jelley, Vincent Micheli, Anssi Kanervisto, Amos Storkey, Tim Pearce, and Fran\\u00e7ois Fleuret. Diffusion for world modeling: Visual details matter in atari. In Thirty-eighth Conference on Neural Information Processing Systems, 2024.\\n\\n[3]Grace Luo, Lisa Dunlap, Dong Huk Park, Aleksander Holynski, and Trevor Darrell. Diffusion hyperfeatures: Searching through time and space for semantic correspondence. Advances in Neural Information Processing Systems, 36, 2024.\\n>**Q4**: Lastly, the paper organization could be improved for better clarity.\\n\\n**A4**: Thanks for your suggestion. We highlight important modifications in blue and reorganize the appendices in our revised version.\\n\\n---\\nWe hope to have addressed your concerns regarding the significance of our method and the comparison with the other line of methods.\", \"title\": \"Official Comment by Authors (2/2)\"}
We propose a novel idea about the motion-aware mechanism and incorporate it into our world model, MAWM, which learns compact visual representations via motion prediction and an Adaptive Motion-Aware Scheduler (AMAS). To the best of our knowledge, no motion-aware mechanism has ever been applied to world models, and our proposed motion-aware mechanism can be incorporated into existing MBRL methods to capture moving tiny objects, which we believe could be a source of inspiration for other MBRL methods.\\n2. As a significant cornerstone of world models, vanilla RSSM[1] and its variants were limited to image reconstruction[2][3][4][5]. Therefore, we propose a theoretical model, RSSM-VP, which establishes the foundation of learning the RSSM dynamics model via video prediction. Our idea can be conveniently adopted by these methods via the substitution of video prediction for image reconstruction, from which we expect significant improvement. \\n3. We evaluate MAWM on 46 tasks (including all 20 tasks of the DeepMind Control Suite in our revised version) across diverse domains with fixed hyperparameters and demonstrate its consistent, significant improvement over baselines. Notably, MAWM outperforms TD-MPC2, the state-of-the-art RL algorithm without lookahead search, on the DeepMind Control Suite by a large margin. Furthermore, we demonstrate the generalization ability of MAWM on DMC-GB2, where test environments are visually distinct from the training environment.\\n \\n**References**:\\n\\n[1]Danijar Hafner, Timothy Lillicrap, Ian Fischer, Ruben Villegas, David Ha, Honglak Lee, and James Davidson. Learning latent dynamics for planning from pixels. In *International conference on machine learning*, pp. 2555\\u20132565. PMLR, 2019b.\\n\\n[2]Danijar Hafner, Jurgis Pasukonis, Jimmy Ba, and Timothy Lillicrap. Mastering diverse domains through world models. *arXiv preprint arXiv:2301.04104*, 2023.\\n\\n[3]Jeongsoo Ha, Kyungsoo Kim, and Yusung Kim. 
Dream to generalize: Zero-shot model-based reinforcement learning for unseen visual distractions. In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 37, pp. 7802\\u20137810, 2023.\\n\\n[4]Christian Gumbsch, Noor Sajid, Georg Martius, and Martin V Butz. Learning hierarchical world models with adaptive temporal abstractions from discrete latent dynamics. In *International Conference on Learning Representations*, 2023.\\n\\n[5]Ruixiang Sun, Hongyu Zang, Xin Li, and Riashat Islam. Learning latent dynamic robust representations for world models. In *Proceedings of the 41st International Conference on Machine Learning*, Proceedings of Machine Learning Research, pp. 47234\\u201347260. PMLR, 2024.\\n>**Q2**: First, except in curves and Fig.3 from the ablation studies, the authors might want to provide more visualizations of the effect of the introduced motion awareness, especially considering that the current experimental setting covers only Atari and the DeepMind Control Suite, where data are all synthetic and should potentially be simpler than real-world videos. \\n\\n**A2**: Thanks for your suggestions. We provide Appendix M in the revised version with visualizations of video prediction by MAWM, compared with pretrained video generation models. Results show that MAWM can capture the moving patterns of tiny objects. However, the pretrained Stable Diffusion model fails in these cases. 
Furthermore, as showcased in Appendix L, experiments on DMC-GB2, where test environments are visually distinct from the training environment and real-world videos serve as the backgrounds of the environment, demonstrate the generalization ability of MAWM and indicate its potential to work in real-world applications.\", \"title\": \"Official Comment by Authors (1/2)\"}", "{\"summary\": \"The paper presents a novel approach to model-based reinforcement learning (MBRL), where an image-based world model is utilized to improve sample efficiency by mimicking a scalable digital copy of the environment, such as DreamerV3. The authors claim that traditional world models tend to ignore moving tiny objects and their connections with tasks; they therefore propose a Motion-Aware World Model (MAWM) to account for them. Specifically, MAWM 1. focuses on small objects via pixel-level attention mechanisms and 2. deals with rapid changes in them via an adaptive control scheduler. The results on Atari 100k and the DeepMind Control Suite (DMC) depict the superiority of the proposed method against DreamerV3.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper presents several strengths:\\n1. The experimental validation of the proposed model on two standard benchmarks, Atari 100k and DMC, demonstrates its effectiveness and generalizability.\\n2. The writing of the paper is clear, making it easy for readers to understand the proposed method and its experimental validation.\\n3. The details of the model in the appendix are valuable contributions to the community, as they enable other researchers to reproduce the results and build upon the proposed method.\", \"weaknesses\": \"1. One concern is regarding the novelty of the two proposed techniques. The motion-aware auxiliary loss is not a novel topic [1]. 
Additionally, the technical contributions of the proposed pixel-level attention and scheduler are not clear enough to me.\\n[1] 3D Motion Decomposition for RGBD Future Dynamic Scene Synthesis, CVPR 2019. \\n\\n2. Is it possible to compare MAWM with pretrained video generation models like stable video diffusion, to figure out whether they can capture the patterns of moving targets or not?\\n\\n3. I also have some concerns about the experimental results. As an example, Table 1 is confusing. In the last row, the Median score of a counterpart SimPLe is 1.34, which seems to obtain the best results without being marked bold. Besides, the different components proposed in this paper don\\u2019t demonstrate convincing improvement in Figure 4. A more comprehensive and significant ablation study could help address this.\", \"questions\": \"Please see the weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents the Motion-Aware World Model. It integrates a fine-grained motion predictor with an adaptive motion-aware scheduler and involves action-conditional video prediction to filter out backgrounds and track object motion at the pixel level.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The attention that the authors give to relevant foreground information and moving tiny objects is meaningful for constructing world models.\\n\\n2. The performance gain of the proposed method is considerable on the Atari 100K benchmark and DeepMind Control Suite.\", \"weaknesses\": \"1. The author elaborates on some details in the method section; however, many technical aspects, like the Convolutional Block Attention Module, were not employed in previous approaches. Nevertheless, the relevant experiments have not undergone ablation, which could readily prompt people to doubt the fairness of the experiments.\\n\\n2. 
The results of the ablation experiments are rather confusing. It appears that MAWM is not consistently the best choice. Moreover, for the majority of the experiments, it doesn't seem that they have reached convergence.\\n\\n3. The motivation of this paper is to solve the problem of struggling with irrelevant background information and omitting moving tiny objects. Is there any corresponding visualization to verify this?\\n\\n4. Many experimental details are not clearly stated. For example, what are the values of $\\\\alpha$ and $r_{dizzy}$? \\n\\n5. Discussions on limitations are necessary.\", \"questions\": \"Please see weaknesses above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \">**Q3**: How does MAWM compare to MWM, which uses an MAE objective on convolution features to better learn representations and is also shown to be specifically more suitable for small objects?\\n\\n**A3**: Thank you for your useful suggestion about the choice of the autoencoder. Following your advice, we tried the MAE[1] objective on convolution features in the same way as MWM[2] does. We have included the results in Appendix F.2. \\n\\nResults in Table 1 demonstrate that our variational autoencoder for video ensures consistently better performance than the model that uses the MAE objective on convolution features on tasks from the DeepMind Control Suite. \\nEquipped with the AMAS and the motion predictor, the MAE model achieves an 18% performance gain, which demonstrates the effectiveness of the two key components of our proposed method. Interestingly, we find that the same MAE model performs poorly on the Atari 100k benchmark, as listed in Table 2. Our intuitive explanation is that the MAE model suffers from drastic image changes in the Atari games. 
After adding the AMAS and the motion predictor to the MAE model, it achieves about a 44% performance gain over the original MAE model, which again demonstrates the effectiveness of the AMAS and the motion predictor.\", \"table_1\": \"Ablation studies on VAE for video on eight challenging tasks from DeepMind Control Suite. AMASMO: AMAS and motion predictor.\\n| Task | TD-MPC2 | MAE | MAE + AMASMO | MAWM(Ours) |\\n|:-----------------------|:-----------------:|:-----------------:|:-----------------:|:-----------------:|\\n| Acrobot Swingup | 295.3 | 236.6 | 416.1| **452.1** |\\n| Cartpole Swingup Sparse | **790.0** | 472.9 | 548.7 | 666.7 |\\n| Cheetah Run | 537.3 | 565.7 | 765.3 | **874.3** |\\n| Finger Turn Hard | 885.2 | 433.4 | 856.5 | **935.0** |\\n| Hopper Hop | 302.9 | 52.5 | **399.3** | 311.5 |\\n| Quadruped Run | 283.1 | **860.3** | 537.0 | 648.7 |\\n| Quadruped Walk | 323.5 | **883.7** | 835.3 | 580.3 |\\n| Reacher Hard | **909.6** | 705.0 | 627.3 | 654.9 |\\n| Mean($\\\\uparrow$) | 540.9 | 526.3 | 623.2 | **640.4** |\\n| Median($\\\\uparrow$) | 430.4 | 519.3 | 588.0 | **651.8** |\", \"table_2\": \"Ablation studies on VAE for video on the Atari 100k benchmark. 
AMASMO: AMAS and motion predictor.\\n\\n| Game | MAE| MAE + AMASMO | MAWM(ours) |\\n|---------------|:---------:|:---------:|:----------:|\\n| Alien | 568.5 | **952.2** | 776.4 |\\n| Amidar | 98.5 | 117.3 | **144.2** |\\n| Assault | 557.9 | 592.5 | **883.4** |\\n| Asterix | 807.7 | 969.2 | **1096.9** |\\n| BankHeist | 61.6 | 102.3 | **742.6** |\\n| BattleZone | 6540.0 | 7543.3 | **13372.0** |\\n| Boxing | 35.1 | **88.0** | 85.4 |\\n| Breakout | 6.8 | 13.8 | **71.8** |\\n| ChopperCommand | 810.0 | 79.3 | **904.0** |\\n| CrazyClimber | 40567.0 | 44975.3 | **89038.6** |\\n| DemonAttack | 159.8 | **313.9** | 152.2 |\\n| Freeway | 0.0 | **0.1** | 0.0 |\\n| Frostbite | 782.6 | **1202.7** | 692.6 |\\n| Gopher | 633.8 | 2254.0 | **4415.8** |\\n| Hero | 3441.1 | 6474.4 | **8801.8** |\\n| JamesBond | 272.8 | **514.3** | 337.2 |\\n| Kangaroo | 3577.3 | 1706.7 | **3875.6** |\\n| Krull | 9724.1 | **10054.0** | 8729.6 |\\n| KungFuMaster | 20902.3 | **29653.7** | 23434.6 |\\n| MsPacman | 1092.2 | 1517.2 | **1580.7** |\\n| Pong | 4.5 | 19.7 | **20.1** |\\n| PrivateEye | -123.2 | **1225.5** | -472.5 |\\n| Qbert | 912.7 | **3984.0** | 1664.4 |\\n| RoadRunner | 7938.3 | **12548.0** | 12518.6 |\\n| Seaquest | 635.1 | 405.3 | **557.9** |\\n| UpNDown | 4203.0 | 3871.9 | **28408.2** |\\n|#Superhuman($\\\\uparrow$)|5|7|**12**|\\n| Mean($\\\\uparrow$) | 0.714 | 1.031 | **1.290** |\\n| Median($\\\\uparrow$) | 0.144 | 0.277 | **0.651** |\\n\\nThank you so much for your insightful and constructive suggestions about these additional experiments, which provide strong evidence for the effectiveness of our three key components and the generalization ability of MAWM.\", \"title\": \"Official Comment by Authors (3/4)\"}", "{\"title\": \"Official Comment by Authors (3/3)\", \"comment\": \">**Q5**: Discussions on limitations are necessary.\\n\\n**A5**: We agree that it is necessary to discuss the potential limitations of our work. 
We have included the following discussions in the Conclusion section of our revised version. We identify three potential limitations of our work for future research:\\n1. MAWM has difficulties with long-horizon video prediction, which is also a key problem in current MBRL methods[1]. Specifically, if the number of imagination steps is large, predicted images may be incorrect in certain cases, even though the motion predicted by MAWM remains accurate. Future work can explore whether perfect long-horizon video prediction improves policy learning.\\n2. Although MAWM has been trained with fixed hyperparameters across domains, we currently train a standalone model for each task. An exciting avenue is to explore the potential of MAWM to complete different tasks within a single model by effectively sharing common knowledge. \\n3. Since MAWM learns task-specific relationships between actions and images, another promising avenue might be to integrate text-guided video generative models[2][3][4][5][6][7].\\n\\n**References**:\\n\\n[1]Eloi Alonso, Adam Jelley, Vincent Micheli, Anssi Kanervisto, Amos Storkey, Tim Pearce, and Fran\\u00e7ois Fleuret. Diffusion for world modeling: Visual details matter in atari. In *Thirty-eighth Conference on Neural Information Processing Systems*, 2024.\\n\\n[2]Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bjorn Ommer. High-resolution image synthesis with latent diffusion models. In *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition*, pp. 10684\\u201310695, 2022.\\n\\n[3]Tim Brooks, Aleksander Holynski, and Alexei A. Efros. Instructpix2pix: Learning to follow image editing instructions. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 18392\\u201318402, 2023.\\n\\n[4]Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models. In *Proceedings of the IEEE/CVF International Conference on Computer Vision*, pp. 
3836\\u20133847, 2023a.\\n\\n[5]Andreas Blattmann, Tim Dockhorn, Sumith Kulal, Daniel Mendelevitch, Maciej Kilian, Dominik Lorenz, Yam Levi, Zion English, Vikram Voleti, Adam Letts, et al. Stable video diffusion: Scaling latent video diffusion models to large datasets. *arXiv preprint arXiv:2311.15127*, 2023.\\n\\n[6]Hyeonho Jeong, Geon Yeong Park, and Jong Chul Ye. Vmc: Video motion customization using temporal attention adaption for text-to-video diffusion models. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 9212\\u20139221, 2024.\\n\\n[7]Grace Luo, Lisa Dunlap, Dong Huk Park, Aleksander Holynski, and Trevor Darrell. Diffusion hyperfeatures: Searching through time and space for semantic correspondence. *Advances in Neural Information Processing Systems*, 36, 2024.\\n\\n---\\nWe hope we have addressed your concerns about ablation studies and whether our model improves with more interactions with environments.\"}", "{\"comment\": \"We thank the reviewer for the detailed feedback.\\n\\n---\\n\\n>**Q1**: The author elaborates on some details in the method section; however, many technical aspects, like the Convolutional Block Attention Module, were not employed in previous approaches. Nevertheless, the relevant experiments have not undergone ablation, which could readily prompt people to doubt the fairness of the experiments.\\n\\n**A1**: We agree that relevant modules need ablation studies to justify our contributions compared with previous works in MBRL. We have included an ablation study on the two modules (*i.e.*, convolutional block attention module and harmonizer) in Appendix F.1. Without these modules, MAWM still outperforms the best baselines on both the benchmarks, as shown in Table 1 and Table 2, which demonstrate the effectiveness of our proposed method.\", \"table_1\": \"Ablation studies on CBAM and Harmonizers on the Atari 100k benchmark. 
Both: CBAM\\nand Harmonizers, Standard: standard configurations of MAWM in the body of our paper.\\n|Game | REM | - Both | - CBAM | MAWM(ours) |\\n|-----------------------|---------|----------|----------|-----------|\\n| Alien | 607.2 | 1089.0 | **1165.4** | 776.4 |\\n| Amidar | 95.3 | **210.9** | 110.8 | 144.2 |\\n| Assault | **1764.2** | 1075.1 | 790.9 | 883.4 |\\n| Asterix | **1637.5** | 1466.3 | 1201.8 | 1096.9 |\\n| BankHeist | 19.2 | 517.2 | **987.5** | 742.6 |\\n| BattleZone | 11826 | 8060.0 | 10696.7 | **13372.0** |\\n| Boxing | **87.5** | 80.9 | 84.2 | 85.4 |\\n| Breakout | 90.7 | **108.7** | 40.6 | 71.8 |\\n| ChopperCommand | **2561.2** | 899.0 | 818.0 | 904.0 |\\n| CrazyClimber | 76547.6 | 82506.7 | **89538.3** | 89038.6 |\\n| DemonAttack | **5738.6** | 149.1 | 157.4 | 152.2 |\\n| Freeway | **32.3** | 0.0 | 0.0 | 0.0 |\\n| Frostbite | 240.5 | 2040.0 | **2449.2** | 692.6 |\\n| Gopher | 5452.4 | 3403.1 | **8012.3** | 4415.8 |\\n| Hero | 6484.8 | **11482.4** | 8139.8 | 8801.8 |\\n| JamesBond | 391.2 | **477.0** | 376.3 | 337.2 |\\n| Kangaroo | 467.6 | 1726.7 | 1836.0 | **3875.6** |\\n| Krull | 4017.7 | 8312.8 | 8408.5 | **8729.6** |\\n| KungFuMaster | **25172.2** | 19122.7 | 21415.3 | 23434.6 |\\n| MsPacman | 962.5 | 1557.3 | 1573.7 | **1580.7** |\\n| Pong | 18 | **20.2** | 18.3 | 20.1 |\\n| PrivateEye | 99.6 | **3288.6** | 1423.8 | -472.5 |\\n| Qbert | 743 | **4237.2** | 1145.1 | 1664.4 |\\n| RoadRunner | 14060.2 | **20635.7** | 14725.3 | 12518.6 |\\n| Seaquest | **1036.7** | 440.0 | 554.0 | 557.9 |\\n| UpNDown | 3757.6 | 15716.1 | 15952.4 | **28408.2** |\\n| #Superhuman($\\\\uparrow$) | **12** | 10 |11 | **12** |\\n| Mean($\\\\uparrow$) | 1.222 | 1.289 | 1.258 | **1.290** |\\n| Median($\\\\uparrow$) | 0.280 | 0.512 | 0.578 | **0.651** |\", \"table_2\": \"Ablation studies on CBAM and Harmonizers on eight challenging tasks from DeepMind Control\\nSuite. 
Both: CBAM and Harmonizers, Standard: standard configurations of MAWM in the body of our paper.\\n| Task | TD-MPC2 | - Both | - CBAM | MAWM(ours) |\\n|------------------------|:-------:|:-------:|:-------:|:----------:|\\n| Acrobot Swingup | 295.3 | 412.3 | 427.0 | **452.1** |\\n| Cartpole Swingup Sparse | **790.0** | 519.2 | 603.9 | 666.7 |\\n| Cheetah Run | 537.3 | 899.3 | **915.9** | 874.3 |\\n| Finger Turn Hard | 885.2 | **935.9** | 825.2 | 935.0 |\\n| Hopper Hop | 302.9 | 334.8 | **355.2** | 311.5 |\\n| Quadruped Run | 283.1 | 577.9 | 644.1 | **648.7** |\\n| Quadruped Walk | 323.5 | 620.7 | **653.2** | 580.3 |\\n| Reacher Hard | **909.6** | 582.8 | 689.3 | 654.9 |\\n| Mean($\\\\uparrow$) | 540.9 | 610.4 | 639.2 | **640.4** |\\n| Median($\\\\uparrow$)| 430.4 | 580.4 | 648.6 | **651.8** |\", \"title\": \"Official Comment by Authors (1/3)\"}", "{\"comment\": \">**Q2**: The results of the ablation experiments are rather confusing. It appears that MAWM is not consistently the best choice. Moreover, for the majority of the experiments, it doesn't seem that they have reached convergence.\\n\\n**A2**: This is a common setting for sample efficiency in related work, such as IRIS[1], DreamerV3[2] and EfficientZero V2[3]. On the Atari 100k benchmark, the agent is limited to a fixed budget of 100k interactions with the environment for each task. For comparison, unconstrained agents on Atari games often require a budget of 50M interactions with environments, while the computational resources of academic institutions are often limited. Nevertheless, we agree that training sample-efficient methods with a larger computational budget is essential to guarantee that performance will improve consistently with more samples. To that end, we conducted an additional experiment with 1M training steps on Breakout, Demon Attack, and Gopher. 
The game scores and human-normalized scores for the three games are listed below:\\n| Game |Score(100k)|HNS(100k)|Score(1M)|HNS(1M)|\\n| --- | --- | --- | --- | --- |\\n| Breakout | 71.8 | 2.435 | 422.4 | 14.608 |\\n| DemonAttack | 152.2 | 0.000 | 2374.8 | 1.222 |\\n| Gopher | 4415.8 | 1.930 | 99995.4 | 46.284 |\\n\\nSimply with more samples, MAWM at 1M steps achieves 6 times its 100k-step game score on Breakout, 16 times on Demon Attack, and 23 times on Gopher. We will run experiments on all 26 Atari games for the final revision.\\n\\nTo save the hassle of tuning hyperparameters for each task, there is a growing trend in the development of MBRL algorithms that use fixed hyperparameters across all tasks[4]. Thus, no method can always achieve the best performance in every task for now, as shown in Table 11. For the fairness of the ablation studies, we randomly selected 6 tasks for Atari 100k and 4 tasks for the DeepMind Control Suite. Results show that MAWM is the best choice in the majority of experiments. Nevertheless, it would be interesting future work to develop an algorithm with fixed hyperparameters that works satisfactorily for all tasks. \\n\\n**References**:\\n\\n[1]Vincent Micheli, Eloi Alonso, and Fran\\u00e7ois Fleuret. Transformers are sample-efficient world models. In *International Conference on Learning Representations*, 2023.\\n\\n[2]Danijar Hafner, Jurgis Pasukonis, Jimmy Ba, and Timothy Lillicrap. Mastering diverse domains through world models. *arXiv preprint arXiv:2301.04104*, 2023.\\n\\n[3]Shengjie Wang, Shaohuai Liu, Weirui Ye, Jiacheng You, and Yang Gao.\", \"efficientzero_v2\": \"Mastering discrete and continuous control with limited data. In *Proceedings of the 41st International Conference on Machine Learning*, volume 235 of Proceedings of Machine Learning Research, pp. 51041\\u201351062. PMLR, 21\\u201327 Jul 2024.\\n\\n[4]Nicklas Hansen, Hao Su, and Xiaolong Wang. 
Td-mpc2: Scalable, robust world models for continuous control. In *International Conference on Learning Representations*, 2024.\\n\\n>**Q3**: The motivation of this paper is to solve the problem of struggling with irrelevant background information and omitting moving tiny objects. Is there any corresponding visualization to verify this?\\n\\n**A3**: Thanks for your valuable suggestion. We have included Figure 13 and Figure 14 in the revised manuscript. The visualizations show that MAWM can capture the moving patterns of tiny objects. However, the pretrained Stable Diffusion model fails in these cases.\\n\\n>**Q4**: Many experimental details are not clearly stated. For example, what are the values of $\\\\alpha$ and $r_\\\\text{dizzy}$? \\n\\n**A4**: We listed all the experimental details in the appendices of the previously submitted manuscript. For example, as listed in Table 8, $\\\\alpha=0.15$ and $r_\\\\text{dizzy}=0.05$, since we observe that humans focus on a relatively small area during learning. Although these hyperparameters may not be the best choice due to our limited computational resources, we do believe that our proposed motion-aware mechanism is general and can be a cornerstone for world models in the future. Please refer to Appendix B for the MAWM architecture, Appendix D for our hyperparameters, and Appendix I for computational resources. If you are interested in more details, don't hesitate to let us know.
Nevertheless, we highlight our contributions here to address your concerns:\\n1. To the best of our knowledge, MAWM is the first world model that incorporates a new motion-aware mechanism. Our proposed elements related to the mechanism are the motion predictor, the video predictor, AMAS, Equation 6, and Equation 8.\\n2. The adaptive motion-aware scheduler is a novel idea that imitates the dizziness mechanism of humans[1], which overcomes the shortcomings of pixel-level motion prediction when it comes to drastic changes in the environment.\\n3. As a cornerstone of world models, vanilla RSSM[2] and its variants were limited to image reconstruction[3][4][5][6]. Therefore, we propose a theoretical model, RSSM-VP, which establishes the foundation of learning the dynamics model RSSM via video prediction. Our idea can be conveniently adopted by these methods by substituting video prediction for image reconstruction, where we expect significant improvements; this supports our viewpoint that image reconstruction alone is not enough for learning RL agents.\\n\\n**References**:\\n\\n[1]Behrang Keshavarz, Brandy Murovec, Niroshica Mohanathas, and John F Golding. The visually induced motion sickness susceptibility questionnaire (vimssq): estimating individual susceptibility to motion sickness-like symptoms when using visual devices. *Human factors*, 65(1):107\\u2013124, 2023.\\n\\n[2]Danijar Hafner, Timothy Lillicrap, Ian Fischer, Ruben Villegas, David Ha, Honglak Lee, and James Davidson. Learning latent dynamics for planning from pixels. In *International conference on machine learning*, pp. 2555\\u20132565. PMLR, 2019b.\\n\\n[3]Danijar Hafner, Jurgis Pasukonis, Jimmy Ba, and Timothy Lillicrap. Mastering diverse domains through world models. *arXiv preprint arXiv:2301.04104*, 2023.\\n\\n[4]Jeongsoo Ha, Kyungsoo Kim, and Yusung Kim. Dream to generalize: Zero-shot model-based reinforcement learning for unseen visual distractions. 
In *Proceedings of the AAAI Conference on Artificial Intelligence*, volume 37, pp. 7802\\u20137810, 2023.\\n\\n[5]Christian Gumbsch, Noor Sajid, Georg Martius, and Martin V Butz. Learning hierarchical world models with adaptive temporal abstractions from discrete latent dynamics. In *International Conference on Learning Representations*, 2023.\\n\\n[6]Ruixiang Sun, Hongyu Zang, Xin Li, and Riashat Islam. Learning latent dynamic robust representations for world models. In *Proceedings of the 41st International Conference on Machine Learning*, Proceedings of Machine Learning Research, pp. 47234\\u201347260. PMLR, 2024.\\n\\n\\n>**W2**: Since the motivation of MAWM is to avoid distractions from irrelevant background information, in addition to standard control benchmarks, environments such as distracted DM control and DM control generalization can better demonstrate benefits of the proposed approach than baselines.\\n\\n**R2**: Thanks for your constructive advice and two additional references ([1] and [2]). We have included them in our revised version. Following your suggestion, we conduct experiments on a newer version of DM control generalization[2], DMC-GB2[3]. As described in Appendix L, the experiments demonstrate that MAWM is competitive with SADA[3], a state-of-the-art algorithm designed specifically for the benchmark, with the same settings as on the standard DeepMind Control Suite. Nevertheless, the generalization ability of MAWM on the DMC-GB2 benchmark indicates that MAWM has the potential to master a broader range of environments.\\n\\n**References**:\\n\\n[1]Amy Zhang, Rowan Thomas McAllister, Roberto Calandra, Yarin Gal, and Sergey Levine. Learning invariant representations for reinforcement learning without reconstruction. In *International Conference on Learning Representations*, 2021.\\n\\n[2]Nicklas Hansen and Xiaolong Wang. Generalization in reinforcement learning by soft data augmentation. 
In *2021 IEEE International Conference on Robotics and Automation (ICRA)*, pp. 13611\\u201313617. IEEE, 2021.\\n\\n[3]Abdulaziz Almuzairee, Nicklas Hansen, and Henrik I. Christensen. A recipe for unbounded data augmentation in visual reinforcement learning. *arXiv preprint arXiv:2405.17416*, 2024.\", \"title\": \"Official Comment by Authors (1/4)\"}", "{\"comment\": \">**Q3**: I also have some concerns about the experimental results. As an example, Table 1 is confusing. In the last row, the Median score of a counterpart SimPLe is 1.34, which seems to obtain the best results without being marked bold. Besides, the different components proposed in this paper don\\u2019t demonstrate convincing improvement in Figure 4. A more comprehensive and significant ablation study could help address this.\\n\\n**A3**: Thank you for pointing out the mistake. The Median score of SimPLe should be $0.134$, and we have corrected it in the revised version. For ablation studies, we randomly selected 6 tasks for Atari 100k and 4 tasks for DeepMind Control Suite with fixed hyperparameters across both domains. We have revised the description for Figure 4 according to your advice. We have added comprehensive ablation studies on the effect of the relevant modules, which can be found in Appendix F. Furthermore, we run experiments on all 20 tasks of the DeepMind Control Suite for a more comprehensive study of MAWM.\\n\\n---\\nWe hope we have addressed your concerns about the novelty of our method and the comparison of prediction results with pretrained models.\", \"title\": \"Official Comment by Authors (2/2)\"}", "{\"summary\": \"This paper tackles the problem that model-based visual RL methods tend to ignore small objects that are essential for tasks while learning irrelevant background information. 
The proposed approach MAWM introduces a pixel-level loss function based on motion information together with an adaptive motion-aware scheduler based on timesteps, and uses a video prediction loss instead of image reconstruction as the representation learning objective. The authors perform experiments on Atari and DM control benchmarks, compare MAWM to several model-based and model-free RL baselines, and demonstrate improvements averaged across multiple tasks. Ablation studies show the contributions of each component in this framework.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The proposed approach shows improvements over baselines in benchmark evaluations\\n2. Authors perform various ablations to demonstrate effectiveness of each component\\n3. Paper is well structured and experiments are well organized\", \"weaknesses\": \"1. The main difference from existing model-based RL methods is an auxiliary objective based on pixel-wise motion information and replacing image reconstruction loss with video prediction objective, both of which are not significant changes.\\n2. Since the motivation of MAWM is to avoid distractions from irrelevant background information, in addition to standard control benchmarks, environments such as distracted DM control [1] and DM control generalization [2] can better demonstrate benefits of the proposed approach than baselines.\\n3. Current experiments do not quantitatively establish whether the proposed approach has more benefits in environments or tasks where objects tend to be smaller, which is a main motivation in method design.\\n\\n[1] Zhang, Amy, et al. \\\"Learning invariant representations for reinforcement learning without reconstruction.\\\" arXiv preprint arXiv:2006.10742 (2020).\\n\\n[2] Hansen, Nicklas, and Xiaolong Wang. \\\"Generalization in reinforcement learning by soft data augmentation.\\\" 2021 IEEE International Conference on Robotics and Automation (ICRA). 
IEEE, 2021.\", \"questions\": \"1. In ablation study (L486-L491), why video prediction loss instead of image reconstruction loss in MAWM framework is important for DM control experiments but not Atari experiments?\\n2. What is the intuition and rationale to use an adaptive Gaussian mixture model to extract ground truth label of whether each pixel belongs to foreground or background?\\n3. How does MAWM compare to MWM [3] that uses MAE objective on convolution features to better learn representations, which is also shown specifically more suitable for small objects?\\n\\n[3] Seo, Younggyo, et al. \\\"Masked world models for visual control.\\\" Conference on Robot Learning. PMLR, 2023.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \">**W3**: Current experiments do not quantitatively establish correlation whether the proposed approach has more benefits in environments or tasks where objects tend to be smaller, which is a main motivation in method design.\\n\\n**R3**: Thank you for your feedback. To address your concern, in Appendix M, we compare MAWM with pretrained generative video on two games, Pong and Breakout, where correct prediction of moving tiny objects is essential to policy learning. Pong is a \\\"tennis like\\\" game that features two paddles and a tiny ball. In Breakout, the agent controls a paddle to bounce a tiny ball into bricks to destroy them. As illustrated in Figure 13 and Figure 14, MAWM can make fine-grained predictions of future frames and succeeds in forecasting the future positions of moving objects.\\n\\n>**Q1**: In ablation study (L486-L491), why video prediction loss instead of image reconstruction loss in MAWM framework is important for DM control experiments but not Atari experiments?\\n \\n**A1**: Thank you for the insightful observations. 
We observe that a drastic stochastic change such as a flash of light often takes place in the video of an Atari game. It is hard for world models to predict future frames under such an unnatural change. In comparison, for DM control experiments, the result of an action is deterministic. Therefore, video prediction is essential to guarantee an understanding of the relationship between actions and future states.\\n\\n>**Q2**: What is the intuition and rationale to use an adaptive Gaussian mixture model to extract ground truth label of whether each pixel belongs to foreground or background?\\n\\n**A2**: We simply choose an adaptive Gaussian mixture model for its efficiency, which keeps it consistent with our lightweight motion-aware mechanism. We did try a frame-difference method, but it didn't perform well. Although the adaptive Gaussian mixture model is designed for scenes with a static camera, we find our proposed method works well with the ground truth labels predicted by the model, even on those Atari games with a moving camera like Battle Zone. Optical flow methods may work but demand more computational resources[1], as discussed in Appendix N.\\n\\n**References**:\\n\\n[1]Syed Tafseer Haider Shah and Xiang Xuezhi. Traditional and modern strategies for optical flow: an investigation. *SN Applied Sciences*, 3(3):289, 2021.\", \"title\": \"Official Comment by Authors (2/4)\"}", "{\"comment\": \"We thank the reviewer for the detailed feedback.\\n\\n---\\n>**Q1**: One concern is regarding the novelty of the proposed two techniques. The motion-aware auxiliary loss is not a novel topic [1]. Additionally, the technical contributions of the proposed pixel-level attention and scheduler are not clear enough to me. \\n\\n\\n**A1**: Thank you for the additional reference [1]. We have included it in the revised version. We have updated Appendix N to include related works according to your advice. 
As discussed below, we think our motion-aware visual representation learning in MAWM is orthogonal to the mentioned method. \\n1. The purpose of introducing a motion-aware mechanism is different. We are concerned with learning compact and meaningful representations for policy learning within limited interactions with the environment, while the mentioned method studies RGBD future scene synthesis.\\n2. The way we predict motion is distinct from the mentioned method. We integrate a lightweight motion decoder into world models to predict fine-grained future motion and use an adaptive GMM to generate the \\\"ground truth\\\". In comparison, the paper uses point clouds from the last two depth maps to calculate the current change in camera pose and foreground motion. Using the future changes in camera pose and foreground motion predicted by two dedicated neural networks, the paper calculates 3D point clouds in the next frame, which represent the future locations of pixels.\\n3. The training objective of our motion-aware mechanism is different from that of the mentioned method. We explicitly minimize the focal loss for motion prediction. However, the paper implicitly learns to predict future motion via the estimation of images, semantic maps, and depth maps, which are further input into a refinement network to generate refined results.\\n\\nWe admit that developing a motion-aware mechanism is an existing research topic in the field of computer vision. However, developing a universally appropriate auxiliary loss is a challenging problem in the field of MBRL[2]. Since the ground truth for motion cues cannot be computed or obtained in advance under MBRL settings, it is essential to develop a lightweight and efficient motion-aware mechanism. In conclusion, our technical contributions are threefold:\\n1. To the best of our knowledge, MAWM is the first world model that incorporates a new motion-aware mechanism, which we believe could be a source of inspiration for other MBRL methods.\\n2. 
The adaptive motion-aware scheduler is a novel idea that imitates the dizziness mechanism of humans, which overcomes the shortcomings of pixel-level motion prediction when it comes to drastic changes in the environment. \\n3. All MBRL methods that use RSSM[3] as the dynamics model train world models via image reconstruction due to a lack of theoretical support for video prediction. Thus, we propose the theoretical model RSSM-VP and demonstrate its efficiency and performance gains over the vanilla RSSM in ablation studies.\\n\\n**References**:\\n\\n[1] Xiaojuan Qi, Zhengzhe Liu, Qifeng Chen, and Jiaya Jia. 3d motion decomposition for rgbd future dynamic scene synthesis. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, pp. 7673\\u20137682, 2019.\\n\\n[2] Thomas M Moerland, Joost Broekens, Aske Plaat, Catholijn M Jonker, et al. Model-based reinforcement learning: A survey. *Foundations and Trends\\u00ae in Machine Learning*, 16(1):1\\u2013118, 2023.\\n\\n[3] Danijar Hafner, Timothy Lillicrap, Ian Fischer, Ruben Villegas, David Ha, Honglak Lee, and James Davidson. Learning latent dynamics for planning from pixels. In *International conference on machine learning*, pp. 2555\\u20132565. PMLR, 2019b.\\n\\n---\\n>**Q2**: Is it possible to compare MAWM with pretrained video generation models like stable video diffusion, to figure out whether they can capture the patterns of moving targets or not?\\n\\n**A2**: Thanks for your useful suggestion. We list the best results of pretrained video generation models in Figure 13 and Figure 14. Results show that they are often incapable of capturing the patterns of moving targets, while MAWM can make fine-grained future-frame predictions.\", \"title\": \"Official Comment by Authors (1/2)\"}", "{\"metareview\": \"The submission addresses the problem of model-based reinforcement learning in visual environments. 
It aims to introduce a \\\"world model\\\" that is better at capturing the visual dynamics of tiny objects that are relevant to the target tasks. This is achieved by adding foreground motion prediction and adaptive motion-blur losses. Evaluations are performed on Atari and the DeepMind Control Suite. The submission received four borderline reject (5) ratings initially, and the authors provided their rebuttal. Unfortunately, none of the reviewers engaged in the post-rebuttal discussion. After reading through the rebuttals and the submission, the AC believes that the submission should be revised to better explain its contributions with respect to prior work, along with the generalizability and limitations of the approach when applied to visually more complex scenarios. Overall, the AC finds no grounds to overturn the consensus among all four reviewers.\", \"additional_comments_on_reviewer_discussion\": \"All reviewers rated the submission as borderline reject (5) before the rebuttal, and none of them engaged in the discussion. Nonetheless, the AC shares their concerns that the contributions of the submission should be better positioned with respect to both the literature on model-based RL for visual environments and \\\"video\\\" generative modeling in the computer vision community. Additionally, the AC has reservations about the generalizability of the proposed approach towards visually more complex environments, along with environments where a diverse range of tasks could potentially be accomplished.\"}", "{\"summary\": \"This paper introduces a new method for learning the model in model-based RL with special consideration of motion prediction modeling. The authors introduce motion awareness by adding foreground motion prediction and adaptive motion-blur losses over traditional video generative pipelines. 
The resulting model achieves comparable results to existing MBRL methods.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The proposed motion awareness in model learning is intuitive and the proposed method achieves competitive results with state-of-the-art methods.\", \"weaknesses\": \"One concern about this paper is the significance of the proposed method. First, beyond the curves and Fig. 3 from the ablation studies, the authors might want to provide more visualizations of the effect of the introduced motion awareness, especially considering that the current experimental setting covers only Atari and the DM Control Suite, where data are all synthetic and should potentially be simpler compared to real-world videos. Second, I wonder if the same pipeline could be applied to real-world videos, as there is currently an increasing trend of leveraging video generative models as \\\"world models\\\" to facilitate various tasks. It would be better if the authors could find a proper way to compare or address such a line of methods. Lastly, the paper organization could be improved for better clarity.\", \"questions\": \"See the weakness section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"General Response\", \"comment\": [\"We thank the reviewers for their detailed reviews. 
Based on their comments, we have revised our paper as listed below:\", \"We have run experiments on an additional benchmark to demonstrate the generalization ability and benefits of our proposed approach (in Appendix L), following the suggestions of Reviewer [KLRo](https://openreview.net/forum?id=8BJl6LQgW5&noteId=LKr56goOPB).\", \"We have added several ablation studies on CBAM, harmonizers, and the choice of the autoencoder to demonstrate the effectiveness of our motion-aware mechanism and the importance of video prediction (in Appendix F), according to Reviewers [vTbC](https://openreview.net/forum?id=8BJl6LQgW5&noteId=snUvzjSHNH), [qY1E](https://openreview.net/forum?id=8BJl6LQgW5&noteId=sqISwzHENv), and [KLRo](https://openreview.net/forum?id=8BJl6LQgW5&noteId=LKr56goOPB).\", \"We have illustrated that our proposed methods can capture the motion of tiny objects and make almost perfect predictions, in comparison to pretrained video generation models, revealing that our motion-aware mechanism is the key to efficient performance for most tasks (in Appendix M), addressing the concerns of Reviewers [vTbC](https://openreview.net/forum?id=8BJl6LQgW5&noteId=snUvzjSHNH), [qY1E](https://openreview.net/forum?id=8BJl6LQgW5&noteId=sqISwzHENv), [pZ6p](https://openreview.net/forum?id=8BJl6LQgW5&noteId=4dLWWd9qsd), and [KLRo](https://openreview.net/forum?id=8BJl6LQgW5&noteId=LKr56goOPB).\", \"We have discussed differences between our motion-aware mechanism and existing methods in the response to Reviewer [qY1E](https://openreview.net/forum?id=8BJl6LQgW5&noteId=sqISwzHENv), and extended our related work (in Appendix N).\", \"We have conducted an experiment to ensure that MAWM improves beyond the standard setting of 100k interactions on Atari games, implying that MAWM may have better efficiency and effectiveness than existing methods, in answer to Reviewers [vTbC](https://openreview.net/forum?id=8BJl6LQgW5&noteId=snUvzjSHNH) and [pZ6p](https://openreview.net/forum?id=8BJl6LQgW5&noteId=4dLWWd9qsd).\", \"We 
have included a discussion of the limitations of our work (in the Conclusion), according to Reviewer [vTbC](https://openreview.net/forum?id=8BJl6LQgW5&noteId=snUvzjSHNH).\", \"We have extended experiments to show MAWM's consistently satisfactory performance on all 20 tasks from the DeepMind Control Suite, setting a new state of the art on the whole benchmark, as showcased in Table 12, in reply to [vTbC](https://openreview.net/forum?id=8BJl6LQgW5&noteId=snUvzjSHNH) and [qY1E](https://openreview.net/forum?id=8BJl6LQgW5&noteId=sqISwzHENv).\", \"We have incorporated feedback from Reviewers [vTbC](https://openreview.net/forum?id=8BJl6LQgW5&noteId=snUvzjSHNH), [qY1E](https://openreview.net/forum?id=8BJl6LQgW5&noteId=sqISwzHENv), [pZ6p](https://openreview.net/forum?id=8BJl6LQgW5&noteId=4dLWWd9qsd), and [KLRo](https://openreview.net/forum?id=8BJl6LQgW5&noteId=LKr56goOPB) into the revisions of our manuscript.\", \"We showcase visualizations [here](https://anonymous.4open.science/r/mawm-C555) to help understand our intuition and rationale for our motion-aware mechanism. We will release our code to promote future work.\"]}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}" ] }
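An editorial aside on the Atari score table in the record above: the human-normalized score (HNS) it reports follows the standard normalization HNS = (score − random) / (human − random). The sketch below reproduces the 100k-step entries; the per-game (random, human) reference scores are the commonly cited Atari benchmark values, assumed here rather than taken from the thread itself.

```python
# Human-normalized score (HNS), as standardly defined for Atari benchmarks:
#   HNS = (agent_score - random_score) / (human_score - random_score)
def hns(agent_score, random_score, human_score):
    return (agent_score - random_score) / (human_score - random_score)

# (random, human) reference scores: commonly cited Atari values, assumed
# here rather than quoted from the thread.
REFERENCE = {
    "Breakout": (1.7, 30.5),
    "DemonAttack": (152.1, 1971.0),
    "Gopher": (257.6, 2412.5),
}

scores_100k = {"Breakout": 71.8, "DemonAttack": 152.2, "Gopher": 4415.8}
for game, score in scores_100k.items():
    rnd, hum = REFERENCE[game]
    print(f"{game}: HNS = {hns(score, rnd, hum):.3f}")
# Breakout: HNS = 2.434
# DemonAttack: HNS = 0.000
# Gopher: HNS = 1.930
```

These match the table's 100k column up to rounding (the table lists 2.435 for Breakout), and the 1M column checks out the same way, e.g. (422.4 − 1.7) / (30.5 − 1.7) ≈ 14.608.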
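A second aside on the record above: the authors describe labeling each pixel as foreground or background with an adaptive Gaussian mixture model to supervise their motion loss. The sketch below illustrates the idea with a single-Gaussian-per-pixel simplification of that approach (not the authors' implementation); the learning rate `lr` and threshold `k` are illustrative choices.

```python
import numpy as np

def update_background(mean, var, frame, lr=0.05, k=2.5):
    """One step of a per-pixel running-Gaussian background model.

    A pixel is labeled foreground when it deviates more than k standard
    deviations from its background Gaussian; statistics are updated only
    on pixels currently judged background (selective update), so a passing
    object does not pollute the background model.
    """
    fg = np.abs(frame - mean) > k * np.sqrt(var)
    new_mean = np.where(fg, mean, (1 - lr) * mean + lr * frame)
    new_var = np.where(fg, var, (1 - lr) * var + lr * (frame - new_mean) ** 2)
    return new_mean, np.maximum(new_var, 1e-4), fg

# Toy sequence: a static background with one bright "ball" pixel moving
# along the diagonal.
mean = np.full((4, 4), 0.2)
var = np.full((4, 4), 1e-3)
for t in range(8):
    frame = np.full((4, 4), 0.2)
    frame[t % 4, t % 4] = 1.0   # the moving tiny object
    mean, var, fg = update_background(mean, var, frame)
print(int(fg.sum()))  # 1 -- only the object pixel is labeled foreground
```

A full adaptive GMM (e.g. OpenCV's `BackgroundSubtractorMOG2`) keeps several Gaussians per pixel so it can also absorb flickering or bimodal backgrounds, which is what makes it preferable to plain frame differencing.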
8BC5UfxOoG
Does Example Selection for In-Context Learning Amplify the Biases of Large Language Models?
[ "Xinwei Guo", "Jiashi Gao", "Junlei Zhou", "Jiaxin Zhang", "Xiangyu Zhao", "Xin Yao", "Xuetao Wei" ]
In-context learning (ICL) has proven to be adept at adapting large language models (LLMs) to downstream tasks without parameter updates, based on a few demonstration examples. Prior work has found that the ICL performance is susceptible to the selection of examples in the prompt and made efforts to stabilize it. However, existing example selection studies ignore the ethical risks behind the examples selected, such as gender and race bias. In this work, we first construct a new sentiment classification dataset, EEC-paraphrase, designed to better capture and evaluate the biases of LLMs. Then, through further analysis, we discover that **1) example selection with high accuracy does not mean low bias; 2) example selection for ICL amplifies the biases of LLMs; 3) example selection contributes to spurious correlations of LLMs.** Based on the above observations, we propose the ***Re**mind with **B**ias-aware **E**mbedding* (**ReBE**), which removes the spurious correlations through contrastive learning and obtains bias-aware embedding for LLMs based on prompt tuning. Finally, we demonstrate that ReBE effectively mitigates biases of LLMs without significantly compromising accuracy and is highly compatible with existing example selection methods. *The implementation code is available at https://anonymous.4open.science/r/ReBE-1D04.*
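The abstract above attributes ReBE's debiasing to contrastive learning over embeddings. As a generic illustration of this family of objectives — not necessarily ReBE's exact loss or temperature — a supervised contrastive loss pulls embeddings that share a label together and pushes the rest apart:

```python
import numpy as np

def supervised_contrastive_loss(z, labels, tau=0.1):
    """SupCon-style loss: for each anchor, maximize the softmax probability
    of its same-label embeddings among all other embeddings. Generic sketch
    only -- not necessarily ReBE's exact objective."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)        # L2-normalize
    sim = z @ z.T / tau                                     # cosine / tau
    np.fill_diagonal(sim, -np.inf)                          # drop self-pairs
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    total = 0.0
    for i in range(len(z)):
        pos = (labels == labels[i]) & (np.arange(len(z)) != i)
        total -= log_prob[i, pos].mean()
    return total / len(z)

labels = np.array([0, 0, 1, 1])
tight = np.array([[1.0, 0.0], [1.0, 0.05], [0.0, 1.0], [0.05, 1.0]])
mixed = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.05], [0.05, 1.0]])
# Embeddings clustered by label incur a lower loss than mismatched ones.
print(supervised_contrastive_loss(tight, labels)
      < supervised_contrastive_loss(mixed, labels))  # True
```

Driving such a loss with group (e.g. gender) annotations is one way an embedding can be made "bias-aware" while remaining a drop-in addition to prompt tuning.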
[ "Social Bias", "Large Language Model", "In-Context Learning" ]
Reject
https://openreview.net/pdf?id=8BC5UfxOoG
https://openreview.net/forum?id=8BC5UfxOoG
ICLR.cc/2025/Conference
2025
{ "note_id": [ "z0TsGtibsd", "xbnbnfqkwi", "tMfgzSYJ1Q", "swHnoFGGhV", "sMigQJWJN0", "rSC6AFQuxz", "qZjZyL0UQO", "q6PPzrvR9l", "nGZBhNgkK4", "gsPNU1raxr", "gi4GHce674", "XoKgfhqwtR", "TbtiSl7Igc", "RYjw2oQ3B0", "LhcAwdN65D", "KFfKSKVKTx", "JlwVlECmIW", "GaOghq0v7T", "FHrideuqs8", "Eof9DmOKY8", "9gN64ZghBf" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "decision", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732877182033, 1732459570636, 1730730928209, 1732459809643, 1732541656429, 1733192538200, 1733148159778, 1733152556336, 1730676568199, 1732460196140, 1730312874150, 1732458944634, 1732690974159, 1732457676015, 1732826950682, 1732460306323, 1734836936838, 1737523650506, 1732691084722, 1732826736771, 1732460595202 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4606/Authors" ], [ "ICLR.cc/2025/Conference/Submission4606/Authors" ], [ "ICLR.cc/2025/Conference/Submission4606/Reviewer_CqEA" ], [ "ICLR.cc/2025/Conference/Submission4606/Authors" ], [ "ICLR.cc/2025/Conference/Submission4606/Reviewer_CqEA" ], [ "ICLR.cc/2025/Conference/Submission4606/Authors" ], [ "ICLR.cc/2025/Conference/Submission4606/Reviewer_wJ1K" ], [ "ICLR.cc/2025/Conference/Submission4606/Authors" ], [ "ICLR.cc/2025/Conference/Submission4606/Reviewer_wJ1K" ], [ "ICLR.cc/2025/Conference/Submission4606/Authors" ], [ "ICLR.cc/2025/Conference/Submission4606/Reviewer_Av69" ], [ "ICLR.cc/2025/Conference/Submission4606/Authors" ], [ "ICLR.cc/2025/Conference/Submission4606/Authors" ], [ "ICLR.cc/2025/Conference/Submission4606/Authors" ], [ "ICLR.cc/2025/Conference/Submission4606/Reviewer_wJ1K" ], [ "ICLR.cc/2025/Conference/Submission4606/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission4606/Area_Chair_9oQm" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4606/Authors" ], [ "ICLR.cc/2025/Conference/Submission4606/Reviewer_wJ1K" ], [ "ICLR.cc/2025/Conference/Submission4606/Authors" ] ], "structured_content_str": [ "{\"title\": \"Clarification of some misunderstandings\", \"comment\": \"Thank you for your valuable feedback. First, we think **`we have addressed and responded to most of the concerns in our rebuttal above`**. Summarize them as follows.\\n\\n> **EEC-paraphrase dataset validation**\\n\\nWe have addressed it. Please refer to $\\\\rightarrow$ Author Response - Part 1: Weakness 1 (Validation of EEC-paraphrase)\\n\\n> **Ablation studies**\\n\\nWe have addressed it. Please refer to $\\\\rightarrow$ Author Response - Part 3: Weakness 2 (Ablation study and significance test)\\n\\nSecond, we would like to provide further explanations for some **misunderstandings**.\\n\\n> **The substantial difference in MaxFG values between sentiment analysis (0.2-0.4) and toxicity detection (up to 0.857) suggests task-dependent performance.**\\n\\nIt is `unreasonable` to compare the debiasing results without considering the `original data` and `task differences`.\\n\\n> **This reasoning is flawed because we're not interested in whether the outputs come from the same distribution---we want to know if the differences in performance metrics (accuracy, bias measures) between methods are statistically reliable.**\\n\\nBecause the **performance metrics are calculated completely based on the outputs**, we think the reasoning is valid.\\n\\n> **Some suggestions for improving presentations**\\n\\nThank you for your valuable suggestions; since the reviewer's latest reply is after the revision submission deadline, we are unable to upload the improved manuscript according to the suggestions. 
However, we will include the corresponding revisions in subsequent updates.\\n\\nThank you again for your valuable feedback!\"}", "{\"title\": \"Author Response - Part 2\", \"comment\": \"> **[W4.2] The authors haven't done any comparison to simpler bias mitigation approaches when many exist.**\\n\\nThank you for your valuable feedback. We would like to clarify that we have presented a discussion of other debiasing methods in Subsection 3.5 (Appendix G.1 of the current revision).\\n\\nFirst, **there are a few debiasing methods specifically for ICL (Note that, they are NOT specifically for our research problem)**. Although Hu et al. [1] proposed a fairness via clustering genetic (FCG) algorithm, FCG needs explicit feature vectors to complete the clustering. Due to this limitation, FCG cannot apply to sentiment analysis and toxicity detection, so we cannot set it as a baseline for ReBE. Second, we value the reviewer's opinion and compare ReBE with two context augmentation methods: **Counterfactual** and **gender-balanced**.\\n\\n+ **Counterfactual**\\n\\n For a dataset built based on templates like *EEC*, it is convenient to construct the corresponding counterfactual instance according to the templates. For example, according to the template `<person subject> feels <emotion word>`, the counterfactual instance of sentence `Alonzo feels angry.` can be `Nichelle feels angry.` or `Amanda feels angry.`.\\n\\n+ **Gender-balanced**\\n\\n The gender-balanced context approach requires an equal or close number of examples for each gender type.\\n\\nTo stay clear, only the results of random-based example selection are shown below. More details are available in the revised appendix.\\n\\nThe following table shows the gender bias of *OPT-6.7B* on **Sentiment Analysis** with ***EEC-paraphrase*** as the dataset. 
While the *EEC-paraphrase* dataset does not have its own templates, it is built upon the *EEC* samples, allowing us to generate counterfactual samples using the templates from the *EEC*.\\n\\n|Sentiment Analysis|$AvgGF$(Mean)|Max|$MaxTG$(Mean)|Max|$MaxFG$(Mean)|Max|Acc|\\n|-|-|-|-|-|-|-|-|\\n|Random|0.044($\\\\pm$0.03)|0.129|0.180($\\\\pm$0.09)|0.468|0.199($\\\\pm$0.09)|0.465|0.81|\\n|DPP|0.036($\\\\pm$0.03)|0.110|0.142($\\\\pm$0.08)|0.273|0.144($\\\\pm$0.06)|0.273|**0.87**|\\n|Gender-balanced|0.040($\\\\pm$0.03)|0.132|0.174($\\\\pm$0.08)|0.333|0.210($\\\\pm$0.09)|0.417|0.80|\\n|Counterfactual|0.035($\\\\pm$0.03)|0.125|0.145($\\\\pm$0.07)|0.369|0.149($\\\\pm$0.07)|0.369|0.77|\\n|Random+ReBE|0.034($\\\\pm$0.02)|0.086|0.151($\\\\pm$0.07)|0.322|0.191($\\\\pm$0.08)|0.447|0.78|\\n|DPP+ReBE|**0.033($\\\\pm$0.02)**|**0.073**|**0.120($\\\\pm$0.05)**|**0.250**|**0.122($\\\\pm$0.05)**|**0.247**|**0.87**|\\n\\nSince the counterfactual context method does not apply to datasets without templates, we remove it from the baselines tested on the Jigsaw dataset. 
The following table shows the gender bias of *Llama-2-7B* on **Toxicity Detection** with ***Jigsaw*** as the dataset.\\n\\n|Toxicity Detection|$AvgGF$(Mean)|Max|$MaxTG$(Mean)|Max|$MaxFG$(Mean)|Max|Acc|\\n|-|-|-|-|-|-|-|-|\\n|Random|0.179($\\\\pm$0.05)|0.283|0.215($\\\\pm$0.05)|0.312|0.215($\\\\pm$0.05)|0.312|0.76|\\n|DPP|0.051($\\\\pm$0.04)|0.136|0.059($\\\\pm$0.04)|0.156|**0.171($\\\\pm$0.18)**|0.667|0.85|\\n|Gender-balanced|0.116($\\\\pm$0.06)|0.236|0.205($\\\\pm$0.08)|0.500|0.205($\\\\pm$0.08)|0.500|0.81|\\n|Random+ReBE|0.058($\\\\pm$0.04)|0.186|0.070($\\\\pm$0.04)|0.210|0.176($\\\\pm$0.11)|**0.300**|0.86|\\n|DPP+ReBE|**0.045($\\\\pm$0.03)**|**0.102**|**0.053($\\\\pm$0.02)**|**0.116**|0.248($\\\\pm$0.20)|0.857|**0.88**|\\n\\n**`Takeaways:`**\\n\\n- There is currently **NO** suitable debiasing baseline specifically for ICL to compare with ReBE;\\n- Compared with the counterfactual and gender-balanced context method, ReBE is compatible with existing example selection methods and can achieve **lower bias** and **higher accuracy**.\\n\\n[1] [Strategic Demonstration Selection for Improved Fairness in LLM In-Context Learning](https://aclanthology.org/2024.emnlp-main.425/) (Hu et al, EMNLP 2024)\\n\\n**Questions:**\\n\\n> **[Q1] I'd like you to do some dataset validation, such as through human evaluation of the paraphrased sentences to ensure quality and meaning preservation, comparison with real-world text samples to validate ecological validity, and analysis of potential artifacts introduced by GPT-3.5 paraphrasing.**\\n\\nPlease refer to the response to Weakness 1.\\n\\n\\n\\n> **[Q2] Please include ablation studies isolating each component of ReBE. 
I'd prefer to see some comparison with simpler debiasing approaches as well.**\\n\\nPlease refer to the response to Weakness 2.1 and Weakness 4.2.\\n\\n\\n\\n> **[Q3] Could you come up with similar datasets for different types of tasks beyond sentiment analysis and conduct experiments there as well?**\\n\\nPlease refer to the response to Weakness 4.1.\\n\\n\\n\\n> **[Q4] There are many issues with the presentation.**\\n\\nPlease refer to the response to Weakness 3.\"}", "{\"summary\": \"The paper introduces Remind with Bias-aware Embedding (ReBE), a method aimed at mitigating bias in LLMs by addressing spurious correlations through prompt tuning. Using a newly constructed version of the Equity Evaluation Corpus, the authors evaluate how in-context learning (ICL) example selection influences biases related to gender and race. This paper considers a variety of GPT, OPT, and Llama models and studies bias amplification using several prompt selection techniques: Random example selection, Perplexity, Similarity and DPP. Results show that while ICL prompt selection generally does not increase average bias regardless of the model/example selection method, as measured by Average Group Fairness, it can amplify maximum bias levels measured by Maximum TPR Gap and Maximum FPR Gap. In order to reduce the increase in maximum bias levels caused by ICL example selection, the authors introduce ReBE. ReBE is designed using a contrastive loss function that encourages bias-aware embeddings, aiming to reduce biases without significantly impacting model accuracy.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Novelty in Addressing Bias in ICL Example Selection: The paper tackles an underexplored problem, focusing on how example selection of ICL prompts amplifies bias in LLMs. Demonstrating that different example selection methods increase maximum bias is an important finding. 
Further, disentangling native bias in the parameters of the model from ICL example selection bias provides a more holistic evaluation of bias in LLM outputs.\\n\\n2. Effectiveness of ReBE: Figure 7 demonstrates that ReBE reduces bias in LLMs more than several other example selection methods as measured by Max FG. Further, using DPP + ReBE does not appear to impact the accuracy of the LLM significantly. \\n\\n3. Comprehensive Experimental Setup: The authors conduct experiments across multiple model types and sizes (e.g., LLaMA-2, OPT), which strengthens the generalizability of the findings.\", \"weaknesses\": \"1. The relationship between ICL example selection and bias amplification is complex and not as straightforward as the authors present it. Figure 2 demonstrates that example selection reduces average bias across most models. The authors claim that \\u201call LLMs exhibit an increase in the maximum gender or race bias value with random-based example selection for ICL\\u201d (lines 217-218). This statement may be too bold a claim based on their mixed results of bias amplification.\\n\\n2. $\\\\textbf{ReBE does not always significantly reduce bias}$: While ReBE reduces bias in many cases, its improvements over DPP are limited. Figure 7 shows that the decrease in bias is modest, which may raise questions about ReBE\\u2019s overall efficacy. \\n\\n3. There is no analysis of the impact of increasing the number of ICL examples. Conducting bias analysis for ICL example selection at different numbers of shots for all ICL example selection methods in this paper would be an important addition to the paper. Some analysis of the impact of the number of shots is conducted in Figure 8, but this is limited only to ReBE.\", \"questions\": \"1. Although ReBE is presented in the paper, Figure 6 is a little confusing and is not fully explained either in the main body of the paper or the Figure caption.\\n\\n2. 
How does increasing the number of ICL examples impact bias in other example selection methods? \\n\\n3. Would the authors be willing to elaborate on the additional train time required to add ReBE to existing methods?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author Response - Part 3\", \"comment\": \"**Weakness 2 (Ablation study and significance test)**\\n> **No ablation studies showing which components of ReBE are actually responsible for bias reduction.**\\n\\nThank you for your valuable feedback. We would like to clarify that we included the **ablation study in subsection 5.2** and provided the **experiment results in Table 4**. \\n\\nAs for the components responsible for bias reduction, since ReBE introduces contrastive loss based on prompt tuning, components that can be used for ablation experiments are the loss functions.\\n\\nIn subsection 5.2, we tested the cases of $L_{acc}$ and $L_{bias}$, which represent the cases of not using $L_{bias}$ and not using $L_{acc}$, respectively. Based on the results in Table 4, we have revised and concluded that **$L_{bias}$ is actually responsible for bias reduction**, and $L_{acc}$ guarantees accuracy (lines 444-446).\\n\\n\\n\\n> **The authors haven't shared any statistical significance tests in the paper.**\\n\\nThank you for your valuable feedback. We would like to clarify that statistical significance test is not suitable for verification of results in this paper. \\n\\nThe statistical significance test can help determine whether there is a difference between two data sets and whether the difference is significant. In other words, it estimates **the probability that two data sets belong to the same overall distribution**. 
**However**, because the inputs of an LLM in ICL are constructed based on the same dataset and belong to the same distribution, the outputs of the same LLM under various example selection methods still belong to the same overall distribution. Therefore, **the statistical significance test is not applicable to the comparison between different example selection methods.**\\n\\n*Why do the inputs belong to the same distribution?*\\n\\nIn ICL, each input consists of a question and a context, which is a combination of $k$ samples. Although different example selection methods select different $k$ samples to construct the context, these samples belong to the same dataset. When the context distribution of random-based example selection includes enough points, other example selection methods can be understood as sampling part of this distribution according to specific rules. So, theoretically, the inputs of the LLM under various example selection methods belong to the same overall distribution.\\n\\n*Why is the statistical significance test also not applicable to the comparison of **ReBE**?*\\n\\nSince ReBE is implemented based on prompt tuning without updating LLM parameters, the above discussion on output distribution still applies to the combination of ReBE and other example selection methods. More specifically, ReBE adds a small number of virtual tokens to the original input. 
Compared with the input length (1024) of LLMs, the number of virtual tokens is tiny (10-30), so the output distribution with ReBE could remain close to that of the example selection methods; hence, **the statistical significance test is not applicable to the comparison between ReBE and other example selection methods.**\\n\\n*What did we do to ensure the reliability of the results?*\\n\\nTo reduce the randomness of results, we followed the advice and tested nine LLMs on two tasks (sentiment analysis and toxicity detection) under multiple random seeds.\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your thoughtful and detailed response. The results presented regarding the number of ICL examples help strengthen your paper. After reading the other reviews and comments, I will keep my score the same.\"}", "{\"title\": \"A Kind Reminder\", \"comment\": \"Dear Reviewer `Av69`,\\n\\nWe appreciate all of the valuable time and effort you have spent reviewing our paper. As today is `the last day` of the discussion period, we gently request that you review our reply and consider updating your evaluation accordingly. We believe that we have addressed all questions and concerns raised, but please feel free to ask any clarifying questions you might have before the end of the discussion period.\\n\\nBest,\\n\\nAuthors\"}", "{\"title\": \"Thanks again for your response\", \"comment\": \"While I appreciate the authors' response and attempts at addressing my concerns, I don't believe they have been addressed and my score will reflect the same.\"}", "{\"title\": \"Thanks for your reply, and more specific points please?\", \"comment\": \"Dear Reviewer wJ1K,\\n Thank you for your reply. We do believe our rebuttal has addressed the concerns raised in your comments. We also clarified some misunderstandings that you may have. 
To facilitate the discussion, **could you please specify the technical points that still concern you?** We would greatly appreciate it!\"}", "{\"summary\": \"This paper investigates how example selection methods for in-context learning (ICL) affect the biases of large language models (LLMs). The authors construct a new sentiment classification dataset, EEC-paraphrase, and discover three key findings: 1) high accuracy in example selection does not guarantee low bias, 2) example selection amplifies existing LLM biases, and 3) example selection contributes to spurious correlations in LLMs. To address these issues, they propose ReBE (Remind with Bias-aware Embedding), a method that uses contrastive learning and prompt tuning to mitigate biases while maintaining accuracy. The authors conduct extensive experiments using eight different LLMs and four example selection baselines. Their proposed ReBE method appears to reduce maximum bias values without compromising accuracy and demonstrates compatibility with existing example selection methods.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"The paper raises an important and previously unexplored concern about how example selection might affect model bias.\", \"The experiments are well-structured across multiple models and methods, making the results more reliable.\", \"If the findings hold true, they have important implications for how ICL should be used in practice.\"], \"weaknesses\": [\"The authors claim that their EEC-paraphrase dataset consists of sentences that are more complex and natural. However, there is no validation for this claim. The instruction to ChatGPT was to \\\"expand the following sentence in more complex words.\\\" I'm skeptical that this leads to sentences that are \\\"more complex\\\" and \\\"natural\\\". 
Furthermore, using an LLM to create sentences that are then used to study biases in other LLMs is not the right way to design this experiment.\", \"No ablation studies showing which components of ReBE are actually responsible for bias reduction. The paper claims ReBE \\\"doesn't significantly compromise accuracy\\\" but doesn't define what constitutes significant compromise. The authors haven't shared any statistical significance tests in the paper.\", \"The paper is not well written and the exposition needs a lot of improvement. Every figure and table should be clearly explained in text.\", \"All experiments are on a single task type (sentiment analysis). The authors haven't done any comparison to simpler bias mitigation approaches when many exist.\"], \"questions\": [\"I'd like you to do some dataset validation, such as through human evaluation of the paraphrased sentences to ensure quality and meaning preservation, comparison with real-world text samples to validate ecological validity, and analysis of potential artifacts introduced by GPT-3.5 paraphrasing.\", \"I'd also like to see more methodological validation. Please include ablation studies isolating each component of ReBE. Share statistical significance testing for all reported results. I'd prefer to see some comparison with simpler debiasing approaches as well.\", \"Especially since you're using GPT-3.5 paraphrasing and not human annotation, could you come up with similar datasets for different types of tasks beyond sentiment analysis and conduct experiments there as well?\", \"There are many issues with the presentation. There is little explanation for figures or tables on how to read them. Then the writing itself has issues. Just as an example, why is Albaquerque et al the citation for Spurious correlations on Line 95? Then that is followed by, \\\"Typical spurious correlations include stereotypes such as \\\"He is a doctor; she is a nurse.\\\" That's not great academic writing. 
Please go over the paper with a red pen.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author Response - Part 1\", \"comment\": \"We appreciate the reviewer\\u2019s recognition of the significance of our work. Thank you for your valuable and thoughtful comments. Please find our point-by-point responses below:\\n\\n**Weakness 1 (Validation of *EEC-paraphrase*)**\\n\\n> **A major methodological weakness lies in the dataset construction. Using GPT-3.5-Turbo to create EEC-paraphrase raises questions about potential inherited biases and quality control.**\\n\\nThank you for your valuable feedback. First, for quality control, **`previous work has demonstrated the capability of LLMs (including GPT-3.5-Turbo) to produce diverse and valid paraphrases under guidance`** [1]. We further **validate the quality** of *EEC-paraphrase* with the help of the Python library [Textstat](https://textstat.org/) and provide results in the table below.\\n\\nOn the other hand, **`we have introduced a new task - toxicity detection, and believe that multiple tasks and multiple LLMs can minimize the impact of these potential biases on the results. We have also sampled and manually checked the generated samples.`**\\n\\nThe table below shows the mean and standard deviation of the performance of datasets on various metrics. 
We choose the number of words and Distinct-n to measure the diversity of sentences, and metrics in [Textstat](https://textstat.org/) are used to measure the complexity of sentences, which help determine the readability, complexity, and grade level.\\n\\n|| Metric| EEC| EEC-paraphrase|\\n| - | -- | - | - |\\n| Diversity | Number of words$\\\\uparrow$ | 5.86($\\\\pm$1.73)| **18.63($\\\\pm$2.33)** |\\n|| Distinct-2 $\\\\uparrow$| 0.81($\\\\pm$0.066) | **0.94($\\\\pm$0.147)** |\\n|| Distinct-3 $\\\\uparrow$| 0.62($\\\\pm$0.132) | **0.89($\\\\pm$0.017)** |\\n| Complexity | Automated Readability Index $\\\\uparrow$| 7.56($\\\\pm$3.80)| **14.44($\\\\pm$2.27)** |\\n|| Coleman-Liau Index $\\\\uparrow$| 9.63($\\\\pm$4.57)| **14.74($\\\\pm$2.92)** |\\n|| Dale-Chall Readability Score $\\\\uparrow$| 11.94($\\\\pm$3.21) | **11.68($\\\\pm$1.17)** |\\n|| Flesch-Kincaid Grade Level $\\\\uparrow$| 5.52($\\\\pm$3.68)| **12.06($\\\\pm$2.04)** |\\n|| Flesch Reading Ease Score $\\\\downarrow$| 65.88($\\\\pm$26.09) | **41.68($\\\\pm$14.38)** |\\n|| Fog Scale $\\\\uparrow$| 8.44($\\\\pm$5.04)| **15.18($\\\\pm$2.88)** |\\n|| Linsear Write Formula $\\\\uparrow$| 2.87($\\\\pm$1.25)| **12.83($\\\\pm$2.06)** |\\n|| McAlpine EFLAW Readability Score $\\\\uparrow$ | 7.34($\\\\pm$2.59)| **25.62($\\\\pm$3.90)** |\\n|| Readability Consensus Score $\\\\uparrow$| 7.15($\\\\pm$4.45)| **13.09($\\\\pm$2.29)** |\\n|| Spache Readability Formula $\\\\uparrow$| 4.20($\\\\pm$1.26)| **6.73($\\\\pm$0.70)**|\\n\\n[1] [ChatGPT to Replace Crowdsourcing of Paraphrases for Intent Classification: Higher Diversity and Comparable Model Robustness](https://aclanthology.org/2023.emnlp-main.117) (Cegin et al., EMNLP 2023)\\n\\n**Questions**\\n\\n> **Regarding the question 1** \\n\\nPlease refer to the response to Weakness 1.\\n\\n\\n**Question 2 (Additional task)**\\n> **Have you explored whether these patterns hold true for other tasks like toxicity detection or text classification? 
Even preliminary results would help assess generalizability.**\\n\\nWe really appreciate your suggestions. Following your valuable guidance, we have supplemented the test of LLMs for **toxicity detection** using **[Jigsaw](https://www.kaggle.com/competitions/jigsaw-unintended-bias-in-toxicity-classification/overview)** as the dataset and present the results for the **maximum values of gender bias** below.\\n\\nIn toxicity detection, LLMs are asked to judge whether the sentences given are toxic or non-toxic. In the table below, we provide the gender bias performance of *Llama-2-7B* and *Llama-3.2-3B* in toxicity detection. To stay clear, the results below are the maximum values of gender bias, and the remaining details (such as the mean values) are available in the revised appendix.\\n\\n|Llama-2-7B|$AvgGF$||$MaxTG$||$MaxFG$||\\n|-|-|-|-|-|-|-|\\n|$k=18$|Origin|ReBE|Origin|ReBE|Origin|ReBE|\\n|Zero-shot|0.108|-|0.098|-|0.833|-|\\n|Random-based|0.283|0.186|0.312|0.210|0.250|0.300|\\n|Perplexity-based|0.205|0.168|0.217|0.173|0.667|0.667|\\n|Similarity-based|0.154|0.141|0.140|0.129|0.500|0.667|\\n|DPP-based|0.136|0.102|0.156|0.116|0.667|0.857|\\n\\n|Llama-3.2-3B|$AvgGF$||$MaxTG$||$MaxFG$||\\n|-|-|-|-|-|-|-|\\n|$k=18$|Origin|ReBE|Origin|ReBE|Origin|ReBE|\\n|Zero-shot|0.145|-|0.158|-|0.429|-|\\n|Random-based|0.215|0.108|0.217|0.127|0.500|0.550|\\n|Perplexity-based|0.142|0.043|0.152|0.019|0.857|0.333|\\n|Similarity-based|0.056|0.038|0.069|0.019|0.600|0.333|\\n| DPP-based|0.090|0.048|0.049|0.011|0.750|0.500|\\n\\n\\nConsistent with the sentiment analysis, **`we can find that:`** \\n\\n+ Compared with the zero-shot, example selection methods for ICL **amplify the maximum value of gender bias**;\\n\\n- ReBE remains compatible with example selection methods and exhibits effective debiasing in toxicity detection.\"}", "{\"summary\": \"This paper examines how example selection in in-context learning (ICL) can amplify biases in Large Language Models (LLMs). 
The authors make three key discoveries: example selection with high accuracy doesn't guarantee low bias, ICL example selection amplifies existing LLM biases, and it contributes to spurious correlations. To study these effects, they create EEC-paraphrase, a new sentiment classification dataset, and propose ReBE (Remind with Bias-aware Embedding), a novel debiasing method that combines contrastive learning with prompt tuning. Their results show that ReBE effectively reduces bias without significantly compromising accuracy while remaining compatible with existing example selection methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper identifies and investigates a new problem: how example selection in ICL affects model bias, an important angle that previous work has overlooked.\\n\\n2. They create a new dataset (EEC-paraphrase) that improves upon existing bias evaluation datasets by using more natural language and also propose an innovative solution (ReBE) that creatively combines contrastive learning with prompt tuning.\\n\\n3. The authors test their findings across 8 different LLMs of varying sizes and 4 distinct example selection methods, using multiple bias metrics (AvgGF, MaxTG, MaxFG) to ensure comprehensive assessment. The authors also provide solid theoretical grounding to explain why their approach is effective.\", \"weaknesses\": \"1. A major methodological weakness lies in the dataset construction. Using GPT-3.5-Turbo to create EEC-paraphrase raises questions about potential inherited biases and quality control. The paper lacks discussion of human evaluation or validation of the paraphrased outputs.\\n\\n2. The experimental evaluation would benefit from broader comparisons. The absence of comparisons with existing debiasing methods and fine-tuning approaches makes it difficult to fully assess ReBE's advantages. 
Including baselines like data augmentation or adversarial debiasing would provide valuable context.\", \"questions\": \"1. I'm concerned about potential biases introduced by using GPT-3.5-Turbo to create EEC-paraphrase. Could you describe your validation process and any controls implemented to ensure dataset quality? Human evaluation results would be particularly valuable.\\n\\n2. The paper's findings about bias amplification are interesting but limited to sentiment classification. Have you explored whether these patterns hold true for other tasks like toxicity detection or text classification? Even preliminary results would help assess generalizability.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Author Response - Part 1\", \"comment\": \"We sincerely appreciate your comments. Please find our point-by-point responses below:\\n\\n**Weakness 1 (Validation of *EEC-paraphrase*)**\\n> **There is no validation for the claim that EEC-paraphrase dataset consists of sentences that are more complex and natural.**\\n\\nThank you for your valuable feedback. To quantitatively demonstrate the complexity and naturalness of the *EEC-paraphrase* dataset, we evaluate using the Python library [Textstat](https://textstat.org/) and present the results in the table below. \\n\\nIn selecting metrics, we choose the number of words and Distinct-n to measure the sentence diversity, which reflects the **naturalness**. 
To evaluate **complexity**, we use metrics from Textstat to determine the readability, complexity, and grade level of sentences.\\n\\nThe results show that the *EEC-paraphrase* dataset **`outperforms`** the original *EEC* dataset on nearly all metrics.\\n\\n||**Metric**|**EEC**|**EEC-paraphrase**|\\n|-|--|-|-|\\n|**Diversity**|Number of words $\\\\uparrow$|5.86($\\\\pm$1.73)|**18.63($\\\\pm$2.33)**|\\n||Distinct-2 $\\\\uparrow$|0.81($\\\\pm$0.066)|**0.94($\\\\pm$0.147)**|\\n||Distinct-3 $\\\\uparrow$| 0.62($\\\\pm$0.132)| **0.89($\\\\pm$0.017)**|\\n|**Complexity** |Automated Readability Index $\\\\uparrow$|7.56($\\\\pm$3.80)|**14.44($\\\\pm$2.27)**|\\n||Coleman-Liau Index $\\\\uparrow$| 9.63($\\\\pm$4.57)|**14.74($\\\\pm$2.92)**|\\n||Dale-Chall Readability Score $\\\\uparrow$| 11.94($\\\\pm$3.21)| **11.68($\\\\pm$1.17)**|\\n||Flesch-Kincaid Grade Level $\\\\uparrow$| 5.52($\\\\pm$3.68)| **12.06($\\\\pm$2.04)**|\\n||Flesch Reading Ease Score $\\\\downarrow$| 65.88($\\\\pm$26.09)|**41.68($\\\\pm$14.38)**|\\n||Fog Scale $\\\\uparrow$| 8.44($\\\\pm$5.04)| **15.18($\\\\pm$2.88)**|\\n||Linsear Write Formula $\\\\uparrow$| 2.87($\\\\pm$1.25)| **12.83($\\\\pm$2.06)**|\\n||McAlpine EFLAW Readability Score $\\\\uparrow$ | 7.34($\\\\pm$2.59)| **25.62($\\\\pm$3.90)**|\\n||Readability Consensus Score $\\\\uparrow$| 7.15($\\\\pm$4.45)| **13.09($\\\\pm$2.29)**|\\n||Spache Readability Formula $\\\\uparrow$| 4.20($\\\\pm$1.26)| **6.73($\\\\pm$0.70)**|\\n\\n> **Using an LLM to create sentences that are then used to study biases in other LLMs is not the right way to design this experiment.**\\n\\nThanks for your valuable feedback. First, we argue that the statement \\\"Using an LLM to create sentences that are then used to study biases in other LLMs\\\" **is not accurate** because *EEC-paraphrase* is not created out of thin air by *GPT-3.5-Turbo*, but paraphrases the existing dataset *EEC*. 
\\n\\nSecond, **previous work has demonstrated the capability of LLMs (including GPT-3.5-Turbo) to produce diverse and valid paraphrases under guidance**[1]. Therefore, we think that leveraging LLMs, **under human review**, to assist in part of the dataset construction process\\u2014rather than relying on crowdsourcing\\u2014is a reasonable experimental design. **We follow the practices adopted by previous work**.\\n\\n[1] ChatGPT to Replace Crowdsourcing of Paraphrases for Intent Classification: Higher Diversity and Comparable Model Robustness (Cegin et al., EMNLP 2023)\\n\\n**Weakness 3**\\n> **The paper is not well written and the exposition needs a lot of improvement.**\\n\\nThank you for your valuable feedback. We have tried our best to polish the language in the revised manuscript. We would be grateful if you could provide more specific examples.\\n\\n**Weakness 4.1 (Additional task)**\\n> **All experiments are on a single task type (sentiment analysis).**\\n\\nThank you for your valuable feedback. We have supplemented the test of LLMs for **toxicity detection** using **[Jigsaw](https://www.kaggle.com/competitions/jigsaw-unintended-bias-in-toxicity-classification/overview)** as the dataset and present the results for the **maximum values of gender bias** below.\\n\\nIn toxicity detection, LLMs are asked to judge whether the sentences given are toxic or non-toxic. In the table below, we provide the gender bias performance of *Llama-2-7B* and *Llama-3.2-3B* in toxicity detection. 
To stay clear, the results below are the maximum values of gender bias, and the remaining details (such as the mean values) are available in the revised appendix.\\n\\n|Llama-2-7B|$AvgGF$||$MaxTG$||$MaxFG$||\\n|-|-|-|-|-|-|-|\\n|$k=18$|Origin|ReBE|Origin|ReBE|Origin|ReBE|\\n|Zero-shot|0.108|-|0.098|-|0.833|-|\\n|Random-based|0.283|0.186|0.312|0.210|0.250|0.300|\\n|Perplexity-based|0.205|0.168|0.217|0.173|0.667|0.667|\\n|Similarity-based|0.154|0.141|0.140|0.129|0.500|0.667|\\n|DPP-based|0.136|0.102|0.156|0.116|0.667|0.857|\\n\\n|Llama-3.2-3B|$AvgGF$||$MaxTG$||$MaxFG$||\\n|-|-|-|-|-|-|-|\\n|$k=18$|Origin|ReBE|Origin|ReBE|Origin|ReBE|\\n|Zero-shot|0.145|-|0.158|-|0.429|-|\\n|Random-based|0.215|0.108|0.217|0.127|0.500|0.550|\\n|Perplexity-based|0.142|0.043|0.152|0.019|0.857|0.333|\\n|Similarity-based|0.056|0.038|0.069|0.019|0.600|0.333|\\n| DPP-based|0.090|0.048|0.049|0.011|0.750|0.500|\\n\\nConsistent with the sentiment analysis, **we can find that:** \\n\\n- Compared with the zero-shot, example selection methods for ICL **amplify the maximum value of gender bias**;\\n- ReBE remains compatible with example selection methods and exhibits effective debiasing in toxicity detection.\"}", "{\"title\": \"A Kind Reminder\", \"comment\": \"Dear Reviewer `wJ1K`,\\n\\nThank you again for the valuable comments that helped improve our paper. Following your suggestions, we have carefully revised the manuscript to address your concerns.\\n\\nWe kindly remind you to review our reply along with the revised submission and consider updating your evaluation accordingly.\\n\\nBest,\\n\\nAuthors\"}", "{\"title\": \"Author Response - Part 1\", \"comment\": \"We thank the reviewer for your recognition of our work. We also appreciate the detailed comments posed by the reviewer. 
Please find below the point-by-point responses to your comments.\\n\\n**Weakness 1 (The claim may be too bold)**\\n> **The claim that \\u201call LLMs exhibit an increase in the maximum gender or race bias value with random-based example selection for ICL\\u201d (lines 217-218) may be too bold of a claim based on their mixed results of bias amplification.**\\n\\nThank you for your valuable feedback. To make the wording more rigorous, we have corrected the claim as \\\"the LLMs tested exhibit varying degrees of increase in the maximum gender or race bias value with random-based example selection for ICL\\\" (lines 223-224).\\n\\nBesides, we would like to give a brief explanation as to whether the claim is too bold. Although some LLMs do not exhibit **simultaneous** amplification of the maximum values of gender and race bias in sentiment analysis, we do not argue that the claim does not apply to these models. This is because the experiment results are task-dependent, and our additional experiment results for toxicity detection support this. \\n\\nFor instance, in sentiment analysis, *Llama-2-7B* does not show an amplification of the maximum value of gender bias, but it does exhibit significant gender bias in toxicity detection (detailed data is available in the revised appendix). Therefore, as long as the LLM shows amplification of the maximum value of a single type of bias, we think the claim applies to it.\\n\\n**Weakness 2 (Limited bias mitigation over DPP)**\\n> **While ReBE reduces bias in many cases, its improvements over DPP are limited (Figure 7).**\\n\\nThank you for your valuable feedback. Firstly, we acknowledge that reducing bias becomes increasingly challenging when the initial level of bias is already minimal. Compared with other example selection methods, the bias values of DPP-based example selection are smaller (the **max value** of *AvgGF* is just 7% in Figure 6, which is Figure 7 of the previous submission). 
Additionally, it is generally understood that the goal of a debiasing method is to reduce bias to an acceptable range, as completely eliminating it is often difficult to achieve.\\n\\n**Weakness 3 (Impact of $k$)**\\n> **There is no analysis on the impact of increasing the number of ICL examples.**\\n\\nThank you for your suggestions. We have added the corresponding experiment results and analysis in **Appendix E**. To investigate the impact of increasing the number of ICL examples, we have assessed the gender bias performance of ***Llama-2-7B*** in toxicity detection under various numbers of ICL examples ($k\\\\in[2,6,10,14,18,22,26]$). \\n\\n1. Gender bias performance of *Llama-2-7B* on $AvgGF$\\n\\n|| Random|| Perplexity|| Similarity|| DPP||\\n| :- | :-: | :---: | :-: | :---: | :-: | :---: | :-: | :---: |\\n|| Mean| Max| Mean| Max| Mean| Max| Mean| Max|\\n| $k=2$ | 0.204($\\\\pm$0.05) | 0.310 | 0.105($\\\\pm$0.05) | 0.195 | 0.116($\\\\pm$0.05) | 0.203 | 0.129($\\\\pm$0.05) | 0.235 |\\n| $k=6$ | 0.204($\\\\pm$0.05) | 0.312 | 0.087($\\\\pm$0.06) | 0.211 | 0.067($\\\\pm$0.04) | 0.166 | 0.073($\\\\pm$0.05) | 0.195 |\\n| $k=10$ | 0.175($\\\\pm$0.03) | 0.247 | 0.075($\\\\pm$0.06) | 0.187 | 0.046($\\\\pm$0.03) | 0.158 | 0.062($\\\\pm$0.04) | 0.156 |\\n| $k=14$ | 0.179($\\\\pm$0.05) | 0.263 | 0.083($\\\\pm$0.05) | 0.189 | 0.041($\\\\pm$0.03) | 0.127 | 0.056($\\\\pm$0.03) | 0.105 |\\n| $k=18$ | 0.179($\\\\pm$0.05) | 0.283 | 0.058($\\\\pm$0.06) | 0.205 | 0.043($\\\\pm$0.05) | 0.154 | 0.051($\\\\pm$0.04) | 0.136 |\\n| $k=22$ | 0.187($\\\\pm$0.05) | 0.285 | 0.064($\\\\pm$0.06) | 0.211 | 0.035($\\\\pm$0.03) | 0.109 | 0.041($\\\\pm$0.03) | 0.094 |\\n| $k=26$ | 0.151($\\\\pm$0.04) | 0.229 | 0.063($\\\\pm$0.05) | 0.198 | 0.038($\\\\pm$0.03) | 0.130 | 0.046($\\\\pm$0.02) | 0.078 |\\n\\n2. 
Gender bias performances of *Llama-2-7B* on $MaxTG$ and $MaxFG$ are available in the revised appendix.\\n\\n**Findings:** As the number of ICL examples $k$ increases, the bias decreases overall, but the change in accuracy must also be considered. Please see the revision for detailed additional data.\\n\\n**Question 1 (Confusing Figure 6)**\\n> **Figure 6 is a little confusing and is not fully explained either in the main body of the paper or the Figure caption.**\\n\\nWe apologize for the confusion. We have added a more detailed explanation and hope this can resolve your confusion (lines 342-345). \\n\\n**Question 2 (Impact of $k$)**\\n> **How does increasing the number of ICL examples impact bias in other example selection methods?**\\n\\nPlease refer to the **findings** in the response to Weakness 3.\\n\\n\\n**Question 3 (Additional train time)**\\n\\n> **The additional train time required to add ReBE to existing methods.**\\n\\nThat's a good question; we give an example in the following table.\\n\\n| Task| Model| Dataset | Train set size | Dev set size | epochs | GPU| Training time |\\n| :-: | :-: | :-: | :-: | :-: | :-: | - | :-: |\\n| Toxicity Detection | Llama-2-7B | Jigsaw | 400| 200| 20| A100 80GB | `2.5h`|\"}", "{\"title\": \"Regarding Ablation Studies\", \"comment\": \"The current Table 4 has several limitations that prevent it from definitively attributing performance gains to ReBE.\\n\\nFirst, comparing just $L_{acc}$, $L_{bias}$, and ReBE's combined loss does not isolate all of ReBE's components. ReBE introduces both a contrastive learning mechanism and a prompt tuning approach. The current ablation simply shows the impact of different loss functions while keeping the rest of the architecture constant. To properly validate ReBE's effectiveness, we would need to test each architectural component independently.\\n\\nSecond, the table presents aggregate metrics without showing how different components interact. 
For example, we cannot tell if the contrastive learning is truly removing spurious correlations or if the improved metrics come from other aspects of the model. A more comprehensive ablation would track specific types of biases and show how each component affects them.\\n\\nFor instance, you identify a specific type of spurious correlation in Figure 3, showing that sentences labeled as \\\"sadness\\\" containing male pronouns are more frequently misclassified as \\\"fear\\\" compared to those with female pronouns (0.54 vs 0.08). You then claim that ReBE helps remove such spurious correlations through its contrastive learning component. However, Table 4's ablation study only shows overall metrics (Accuracy, AvgGF, MaxTG, MaxFG) when using different loss functions. What's missing is a direct connection between these metrics and the specific spurious correlations the paper identified. We need to see: i) How the male-sadness-to-fear misclassification rate specifically changes when using just $L_{acc}$ versus just $L_{bias}$ versus the full ReBE model, ii) Whether the contrastive learning component is actually learning to separate these specific cases, or if the improved metrics come from other effects of the architecture, iii) A demonstration that the positive/negative pairs in the contrastive learning are effectively capturing and correcting these specific biased associations.\\n\\nWithout this level of analysis, we cannot verify whether ReBE is truly addressing the spurious correlations it claims to target, or if the improved bias metrics are coming from other aspects of the model architecture. This distinction matters because it affects both our understanding of the problem and our confidence in the proposed solution.\\n\\nA more rigorous ablation study would systematically remove or modify each component of ReBE while measuring its impact on both performance and different types of bias (as described above). 
This would provide stronger evidence for which parts of the architecture are responsible for the observed improvements.\"}", "{\"title\": \"Author Response - Part 2\", \"comment\": \"**Weakness 2 (Baseline comparison)**\\n> **The absence of comparisons with existing debiasing methods and fine-tuning approaches makes it difficult to fully assess ReBE's advantages.**\\n\\nThank you for your valuable feedback. First, we would like to clarify that **fine-tuning methods are not suitable for comparison with ReBE**. Our work focuses on the bias risks of example selection for ICL and on debiasing methods that can serve ICL. However, fine-tuning approaches need to update the LLM parameters, which destroys the advantages of ICL.\\n\\nSecond, **there are a few debiasing methods specifically for ICL** **`(Note that they are NOT specifically for our research problem)`**. Although Hu et al. [1] proposed a fairness via clustering genetic (FCG) algorithm, FCG needs explicit feature vectors to complete the clustering. Due to this limitation, FCG cannot be applied to sentiment analysis or toxicity detection, so we cannot set it as a baseline for ReBE. Even so, we value the reviewer's opinion and compare ReBE with two context augmentation methods: **Counterfactual** and **gender-balanced**.\\n\\n+ **Counterfactual**\\n\\n For a dataset built from templates, like *EEC*, it is convenient to construct the corresponding counterfactual instance according to the templates. For example, according to the template `<person subject> feels <emotion word>`, the counterfactual instance of the sentence `Alonzo feels angry.` can be `Nichelle feels angry.` or `Amanda feels angry.`.\\n\\n+ **Gender-balanced**\\n\\n The gender-balanced context approach requires an equal or nearly equal number of examples for each gender type.\\n\\nFor clarity, we present the results of random-based and DPP-based example selections in the tables below. 
More details are available in the revised appendix.\\n\\nThe following table shows the gender bias of *OPT-6.7B* on **Sentiment Analysis** with ***EEC-paraphrase*** as the dataset. While the *EEC-paraphrase* dataset does not have its own templates, it is built upon the *EEC* samples, allowing us to generate counterfactual samples using the templates from the *EEC*.\\n\\n|Sentiment Analysis|$AvgGF$(Mean)|Max|$MaxTG$(Mean)|Max|$MaxFG$(Mean)|Max|Acc|\\n|-|-|-|-|-|-|-|-|\\n|Random|0.044($\\\\pm$0.03)|0.129|0.180($\\\\pm$0.09)|0.468|0.199($\\\\pm$0.09)|0.465|0.81|\\n|DPP|0.036($\\\\pm$0.03)|0.110|0.142($\\\\pm$0.08)|0.273|0.144($\\\\pm$0.06)|0.273|**0.87**|\\n|Gender-balanced|0.040($\\\\pm$0.03)|0.132|0.174($\\\\pm$0.08)|0.333|0.210($\\\\pm$0.09)|0.417|0.80|\\n|Counterfactual|0.035($\\\\pm$0.03)|0.125|0.145($\\\\pm$0.07)|0.369|0.149($\\\\pm$0.07)|0.369|0.77|\\n|Random+ReBE|0.034($\\\\pm$0.02)|0.086|0.151($\\\\pm$0.07)|0.322|0.191($\\\\pm$0.08)|0.447|0.78|\\n|DPP+ReBE|**0.033($\\\\pm$0.02)**|**0.073**|**0.120($\\\\pm$0.05)**|**0.250**|**0.122($\\\\pm$0.05)**|**0.247**|**0.87**|\\n\\nSince the counterfactual context method does not apply to datasets without templates, we remove it from the baselines tested on the Jigsaw dataset. 
The following table shows the gender bias of *Llama-2-7B* on **Toxicity Detection** with ***Jigsaw*** as the dataset.\\n\\n|Toxicity Detection|$AvgGF$(Mean)|Max|$MaxTG$(Mean)|Max|$MaxFG$(Mean)|Max|Acc|\\n|-|-|-|-|-|-|-|-|\\n|Random|0.179($\\\\pm$0.05)|0.283|0.215($\\\\pm$0.05)|0.312|0.215($\\\\pm$0.05)|0.312|0.76|\\n|DPP|0.051($\\\\pm$0.04)|0.136|0.059($\\\\pm$0.04)|0.156|**0.171($\\\\pm$0.18)**|0.667|0.85|\\n|Gender-balanced|0.116($\\\\pm$0.06)|0.236|0.205($\\\\pm$0.08)|0.500|0.205($\\\\pm$0.08)|0.500|0.81|\\n|Random+ReBE|0.058($\\\\pm$0.04)|0.186|0.070($\\\\pm$0.04)|0.210|0.176($\\\\pm$0.11)|**0.300**|0.86|\\n|DPP+ReBE|**0.045($\\\\pm$0.03)**|**0.102**|**0.053($\\\\pm$0.02)**|**0.116**|0.248($\\\\pm$0.20)|0.857|**0.88**|\\n\\n**`Takeaways:`**\\n\\n- There is currently **NO** suitable debiasing baseline specifically for ICL to compare with ReBE;\\n- Compared with the counterfactual and gender-balanced context method, ReBE is compatible with existing example selection methods and can achieve **lower bias** and **higher accuracy**.\\n\\n[1] [Strategic Demonstration Selection for Improved Fairness in LLM In-Context Learning](https://aclanthology.org/2024.emnlp-main.425/) (Hu et al, EMNLP 2024)\"}", "{\"metareview\": \"After carefully reviewing both the manuscript and the authors' rebuttal, I find that while this paper addresses an important and novel problem regarding bias amplification in example selection for in-context learning, and the overall research direction shows promise, as the reviewers have pointed out, the current manuscript has several methodological and experimental limitations that need to be addressed. The reviewers have raised valid concerns about the dataset construction methodology, the completeness of the experimental evaluation, and some inconsistencies in the reported results. 
The paper would benefit from more rigorous empirical validation, broader comparisons with existing approaches, and clearer demonstration of the proposed method's effectiveness, some of which the authors have begun to address in their rebuttal. Given these substantial concerns, I recommend that the authors carefully incorporate the reviewers' detailed feedback to strengthen their methodology and experiments, and consider submitting a revised version to a future venue. With these improvements, particularly in the experimental rigor and empirical validation, this work has the potential to make valuable contributions to our understanding of bias in large language models.\", \"additional_comments_on_reviewer_discussion\": \"I have read the messages in the discussion period and my opinion has been summarized as in the metareview above. I considered these points in my recommendation.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"A Kind Reminder\", \"comment\": \"Dear Reviewer `Av69`,\\n\\nThank you again for the valuable comments that helped improve our paper. Following your suggestions, we have carefully revised the manuscript to address your concerns.\\n\\nWe kindly remind you to review our reply along with the revised submission and consider updating your evaluation accordingly.\\n\\nBest,\\n\\nAuthors\"}", "{\"title\": \"Thanks for the response. Some concerns addressed but most remain.\", \"comment\": \"Thank you for your detailed response to my review. While your additional analyses and explanations help address some concerns, I believe several important issues still need to be addressed to strengthen the paper.\\n\\nRegarding the EEC-paraphrase dataset, the quantitative metrics you've provided demonstrate increased complexity but don't fully validate the dataset's quality. 
Consider your example paraphrase: \\\"Alan is experiencing a profound sense of frustration and irritation, resulting in a heightened state of emotional turmoil and discomfort.\\\" While this scores higher on complexity metrics, it contains redundant phrasing and potentially unnatural language (\\\"frustration and irritation,\\\" \\\"turmoil and discomfort\\\").\", \"this_illustrates_why_we_need_human_evaluation_to_verify\": \"i) Meaning preservation - Does the paraphrase maintain the original sentiment and intensity?, ii) Natural language use - Would humans actually express the emotion this way?, iii) Potential artifacts - Are there systematic patterns in how GPT-3.5 paraphrases that could bias the results? I strongly recommend conducting human evaluation to verify meaning preservation, naturalness, and potential systematic biases introduced by GPT-3.5's paraphrasing patterns.\\n\\nThe toxicity detection experiments are a valuable addition, but they raise important questions about the consistency of ReBE's effectiveness across tasks. The substantial difference in MaxFG values between sentiment analysis (0.2-0.4) and toxicity detection (up to 0.857) suggests task-dependent performance that warrants deeper analysis and discussion in the paper.\\n\\nRegarding ablation studies, while Table 4's analysis of loss functions is no doubt helpful, more granular investigation would strengthen the work. This could include analyzing different contrastive learning configurations (positive/negative pair construction, temperature parameter tuning, similarity metrics), prompt tuning variations (length, position, architecture), and comprehensive loss function analysis (weight parameter sweeps, alternative formulations, training dynamics). More on this in a separate comment.\\n\\nYour explanation regarding statistical significance testing, while thoughtful, doesn't address the need to validate the reliability of your results. 
While the inputs may come from related distributions, we need to verify that performance differences between methods are meaningful rather than random variation. Standard approaches like paired t-tests or bootstrap resampling would help establish the robustness of your findings. \\\"Because the inputs of LLM in ICL are constructed based on the same dataset and belong to the same distribution, the outputs of the same LLM under various example selection methods still belong to the same overall distribution.\\\" This reasoning is flawed because we're not interested in whether the outputs come from the same distribution---we want to know if the differences in performance metrics (accuracy, bias measures) between methods are statistically reliable.\\n\\nFinally, the paper's presentation would benefit from clearer figure explanations (particularly Figure 1's crucial \\\"grey area\\\"), better motivation of methodological choices, and a more cohesive literature review that builds a narrative rather than listing related work. The methodology section should better justify choices like: i) The specific contrastive learning formulation, ii) The choice of loss function components, iii) The prompt tuning architecture. The literature review currently presents related work as a list of papers rather than building a coherent narrative about how the field has developed and where this work fits.\"}", "{\"title\": \"General Response: Novelties and Answers to common concerns\", \"comment\": \"Dear Chairs and Reviewers,\\n\\nWe sincerely thank all the reviewers for their time and valuable feedback on our work. \\n\\nIn this work, **we are the first** to explore **the `severe bias risks` of example selection methods for ICL** , which is **`ignored by previous work`**, and **propose a novel debiasing method, ReBE.** Unlike fine-tuning, ICL is more flexible and suitable for few-shot scenarios, as it requires minimal data and avoids parameter updates. 
However, utilizing ICL to deploy LLMs to downstream tasks **has the risk of preserving or even exacerbating biases.** Therefore, we try to **`fill this gap`** by exploring and mitigating the ethical risks of example selection, **which is `really non-trivial`**.\", \"we_summarize_the_common_concerns_raised_by_reviewers_and_our_responses_as_follows\": \"1. **Lack of the *EEC-paraphrase* dataset validation**\\n\\n **Response**: Previous work has demonstrated the capability of LLMs (including GPT-3.5-Turbo) to produce diverse and valid paraphrases under guidance [1]. To verify the quality and diversity of *EEC-paraphrase*, we conduct the evaluation of *EEC-paraphrase* on various metrics and find that *EEC-paraphrase* **outperforms** the original *EEC* dataset across all metrics. We also manually sampled and reviewed the sentences in *EEC-paraphrase*. **Our dataset validation process follows the practices adopted by previous work.**\\n\\n2. **Lack of tasks beyond the sentiment analysis**\\n\\n **Response**: To verify the generalizability, we have **supplemented the test** of LLMs for **toxicity detection** using Jigsaw as the dataset. Consistent with the results in sentiment analysis, example selection methods for ICL **amplify the maximum value of gender bias**, and **ReBE also exhibits effective debiasing in toxicity detection**.\\n\\n3. **Lack of comparison between ReBE and debiasing baselines**\\n \\n **Response**: **Our paper is the first work** to discover the bias problem, and thus no baseline could solve the problem we found. Since there is currently no suitable debiasing baseline specifically for ICL, we compare ReBE with two context augmentation methods. 
Compared with the context counterfactual and gender-balanced method, ReBE is compatible with existing example selection methods and can achieve **lower bias** and **higher accuracy**.\\n\\n[1] [ChatGPT to Replace Crowdsourcing of Paraphrases for Intent Classification: Higher Diversity and Comparable Model Robustness](https://aclanthology.org/2023.emnlp-main.117) (Cegin et al., EMNLP 2023)\\n\\n\\nOther valuable comments from the reviewers are responded to point-by-point below.\\n\\n\\nBelow, we summarize **`a list of revisions made in the newest updated submission`** for your review.\\n\\n> **Section 1**\\n>\\n> + **Lines 94-95**: Corrected the location of the inserted reference.\\n\\n> **Section 2**\\n>\\n> + **Lines 174-175**: Added the citation to dataset validation of Appendix A.\\n> + **Lines 194-195**: Added the citation to toxicity detection experiments of Appendix F.\\n> + **Lines 223-225**: Stated the finding more rigorously.\\n\\n> **Section 4**\\n>\\n> + **Lines 342-344**: Added more detailed explanation of Figure 5.\\n\\n> **Section 5**\\n>\\n> + **Lines 444-446**: Explicitly illustrated the results of the ablation study.\\n> + **Lines 460-476**: Added the baseline comparison of ReBE.\\n> + **Lines 486-487**: Added the citation to supplementary experiments on the impact of $k$ on bias.\\n>\\n> **Appendix**\\n>\\n> + **Appendix A.1 (Lines: 717-740)**: Added the description and results of dataset validation.\\n> + **Appendix E**: Added a new section to analyze the impact of $k$ on bias.\\n> + **Appendix F**: Added the experiments on toxicity detection.\\n> + **Appendix G**: Added the baseline comparison of ReBE.\\n> + **Appendix G.1**: Moved the debiasing discussion previously in subsection 3.5 to the newly added Appendix G.1 for better organization.\\n\\nThank you again for your constructive comments.\\n\\nKind regards,\\n\\nThe authors\"}" ] }
8Agcic0csh
Unlocking SVD-Space for Feedback Aligned Local Training
[ "Arani Roy", "Marco Paul E. Apolinario", "Shristi Das Biswas", "Kaushik Roy" ]
Deep Neural Networks (DNNs) are typically trained using backpropagation, which, despite its effectiveness, requires substantial memory and computing resources. To address these limitations, we propose a novel local training framework that enables efficient and scalable neural network training without relying on global backpropagation. Our framework harnesses the alignment of Singular Value Decomposed (SVD) weight space with feedback matrices, guided by custom layerwise loss functions, to enable efficient and scalable neural network training. We decompose weight matrices into their SVD components before training, and perform local updates on the SVD components themselves, driven by a tailored objective that integrates feedback error, alignment regularization, orthogonality constraints, and sparsity. Our approach leverages Direct Feedback Alignment (DFA) to eliminate the need for global backpropagation and further optimizes model complexity by dynamically reducing the rank of the SVD components during training. The result is a compute- and memory-efficient model with classification accuracy on par with traditional backpropagation while achieving a 50-75% reduction in memory usage and computational cost during training. With strong theoretical convergence guarantees, we demonstrate that training in the SVD space with DFA not only accelerates computation but also offers a powerful, energy-efficient solution for scalable deep learning in resource-constrained environments. Code is available.
[ "Direct Feedback Alignment", "Local learning", "Singular Value Decomposition" ]
Reject
https://openreview.net/pdf?id=8Agcic0csh
https://openreview.net/forum?id=8Agcic0csh
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xsJtAvVvPm", "wg1Jwl2Ffd", "vxy0cev419", "uNAOWcZshU", "rKSCHB8lzx", "mu27co4nhB", "kFiShqSj6U", "hox1tL2O7p", "eRyPTGoJls", "eEhdlGLuWy", "dbhqUXPk31", "ZXiq5QGfDZ", "UQbDZUo3zu", "RMLa8PUfle", "QAnGSmIbX3", "OPNxt0DqRv", "KEwoOmEgl6", "FSsyKV053n", "80ZVhdl3pV", "7bhNPhXF2O", "044PXNtAFy" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_review", "meta_review", "official_comment", "official_comment" ], "note_created": [ 1733247893705, 1733246618418, 1732393612313, 1733056529387, 1732390817144, 1733185352868, 1730672098257, 1733130137765, 1732392839695, 1730218906149, 1730638715534, 1732398077637, 1732757414080, 1730677005141, 1732987686442, 1737524114150, 1732398447548, 1730137075358, 1734686264231, 1732395066260, 1732396253763 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11259/Authors" ], [ "ICLR.cc/2025/Conference/Submission11259/Authors" ], [ "ICLR.cc/2025/Conference/Submission11259/Authors" ], [ "ICLR.cc/2025/Conference/Submission11259/Authors" ], [ "ICLR.cc/2025/Conference/Submission11259/Authors" ], [ "ICLR.cc/2025/Conference/Submission11259/Reviewer_dZya" ], [ "ICLR.cc/2025/Conference/Submission11259/Reviewer_Rq2e" ], [ "ICLR.cc/2025/Conference/Submission11259/Reviewer_SmsB" ], [ "ICLR.cc/2025/Conference/Submission11259/Authors" ], [ "ICLR.cc/2025/Conference/Submission11259/Reviewer_SmsB" ], [ "ICLR.cc/2025/Conference/Submission11259/Reviewer_8tHJ" ], [ "ICLR.cc/2025/Conference/Submission11259/Authors" ], [ "ICLR.cc/2025/Conference/Submission11259/Authors" ], [ "ICLR.cc/2025/Conference/Submission11259/Reviewer_HRY5" ], [ "ICLR.cc/2025/Conference/Submission11259/Reviewer_HRY5" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], 
[ "ICLR.cc/2025/Conference/Submission11259/Authors" ], [ "ICLR.cc/2025/Conference/Submission11259/Reviewer_dZya" ], [ "ICLR.cc/2025/Conference/Submission11259/Area_Chair_bCH2" ], [ "ICLR.cc/2025/Conference/Submission11259/Authors" ], [ "ICLR.cc/2025/Conference/Submission11259/Authors" ] ], "structured_content_str": [ "{\"comment\": \"We sincerely thank the reviewer for their deep insights regarding our work. We have carefully considered their comments and will incorporate their suggestions into the next iteration of our paper. Below, we address each point in detail:\\n\\n1. Theoretical Analysis and Independence of Local Losses: \\n We have addressed the issue of local losses not being independent in Section A.2.3. Specifically, we state: \\n \\\"We assume that the linear separability condition holds for this convergence, which means that for early layers, the loss produced at each layer guides subsequent layers as well. However, from empirical results, we observe that for deeper networks (beyond ResNet-32), this assumption no longer holds.\\\"\\n In future iterations, we will further clarify this statement and refine the associated theoretical analysis.\\n\\n2. Extending Custom Loss to Standard DFA: \\n We misunderstood the reviewer\\u2019s suggestion regarding extending custom loss components to standard DFA. We interpreted it as applying all components of the custom loss rather than individual components. We apologize for this miscommunication and will revisit this idea for more focused comparisons.\\n\\n3. Orthogonal Loss Function and Smoothness: \\n For the orthogonal loss, we emphasize that the SVD components of the weights need to be bounded to approach smoothness. This constraint ensures better theoretical grounding. Additionally, we clarify that while we formulated our proofs independently, we used large language models (LLMs) to verify mathematical formulas and improve the clarity of our proofs.\\n\\n4. 
Convergence of DFA with Additional Losses:\\n As there is no general proof of DFA\\u2019s convergence, it is challenging to derive theoretical insights into the convergence of the model with added losses, as stated by the reviewer. However, our approach focuses on making the composite loss function more convex, aiming to improve the convexity of the overall model function. We acknowledge that this assumption fails for deeper networks (beyond ResNet-32) and plan to address this limitation in future work.\\n\\nWe greatly appreciate the reviewer\\u2019s feedback, which has helped us identify key areas for improvement and refinement.\"}", "{\"comment\": \"We thank the reviewer for highlighting the incoherences in our work.\\n\\ni) Regarding Table 1: The DFA results were mistakenly attributed to Akrout et al., 2019. This was a typographical error, and we sincerely apologize for the oversight.\\n\\nii) Regarding the Preprint: Upon contacting the authors of the preprint, they confirmed that PEPITA was not run on SmallConv, as reported. Instead, the results are based on the same architecture used in the original PEPITA paper.\\n\\niii) We have discussed the limitation of the hyperparameter search for the loss coefficients in the paper. We will investigate this limitation in more depth in future work.\\n\\nWe acknowledge these errors and will ensure they are corrected in the next iteration of our paper. Thank you for bringing these issues to our attention.\"}", "{\"title\": \"Response to Reviewer\", \"comment\": \"We thank the reviewer for highlighting the strengths of our paper. Below, we provide detailed answers to the reviewer's concerns.\\n\\ni) A dynamic rank reduction strategy is used to reduce the rank of the weight matrix progressively. 
How is it implemented and guaranteed, especially considering that updates to SVD components are not inherently directed towards reducing rank?\", \"authors\": \"We have added a detailed explanation of the convolutional layer decomposition in SVD-space in Section 3.1, as per the suggestions.\\n\\nWe hope these updates address the reviewer\\u2019s concerns and welcome any additional suggestions or clarifications.\"}", "{\"comment\": \"We thank the reviewer for their comments. We will address these concerns in future work or in the next iteration.\\n\\ni) ImageNet/ResNet32 - ResNet 32 is an architecture built for CIFAR-10, not ImageNet. Not clear to me at all how the architecture is being changed to run with ImageNet/how this is a good model to be used with ImageNet.\", \"authors\": \"We appreciate the reviewer\\u2019s observation regarding the use of ResNet-32 for ImageNet. We acknowledge that ResNet-32 is traditionally designed for CIFAR-10 and not directly optimized for ImageNet. In our experiments, we adapted ResNet-32 for ImageNet by increasing the input resolution, modifying the stride and pooling layers, and adjusting the number of filters to handle the larger image sizes and dataset complexity. As our model cannot yet extend to ResNet-50 (due to the linear separability assumption, as mentioned in the limitations), we adapted ResNet-32 for ImageNet instead. Local training methods usually suffer for deeper networks (beyond 10 layers). We leave the extension of our model to deeper networks to future work.\\n\\nFor the other improvements suggested, we will incorporate them in the next iteration of our paper. 
\\n\\nWe thank the reviewer again for contributing to the improvement of our paper.\"}", "{\"title\": \"Response to Reviewer\", \"comment\": \"We thank the reviewer for their detailed and valuable feedback, which led to substantial improvements in the revised paper.\", \"we_address_each_of_your_comments_below\": \"\", \"from_weaknesses_and_questions\": \"i) Despite the main motivation of methodology/paper, only real-world compute/memory complexity results in paper (including appendix) are VGG-13/Figure 2. \\nand\\nWhat are the real-world compute/memory results for the other models in your work?\", \"authors\": \"There was no particular reason to choose VGG; we chose it because it makes the layerwise results easy to visualize.\\n\\nOnce again, thank you for your time and consideration, and for giving us such a detailed review with the opportunity to improve our paper.\", \"hyperparameter_selection\": \"\", \"we_have_added_a_detailed_paragraph_in_section_4\": \"Experimental Setup to explain our hyperparameter choices. Specifically, we used a learning rate of $1 \\\\times 10^{-4}$ for smaller datasets (e.g., CIFAR-10) and $5 \\\\times 10^{-4}$ for larger datasets (e.g., ImageNet). The loss-based hyperparameter values were kept consistent across experiments, with adjustments made only to the overall learning rate for layerwise updates. 
As layer objectives are largely independent, tuning coefficients involves optimizing simpler, localized objectives, allowing faster cross-validation with a narrower range of candidate values.\", \"rank_reduction_schedule\": \"We expanded Section 3.4 to provide a clearer explanation of our progressive rank reduction strategy, highlighting how it balances memory and computational efficiency while maintaining accuracy.\", \"experimental_details_for_cifar_100_and_imagenet\": \"Detailed training setups for CIFAR-100 and ImageNet, including preprocessing, hyperparameter settings, and hardware specifications, have been added to Appendix A.3 to ensure transparency and reproducibility. \\n\\nv) How much do the five loss coefficients themselves have to be tuned to get good performance, i.e., how different are they for your different models/results? This is important as, if they have to be found for each different experimental setup by sweeping training, it's hard to justify the method as speeding up training.\"}", "{\"comment\": \"I thank the authors for their answers, which addressed a few of my concerns. I also acknowledge that the updated paper has improved.\\n\\n---\\n\\nI still see a lot of issues in the theoretical analysis of Appendix A.1. In particular, I see that the errors I mentioned in my previous message have not been addressed.\\n- The local losses are not independent: modifying the weights of layer $i$ will impact the input of a later layer $j$, as well as the error... 
Yes, if we were to optimize a single layer (with the other ones frozen) then maybe it could converge, but if all layers are optimized at the same time there is no guarantee of convergence at all for any of the losses.\\n- The orthogonal regularization is NOT Lipschitz smooth\\n- There is no proof of convergence of DFA in the general case, so how can we get any theoretical insight into the convergence of the model after adding more losses?\\n\\nThe authors go through long and sketchy proofs to try to prove the convexity and Lipschitz smoothness of the local losses, which is still wrong in its current form. Moreover, they use these results to assess the convergence of the global loss. This is equivalent to proving the convergence of an arbitrary neural network, which is not possible in the general case given that it is neither convex nor Lipschitz smooth.\\n\\n**I am starting to think that the proof has been written by an LLM.**\\n\\n---\\n\\n> The custom loss operates on the SVD-components of the forward and feedback weights. We wouldn't be able to apply the same loss to normal weights.\\n> \\n\\nI beg to disagree: the cosine similarity loss does not rely on the SVD decomposition. Similarly, the alignment loss can very simply be adapted to the standard DFA (using $||W_i - B_i||^2$).\\n\\n---\\n\\nWhile I like the idea of the paper, the obvious errors I have seen in the proof and in the first version make the rest of the paper very unreliable. The poor quality of some answers to my questions makes me wonder whether the authors know what they are doing. The paper is not in a state where it should be accepted, so I am downgrading my score.\\n\\nThat being said, the idea is indeed interesting and I encourage the authors to work on it and resubmit. I would advise dropping the proofs, which are not necessary to the rest of the paper, and focusing more on the experimental part. 
I especially would like to see a fairer comparison with DFA, by either adding the alignment losses to DFA, or removing them from SSA. The current paper seems to try to do two things at once: i) making DFA more memory efficient using low-rank decomposition, and ii) improving DFA using new alignment losses. These are two very different contributions to the field, and should be made clearer -- it would also help sell the method better.\"}", "{\"summary\": \"The paper proposes a new learning framework dubbed SSA that combines DFA with SVD. A local loss function with five components and a dynamic rank reduction strategy are adopted to train accurate and efficient DNNs. Comprehensive experimental results demonstrate that SSA can reach accuracy comparable to BP while reducing memory and computational cost by a large margin.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The idea of combining DFA with SVD is novel.\\n2. The designed loss function takes care of many different aspects of training.\", \"weaknesses\": \"1. The paper focuses on reducing memory and computational cost during training. However, to deploy models in resource-constrained environments, efficiency during post-training inference is more important.\\n2. It's better to provide a comparison between the proposed method and model compression techniques like quantization-aware training (QAT).\\n3. The proposed method has many limitations such as hyperparameter sensitivity and inability to scale to larger models. Also, no evidence is provided to show the effectiveness of SSA on transformer-based models.\", \"questions\": \"Please see the weakness part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I thank the authors for taking into account my remarks. 
I thoroughly re-read the improved paper.\nOverall the paper is much improved; unfortunately, it still requires a lot of improvement to be accepted, especially in the experimental part.\n\nLet me point out some incoherences:\n\n- Table 1 seems very weird:\n>DFA was not originated by (Akrout et al., 2019). Furthermore this specific paper focused on learning the feedback connections in the FA setting and not DFA. Lastly, the results they got were on par with BP for ResNet 18 and ResNet 50 when trained on ImageNet.\n\n> The PEPITA results you mention come from a recent preprint paper. This preprint reports false results on PEPITA, claiming training on a SmallConv network but reporting **the exact same results** as the original PEPITA paper on a single convolutional network. The code attached in this preprint further does not include the PEPITA baseline.\n\nI would like to point out that the authors should not base themselves on preprints to copy/paste some results and propagate false information. The baselines should be sourced from published papers or (in the ideal case) re-run to verify coherence.\nThe given results should also be attributed to the correct paper, as (Akrout et al.) did not report results for DFA.\nThese errors make the whole experimental section unreliable.\n\n- There is no hyper-parameter search for the composite loss reported. A 5 (!) component loss is presented with fixed hyper-parameters set across the experiments. Though a discussion is now in the appendix, the choice of these hyper-parameters still seems to come down to pure luck.\"}", "{\"title\": \"Response to Reviewer\", \"comment\": \"We thank the reviewer for their time and consideration in reading our paper.\n\nWe answer your queries here:\n\ni) The paper focuses on reducing memory and computational cost during training. 
However, to deploy models in resource-constrained environments, efficiency during post-training inference is more important.\", \"authors\": \"We provide the following tables highlighting the differences between SSA and QAT, and then compression percentages for QAT and SSA methods.\n\n| **Aspect** | **SSA (Ours)** | **QAT** |\n|---------------------------|---------------------------------|---------------------------------|\n| **Objective** | Low-rank training optimization | Precision-aware training |\n| **Application Stage** | Full training and inference | Full training and inference |\n| **Efficiency Gains** | Memory and compute savings | Reduced bit-width (e.g., 8-bit)|\n| **Training Scope** | Low-rank updates to weights | Quantized forward and backward passes |\n| **Accuracy Impact** | Comparable to BP | Slight drop at low precision (e.g., 4-bit) |\n| **Flexibility** | Adaptive to model size and rank| Fixed precision during training|\n| **Biological Plausibility** | Yes | No |\n\nWe provide a comparison of compression methods on VGG-11, reflecting inference-time memory savings.\n\n| **Method** | **Bit-Width (W/A) or Rank (r)** | **Compression (%)** | **Accuracy (%)** |\n|----------------------|---------------------------------|----------------------|------------------------|\n| **Full-Precision** | 32/32 | 0% | 91.7 to 93.8 |\n| **BinaryConnect** | 1/32 | 48.44% | 91.73 |\n| **BNN** | 1/1 | 98.44% | 89.85 |\n| **HWGQ** | 1/2 | 97.66% | 92.51 |\n| **LQ-Nets (3/2)** | 3/2 | 95.31% | 93.8 |\n| **DMBQ** | 0.7/32 | 48.91% | 93.7 |\n| **SSA (Ours)** | Rank-reduced | 75% | Comparable to BP |\", \"ref\": \"Babak Rokh, Ali Azarpeyvand, and Alireza Khanteymoori. 2023. A Comprehensive Survey on Model Quantization for Deep Neural Networks in Image Classification. ACM Trans. Intell. Syst. Technol. 14, 6, Article 97 (December 2023), 50 pages. 
https://doi.org/10.1145/3623402\n\niii) The proposed method has many limitations such as hyperparameter sensitivity and inability to scale to larger models. Also, no evidence is provided to show the effectiveness of SSA on transformer-based models.\n\nWe have addressed hyperparameter settings in Section 4, Hyperparameter Selection. We state the hyperparameters used in that section based on the constraints of our theoretical analysis and verified them with k=3-fold cross-validation. While our paper does not focus on exhaustive hyperparameter tuning, we acknowledge this as a potential limitation and have discussed it in the Limitations section. Future work may explore more sophisticated methods for tuning these parameters. \n\nWe can scale to ResNet-32 and convolutional layers, which is not the case for most feedback alignment-based local learning methods. Regarding transformer-based models, the scope of this work was primarily to validate a novel local learning rule in the SVD-space of DNN layers. While our experiments focus on CNNs, the design of SSA, particularly its operation in the SVD-space and optimization of local loss objectives, is architecture-agnostic. Future research will extend SSA to deeper networks and to transformer-based models, leveraging their modular self-attention mechanisms and natural compatibility with our rank-reduction strategy. 
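As a generic illustration of what operating in the SVD-space buys, consider the following minimal, self-contained sketch (shapes and rank are assumed for illustration; it shows only the truncation and memory arithmetic, not our actual update rule or losses):

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((64, 32))  # a dense layer's weight matrix

# Parameterize the layer by its SVD factors instead of W itself
U, S, Vt = np.linalg.svd(W, full_matrices=False)

# Dynamic rank reduction: keep only the top-r singular directions
r = 8
U_r, S_r, Vt_r = U[:, :r], S[:r], Vt[:r, :]
W_lowrank = U_r @ np.diag(S_r) @ Vt_r

# Storing the truncated factors is cheaper than the dense matrix
dense_params = W.size                              # 64 * 32 = 2048
factored_params = U_r.size + S_r.size + Vt_r.size  # 512 + 8 + 256 = 776

# By Eckart-Young, the Frobenius reconstruction error equals the norm
# of the discarded singular values
err = np.linalg.norm(W - W_lowrank, ord="fro")
tail = np.sqrt(np.sum(S[r:] ** 2))
assert np.isclose(err, tail)
```

One common way to apply SVD to a convolutional kernel is to reshape the 4-D kernel to a 2-D matrix of shape (out_channels, in_channels * k * k) first; the revised Section 3.1 describes our exact treatment of convolutional layers.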
\n\nWe thank the reviewer for prompting us to consider the efficiency of post-training inference more deeply and to further explore the compression aspect of our lightweight model.\"}", "{\"summary\": \"This paper proposes SVD-Space Alignment (SSA), a local layerwise training method that combines Direct Feedback Alignment (DFA) with a Singular Value Decomposition (SVD) of the layer weights.\nThe authors propose to decompose the weights and feedback matrices using SVD and to use heavily regularized local losses to ensure alignment, sparsity and orthogonality.\n\nExperiments on three networks of different depths are conducted, presenting competitive results with back-propagation on image classification tasks with a significant reduction of memory usage and computational cost during training.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. Using low rank factorization together with DFA is an interesting and natural idea.\n\n2. The presented results seem competitive with back-propagation (BP), while heavily reducing memory usage and computational cost during training.\n\n3. An interesting ablation study over each component of the composite loss is provided.\", \"weaknesses\": \"1. 
A major issue of the paper is the lack of coherence of the writing and somewhat evident mistakes.\nEquation 6, for example, states that the cross-entropy loss is $L_{CE}=y_{predict}-y_{label}$, while this is instead supposed to be its derivative with respect to the predictions.\nIn standard DFA this derivative is denoted $e$ and is then projected onto every layer $l$ thanks to fixed feedback matrices $B_l$, with a weight update reading: $W_l^{t+1}=W_l^t - \\eta B_l e \\odot f'(a_l)h_{l-1}^T$, with $a_l$ and $h_l$ being respectively the pre-activation and activation of the layer.\nThe choice of the loss to optimize in DFA (and thus of its derivative $e$) is thus left to the user, which is clearly not the case in the paper.\n\n2. The notations should be revised in order to be coherent.\n\n3. The composite loss comprises some terms that are neither well motivated nor explained. While the provided ablation study is interesting, it clearly lacks details; for example, one could ask if the cosine similarity loss indeed improves the cosine similarity. The behavior of each individual objective with respect to the others could also be interesting to study.\n\n4. Some sentences sound like claims and are not motivated: e.g. l.254: what are the \\\"stable and smooth updates\\\" ensured by DFA? Is there a study of the \\\"stability of the training process\\\" (l.262)?\n\n5. 
The empirical results are somewhat strange in many ways:\n- one would expect a more extensive comparison with DFA to be conducted, as SSA is basically a low-rank approximation of DFA\n- As DFA has been successfully extended to ResNet-56 by (Sanfiz and Akrout, 2021) with open-sourced code and even to Transformers by (Launay et al., 2020), excluding DFA evaluation from Table 2 where shallower networks (VGG-13 and ResNet-32) are tested, claiming it is unable to scale to larger networks, is false.\n- The reported results with PEPITA on a 3-convolutional-layer network (Table 1) are very surprising, as the original paper by (Dellaferrera and Kreiman, 2022) only reported results for one convolution and no other more recent paper has successfully trained a deeper convolutional network to the best of my knowledge. I would be very interested to know more about how those results were obtained, as it has been observed by (Srinivasan et al., 2024) that PEPITA gets progressively worse results as the network grows deeper (for MLPs). I would expect that this observation would stay true for convolutional networks, but the results seem to be the exact same as for the architecture used in the original paper by (Dellaferrera and Kreiman, 2022)\n- I find the reported results on ImageNet very strange, as the Top-5 Accuracy is lower than the Top-1 Accuracy (Table 2). Are the two column titles inverted?\n\n6. The theoretical analysis provided in the appendix is unfortunately false, as the composite loss function is not Lipschitz smooth because the cosine similarity component is not Lipschitz smooth. Lines 830-834 are false if $x$ is very small.\n\n7. The paper would greatly benefit from a thorough proofread.\", \"questions\": \"8. What is your interpretation of the ranges of the different hyperparameters of the composite loss, given that some components are bounded and some are not?\n\n9. 
How would different schemes of initializing the feedback matrices impact the results you give, as two components of the composite loss are dependent on those matrices? \n\n10. How do you perform SVD on convolutional layers?\n\n11. How do you update specific operations in ResNets such as downsampling convolutions, batch-normalization, etc?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a backpropagation-free training framework called SVD-Space Alignment (SSA). SSA decomposes weight matrices by Singular Value Decomposition (SVD) and then updates the SVD components by direct feedback alignment (DFA) under the guidance of customized layerwise loss functions. The experimental results demonstrate that SSA achieves classification accuracy close to that of backpropagation, with significantly reduced memory usage, computational cost, and energy consumption. A theoretical proof for SSA convergence guarantees is also presented. This novel local training framework provides a promising energy- and computation-efficient solution for deep learning in resource-constrained situations.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper creatively combines SVD and DFA for neural network training. The authors introduce a tailored layerwise loss function that incorporates several different constraints so that the local layerwise updates during training are both convergent and efficient.\n\nAs network sizes grow, training becomes increasingly unaffordable for organizations and individuals with limited resources. 
This research presents a compelling alternative to conventional backpropagation for training deep neural network models.\n\nThis paper is well organized and clearly written.\", \"weaknesses\": \"The effectiveness of SSA for training deep convolutional neural networks has been validated in this work. Given that transformers currently dominate in domains such as Natural Language Processing (NLP), it would be beneficial for future research to explore the extension of SSA to other architectures.\", \"questions\": \"1. Beyond theoretical analysis, it would be beneficial to present empirical results that demonstrate the convergence rate and training stability of SSA in comparison to BP.\n\n2. A dynamic rank reduction strategy is used to reduce the rank of the weight matrix progressively. How is it implemented and guaranteed, especially considering that updates to SVD components are not inherently directed towards reducing rank?\n\n3. The decomposition of a convolutional layer while retaining its hierarchical information is described in Appendix A.3; however, it may be challenging for readers to grasp. Including a schematic diagram would enhance understanding by providing a visual aid.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer pt 1\", \"comment\": \"We appreciate the reviewer\u2019s detailed feedback. We will address the queries below:\n\ni) As SSA is essentially a low-rank approximation of DFA, I would expect a more extensive comparison with DFA, as the proposed method is built upon it.\", \"authors\": \"We appreciate the reviewer\u2019s observation about the theoretical analysis. While the cosine similarity component or orthogonal regularizer component is not inherently Lipschitz smooth, we address this by applying constraints that ensure bounded gradients. 
Specifically, we normalize the norms of $x$ and $y$ to 1, effectively projecting the non-convex cosine similarity loss or orthogonal loss onto a convex sublevel set. This approach bounds the gradient differences, making the loss components quasi-convex and Lipschitz smooth under these conditions. These constraints are integrated into our custom loss function to maintain theoretical consistency.\nWe will revise the part of our theoretical analysis where we state that the loss is always decreasing globally, as pointed out by the reviewer.\"}", "{\"title\": \"Thank you to all the reviewers; Summary of major changes\", \"comment\": \"To all the reviewers,\n\nThank you so much for your valuable comments. We got some great questions that helped us improve our manuscript, both in experimentation and presentation.\n\nHere are the major changes made to the revised manuscript:\n\ni) Figure 1 is updated with the revised notations, which leads to a clearer and better understanding. Also, we have incorporated a clearer and corrected presentation of the custom loss function in Section 3.3.\n\nii) We added the explanation of the SVD decomposition for convolutional layers in Section 3.1.\n\niii) We have added further details on the dynamic rank reduction strategy in Section 3.4, and introduced a paragraph 'Hyperparameter Selection' in Section 4 Experimental Setup.\n\niv) We have added Section 6.1 'Comparison with DFA' to our paper.\n\nv) Lastly, we have revised our comments on Global Convergence of Model Loss in Section A.2.3.\n\nPlease let us know your comments on our response and revised manuscript. We are open to further discussion if needed.\"}", "{\"summary\": \"Motivated by reducing the memory and computational cost of training deep neural networks, the authors propose a local training methodology based on Direct Feedback Alignment (DFA). DFA is limited in its application due to poor scaling to deep models and complex tasks. 
The authors propose to improve upon DFA by decomposing the model's weight matrices into orthogonal components using SVD, performing local updates on the orthogonal components, and decreasing the rank of this representation over the course of training. The authors provide convergence proofs in the appendix, and evaluate their method in training VGG-13, ResNet-32, and a custom small 3-layer CNN on CIFAR-10, CIFAR-100 and ImageNet.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The methodology of the proposed extension of DFA is intuitive and interesting, specifically having a decreasing rank schedule over training and decomposition of the weight matrices.\", \"The method is well-motivated, with compute and memory complexity being a challenge in contemporary applications and research with deep neural networks.\", \"The related work does mention many other local training methods, although it might be a bit too dismissive of many as baselines.\", \"The authors provide a small ablation over the five different loss components, although this could have been more detailed.\", \"Please note I did not evaluate the convergence proofs in the appendix, as they were not in the main paper. Given the 5 different loss components in the methodology proposed however, I do believe they are necessary.\", \"Both top-1 and top-5 are evaluated for Imagenet (unlike so much contemporary work)\"], \"weaknesses\": [\"Despite the main motivation of the methodology/paper, the only real-world compute/memory complexity results in the paper (including the appendix) are for VGG-13 (Figure 2). This is very far from sufficient, and VGG itself is not an architecture that should be used when claiming efficiency results to begin with. The authors have accuracy results from more reasonable architectures (e.g. ResNet-32) with their method, so there is no reason not to include the compute/memory savings also. 
Notably the authors are themselves aware that lightweight architectures (e.g. MobileNet in 6.1) would be much better motivated, so I'm not seeing good reasons for the focus on only VGG.\", \"No variance/stddev in results. The results lack any mention of having been evaluated over multiple random inits, etc., and no variance or other measure of significance of the results is available. For small datasets/models, and with the claims behind the method being computationally efficient, there is no reason not to evaluate over multiple training runs and provide mean/variance over these runs to better evaluate the generalization results.\", \"Although hard to evaluate due to the above, with the results as presented, there is no clear evidence that the results are significant. In many of the results AugLocal or SVD-BP appear to be very close to or better than the results of SSA.\", \"The work is not repeatable as presented, with a lack of experimental details, although there are some in the appendix:\", \"Experimental details are stated only in general terms, e.g. \\\"learning rates ranging from 1e-4 to 5e-4\\\", rather than listed for specific experiments.\", \"The five loss coefficient hyper-parameters ($\\\\alpha, \\\\beta, \\\\gamma,\\\\delta,\\\\epsilon$) are very important. In the appendix the values for these hyper-parameters are listed to be used \\\"ideally\\\", and I'm not sure what this means; does it mean they vary or not? 
Unfortunately there is very little explanation as to how those were found (beyond \\\"cross-validation\\\"), nor are the exact values of these hyper-parameters given for each of the experimental results (unless they are all the same).\", \"Another part of the methodology I would consider a hyper-parameter, the choice of which is poorly explained, is the rank reduction schedule.\", \"Missing experimental details for CIFAR-100.\", \"Missing experimental details for ImageNet.\", \"Tables are really all over the place, often with a single odd line of text randomly interspersed between them, making it hard to read both the paper and the tables - must have been forced with [H]. Recommend authors use [tbp] float placement for all their tables to fix this.\", \"In 6.2 (limitations), to their credit, the authors explain that the method is very sensitive in particular to the five loss coefficient hyper-parameters ($\\\\alpha, \\\\beta, \\\\gamma,\\\\delta,\\\\epsilon$), as might be expected. Anyone who has attempted to work with a loss with even a few coefficients can recognize how unstable/hard such a methodology might be in practice. It's hard from the paper (as-is) to judge whether using SSA requires these to be changed for each experiment or whether they were kept constant for different models/datasets. If it's the former, it would be hard to justify the method as reducing training times in practice.\"], \"questions\": [\"What are the real-world compute/memory results for the other models in your work?\", \"How exactly did you find your hyper-parameters, and what are the hyper-parameters (and all other experimental details required to repeat the experiments)?\", \"How much do the five loss coefficients themselves have to be tuned to get good performance, i.e. how different are they for your different models/results? 
This is important as, if they have to be found for each different experimental setup by sweeping training, it's hard to justify the method as speeding up training.\", \"Why did you choose to demonstrate improvements in training compute/memory usage not on a lightweight architecture, but instead on VGG, which is not used anymore in research/applications and is widely recognized to be highly inefficient compared to e.g. ResNets, MobileNet, EfficientNet, etc.?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": [\"First I want to thank the authors for their rebuttal and work in updating the paper, and addressing many weaknesses I identified. I also want to apologize for my late reply --- I did write this comment earlier, but unfortunately it was lost by open review before I could post it, and I didn't have enough time to rewrite it until now:\", \"Quickly going through the revised paper it's obvious that:\", \"Figures are much improved, are now readable, along with many (although not all) figure captions also being improved.\", \"Computational complexity and memory usage for a range of much more reasonable models are presented, notably in Figure 2.\", \"Many more experimental details are now included, making the work more repeatable and understandable.\", \"Table 1 now has variance/mean.\"], \"remaining_issues\": [\"The computational complexity and memory usage appear to be theoretical only, which is problematic in evaluating the method's real-world impact. Theoretical FLOPS (if that's what this is, as it's not clear) do not reflect the real-world performance of algorithms, especially when on GPU.\", \"All results except for perhaps very computationally expensive experiments should have mean/variance or some idea of significance. 
ImageNet/ResNet 50 even is not that expensive to run with 5 different seeds on today's GPU hardware.\", \"ImageNet/ResNet32 - ResNet 32 is an architecture built for CIFAR-10, not ImageNet. It is not clear to me at all how the architecture is being changed to run with ImageNet, or how this is a good model to be used with ImageNet.\", \"Having read the other reviewer's concerns on the comparison with DFA in particular, I'm a bit concerned with this myself now, and am hoping to see their comments on the rebuttal.\", \"I'm still worried about the hyperparameter selection for the loss coefficients, and the authors' rebuttal didn't really address my concerns on how sensitive these are, or when they need to be changed in practice.\"], \"further_more_minor_notes_for_improvement\": [\"Figure axes need to be labelled with units where appropriate (e.g. in Fig 2, it is not clear what they are)\", \"Table captions should be on top of the table (unlike figures)\", \"It's hard for me to tell exactly what changed in the paper; you might want to highlight differences in the text with a different colour in future.\"], \"summary\": \"Overall the paper is much improved, and I will revise my rating considering those changes. Unfortunately, I don't think the paper is at a point where it should be accepted yet; I believe there is a lot more to do still. I am hopeful that the authors are able to present more convincing real-world benchmarks in future iterations of the paper, along with significant revisions in the writing and presentation of figures/tables, and would encourage them to do so.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to Reviewer pt 2\", \"comment\": \"vi) I do not understand how convolutions are handled by SSA. 
How do you perform SVD decomposition on the convolution layers?\", \"authors\": \"We have seen in DFA papers that the DFA gradient must be aligned with the true gradient (from BP), i.e., the angle between them must be small, to ensure updates occur in the correct direction. In our method, since BP gradients (true gradients) are unavailable, we use the forward signal as a close approximation for the cosine similarity loss. This ensures that the gradient direction remains aligned, even without the exact BP gradients. To emphasize direction alignment without overwhelming other components, we assign a lower hyperparameter weight to this term.\n\nWe hope these updates address the reviewer\u2019s concerns and welcome any additional suggestions or clarifications. We thank the reviewer for their valuable feedback and for helping us improve the clarity and depth of our work.\"}", "{\"summary\": \"The authors improve the Direct Feedback Alignment (DFA) algorithm to make it more efficient through low-rank decomposition. The weight and feedback matrices are decomposed using SVD, and local losses are used to encourage alignment and sparsity, and preserve orthogonality of the U and V matrices.\n\nThe algorithm is competitive with backpropagation on different models and image classification tasks, while providing attractive gains in terms of time and memory.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"Combining DFA with low rank factorization is a good idea.\", \"The proposed algorithm is relatively simple and intuitive.\", \"The results are competitive with backpropagation, with encouraging time and memory gains.\", \"The authors compare their method with an extensive spectrum of other alternatives to backpropagation.\", \"The ablation study is very relevant and clearly shows the impact of each loss on accuracy and computation time\"], \"weaknesses\": \"1. 
As SSA is essentially a low-rank approximation of DFA, I would expect a more extensive comparison with DFA, as the proposed method is built upon it. As far as I know, DFA has successfully been applied to larger models than those used in the paper (see Sanfiz et al. (2021), https://arxiv.org/pdf/2108.13446). I am unsure what \u201comitted from the next Table 2 due to their inability to scale to larger networks\u201d means.\n \n Also, it would be expected that SSA at best matches DFA. In Table 1, however, it outperforms it by 13%. This is likely due to the alignment losses in SSA, and I believe that applying the same losses to DFA could provide a fairer baseline. It would also allow us to quantify the loss in accuracy caused by the low-rank factorization.\n \n2. It could be beneficial to include models different from CNNs, and tasks different from image classification. For instance, a simple MLP.\n3. The method introduces 4 to 5 new hyperparameters, which currently have to be tuned. It would be interesting to measure the impact of these on the final accuracy.\n4. Presentation and formatting: The citations are not correctly inserted in the text; the parentheses are missing. Use the \\\\citep command instead if using natbib. Less important, but I believe the tables and Figure 1 could benefit from a cleaner layout / style to improve readability and visibility of the results.\n \n Eq. 6 is unclear to me. This quantity is the error (i.e. the gradient of the loss w.r.t. the last hidden states), not the cross-entropy loss. I find the notations very confusing. In addition, the layers do not receive the gradients of the cross-entropy loss but instead use DFA to approximate it, which is not reflected in equations 5 and 6.\n \n5. The theoretical analysis in the Appendix is wrong in many ways. 
For instance:\n - At the beginning, on line 613, it is stated that the composite loss is Lipschitz smooth, which is clearly not the case given that the cosine similarity is not.\n - The loss functions are also dependent on the output of the previous layer and on the error, which is not taken into account here. This means that the local loss is not always decreasing, as updating layer 1 can impact the loss of layer 2 for example.\n - In A.1.6, the claimed \u201cLipschitz constant\u201d of $\\nabla_U L_\\text{ortho}(U)$ depends on U, which is forbidden.\n \n I would strongly advise the authors to revise this part. It would also be best to make it more concise; for instance, A.2.1 is trivial, and there is no need to take so much space proving a function is smooth, especially when it is not (see the part about cosine similarity).\n \n However, this analysis does not impact the contributions of the paper, and the algorithm does not rely on it.\n\nI believe that the idea is interesting and am willing to change my score if the authors address my concerns.\", \"questions\": \"1. I do not understand how convolutions are handled by SSA. How do you perform SVD decomposition on the convolution layers?\n2. What is the motivation behind the cosine similarity loss? I do not get why aligning the output of the layer with its estimated gradient could lead to better learning (although I do see that this is the case from the ablation study).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper presents a local training framework that replaces backpropagation in Deep Neural Networks (DNNs) to reduce memory and computational demands. By leveraging Singular Value Decomposition (SVD) and Direct Feedback Alignment (DFA), the method updates SVD components locally, integrating custom loss functions, regularization, and sparsity. 
It achieves comparable accuracy to backpropagation while reducing memory and computational costs by 50\u201375%. The authors claim that their framework offers an efficient, scalable solution for deep learning in resource-constrained settings, with theoretical convergence guarantees and publicly available code.\n\nBased on the reviews, the strengths and weaknesses of the paper are as follows:\", \"pros\": [\"The combination of Singular Value Decomposition (SVD) with Direct Feedback Alignment (DFA) seems to be a novel idea for local training frameworks.\", \"The method significantly reduces memory and computational costs during training, addressing critical resource constraints.\", \"Achieves classification accuracy competitive with backpropagation (BP) while being more efficient.\", \"Theoretical Contribution: Includes convergence guarantees and a dynamic rank reduction strategy to optimize model complexity.\"], \"cons\": [\"Experimental Gaps: Limited evaluation on architectures beyond CNNs; lacks exploration of transformers or broader tasks.\", \"Hyperparameter Sensitivity: The composite loss introduces five hyperparameters, with insufficient details on tuning or generalizability.\", \"Lack of Repeatability: Insufficient implementation details and reliance on theoretical complexity without empirical validation of real-world performance.\", \"Comparison Issues: Limited and potentially biased comparisons to DFA and other methods like QAT; reliance on preprint results questioned.\", \"Theoretical Concerns: The claim of Lipschitz smoothness for the composite loss function is incorrect, undermining some theoretical proofs.\", \"Presentation Weaknesses: Coherence in writing, notations, and experimental reporting remain problematic despite improvements.\", \"Following feedback, the authors improved the clarity of notations, experimental details, and added ablations demonstrating the impact of loss components. This helped resolve some of the concerns raised above. 
All reviewers agreed that some of the concerns were not addressed (including the first 4 mentioned above, based on my understanding), and in fact the most positive reviewer said they would decrease their score (it doesn't seem like they have, but they clearly indicated that they will reduce their score to 6). I agree and cannot recommend acceptance at this time.\"], \"additional_comments_on_reviewer_discussion\": \"Following feedback, the authors improved the clarity of notations, experimental details, and added ablations demonstrating the impact of loss components. This helped resolve some of the concerns raised above. All reviewers agreed that some of the concerns were not addressed (including the first 4 mentioned above, based on my understanding), and in fact the most positive reviewer said they would decrease their score (it doesn't seem like they have, but they clearly indicated that they will reduce their score to 6). I agree and cannot recommend acceptance at this time.\"}
While the provided ablation study is interesting, it clearly lacks details as for example one could ask if the cosine similarity loss indeed improves the cosine similarity. The behavior of each individual objective with respect to the others could also be interesting to study.\", \"authors\": \"We appreciate the reviewer\\u2019s observation regarding the theoretical analysis. While it is true that the cosine similarity component is not inherently Lipschitz smooth, we apply constraints to ensure bounded gradients. Specifically, we project the non-convex components, such as the cosine similarity loss, onto a convex sublevel set by normalizing the norms of\\nx and y to 1. This ensures that the gradient differences are bounded, making the quasi-convex cosine similarity loss Lipschitz smooth under these constraints. We apply these constraints in our custom loss function as well.\", \"pepita_results\": \"The reported PEPITA results are sourced from the work by Apolinario et al. (2024): \\\"LLS: Local Learning Rule for Deep Neural Networks Inspired by Neural Activity Synchronization\\\" (arXiv:2405.15868). This study successfully trained PEPITA on networks deeper than those reported in the original paper by Dellaferrera and Kreiman (2022). We have cited this work in our revised manuscript for clarity.\", \"imagenet_results\": \"The reviewer is correct in noting the inconsistency between Top-1 and Top-5 accuracy in Table 2. This was a typographical error, and the column titles were inadvertently swapped. We have corrected this error in the revised version of the paper.\\n\\nv) The theoretical analysis provided in the appendix is unfortunately false as the composite loss function is not Lipschitz smooth as the cosine similarity component is not Lipschitz smooth. 
Lines 830-834 are false if x\\n is very small.\"}", "{\"title\": \"Response to Reviewer pt 2\", \"comment\": \"vi) What are your interpretations of the ranges of the different hyperparameters of the composite loss, given that some components are bounded and some are not?\", \"authors\": \"We have provided the details for it in the updated Appendix A.3.\\n\\nWe hope that these revisions and clarifications adequately address the reviewer\\u2019s comments. We thank the reviewer for their insightful comments.\"}" ] }
89wVrywsIy
Automatically Identifying and Interpreting Sparse Circuits with Hierarchical Tracing
[ "Xuyang Ge", "Wentao Shu", "Junxuan Wang", "Fukang Zhu", "Zhengfu He", "Xipeng Qiu" ]
We present a novel approach to Transformer circuit analysis using Sparse Autoencoders (SAEs) and Transcoders. SAEs allow fine-grained feature extraction from model activations, while Transcoders handle non-linear MLP outputs for deterministic circuit tracing. Our Hierarchical Tracing method isolates interpretable circuits at both local and global levels, enabling deeper insights into tasks like subject-verb agreement and indirect object identification. Additionally, we introduce an automated workflow leveraging GPT-4o for scalable circuit analysis. This framework provides a clearer understanding of Transformer model behavior and its underlying mechanisms.
[ "Mechanistic Interpretability; Sparse Autoencoder; Circuit Analysis; Large Language Model" ]
https://openreview.net/pdf?id=89wVrywsIy
https://openreview.net/forum?id=89wVrywsIy
ICLR.cc/2025/Conference
2025
{ "note_id": [ "mVT7TvYGd6", "X8ZrA0xDGC", "HuCOF6VgWx", "Gz0h7pW2aJ", "AH8hrzZZXY", "55vfbv2Sjv" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1730782309254, 1730485722628, 1729179002045, 1729696370621, 1729413098584, 1732363682459 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6192/Reviewer_vtyB" ], [ "ICLR.cc/2025/Conference/Submission6192/Reviewer_EBrR" ], [ "ICLR.cc/2025/Conference/Submission6192/Reviewer_3qVX" ], [ "ICLR.cc/2025/Conference/Submission6192/Reviewer_gq1s" ], [ "ICLR.cc/2025/Conference/Submission6192/Reviewer_ETsY" ], [ "ICLR.cc/2025/Conference/Submission6192/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper presents a framework for Transformer circuit analysis, leveraging Sparse Autoencoders (SAEs) and Transcoders. The authors develop SAEs to capture fine-grained features from model activations, while Transcoders enable deterministic tracing through non-linear MLP layers. The proposed Hierarchical Tracing methodology isolates and interprets circuits at both local and global levels, allowing insights into tasks such as subject-verb agreement and indirect object identification. Additionally, an automated workflow incorporating GPT-4o is introduced to scale circuit analysis. The experimental results demonstrate that this approach effectively uncovers Transformer model behaviors by tracing individual SAE-derived features. This framework offers improved interpretability of model mechanics and shows robust performance across various tasks. Results reveal insights into activation flows within Transformer layers, providing an understanding of the model's response to linguistic structures.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. 
The paper presents an interesting framework that combines Sparse Autoencoders and Transcoders to analyze Transformer circuits, proposing a new hierarchical approach for understanding model behavior.\\n2. The interpretability of large language models (LLMs) is an important and timely problem.\", \"weaknesses\": \"1. **Transferability to Other Models**: The authors mentioned that the Sparse Autoencoders (SAEs) are trained specifically on GPT-2 to decompose its residual stream modules, including Word Embedding, attention, and MLP outputs. Can the framework transfer to other Transformer models with different architectures?\\n2. **Novelty and Scope**: The use of Sparse Autoencoders (SAEs) for feature extraction is well-explored in the interpretability domain, and prior works have leveraged SAEs for fine-grained feature decomposition in neural models. Could the authors clarify what novel insights their application of SAEs brings to Transformer circuit analysis beyond existing approaches? Specifically, how does this method offer interpretability that surpasses traditional linear probes or other SAE-based frameworks?\\n3. **Quantitative Evaluation**: Although the paper includes experiments, the results are primarily qualitative. Additional quantitative analysis comparing interpretability and accuracy trade-offs with similar approaches (e.g., linear probes or standard SAE circuits or model editing methods [1, 2]) would make the findings more robust.\\n4. **Automated Workflow Limitations**: While the automated workflow using GPT-4o is a strong addition, its effectiveness is not fully substantiated. 
The authors should provide clearer benchmarks or comparison metrics to illustrate how it scales relative to other interpretability frameworks.\", \"questions\": \"Please refer to the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The work introduces a novel approach for sparse circuit identification within LLMs. The proposed method is based on applying SAEs and Transcoders to the model and identifying task-important features within them. Identification is based on the direct effect of features, which are then filtered using both a threshold-based and a LLM approach. Findings mainly confirm previous knowledge found in the literature.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Methodology and formulation are clearly introduced, and experiments are carried out rigorously. However, I would have preferred to see how this method performs against existing ones.\", \"The work presents a novel approach to sparse features circuit identification through SAEs and Transcoders.\", \"The work adopts both a simple threshold-based approach and a novel, more complex LLM approach for finding relevant features\", \"The authors present two case studies on two language tasks identifying circuits that implement them in LLMs\"], \"weaknesses\": [\"Issues regarding indirect effects are not justified (215 -217). To propose a new method based on direct effects, I'd like to understand why it's necessary and what the limitations of indirect effects methods are.\", \"There's no comparison with indirect effects based methods. I'd like to answer the question, \\\"Which method is best?\\\"\", \"Evaluation is carried out only considering the necessity of the identified important features. Why is sufficiency not considered in this case? 
Using already adopted metrics such as faithfulness and completeness would have been more useful in this case, or at least provide a reason why they were not considered.\", \"There's no comparison between threshold and LLM approaches to find important features. I'd like to see how they perform in terms of sufficiency and necessity.\"], \"questions\": \"1. Can you clarify the rationale for focusing solely on direct effects?\\n2. Would you consider a comparative analysis with indirect effect-based methods?\\n3. Why does the evaluation focus only on the necessity of features and not their sufficiency?\\n4. Could you clarify the lack of comparison between threshold and LLM-based methods?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents a process to construct interpretable computational graphs of SAE features that could help understand the algorithms implemented by Transformers.\\n\\nThe main method it introduces is Hierarchical tracing, which approximates Transformers with a succession of Transcoders, and then uses gradients in this approximated graph to select the most relevant nodes and edges between nodes.\\n\\nThese graphs can then be further pruned and annotated, either by hand or using GPT-4o.\\n\\nThis process is applied to generate explanations for simple behaviors of GPT-2-small.\\n\\nFor simple very behaviors resulting in very small graphs, the graphs are relatively faithful (in that ablating its nodes results in large performance degradations) and GPT-4o can give plausible annotations to the nodes.\\n\\nFor slightly more complex behaviors and larger graphs, GPT-4o struggles to find plausible pruning and annotate, but manual annotation and pruning results in plausible graphs (the faithfulness of such graphs remains to be determined).\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": [\"Using 
Transcoders to build graphs to explain behaviors seems promising.\", \"The idea of using gradient-based methods is interesting and this approach to graph pruning looks efficient and scalable.\", \"The methods and results are clearly presented. The Figures are clean and helpful.\", \"The explanation for the indirect object identification circuit is probably the most complex explanation of a behavior observed in a Language model.\", \"The process presented could be extended to be more scalable, and since the graphs are supposed to be causal explanations, they could in principle be evaluated using path ablation methods.\", \"The process is applied to explain multiple behaviors in non-toy Transformers.\", \"There are some experiments evaluating the faithfulness of some of the generated explanations.\"], \"weaknesses\": [\"**Lack of materials**\", \"Not enough material is provided to enable a reproduction of the results. No code is provided, and no hyperparameters are provided for Hierarchical tracing. No explanation of how \\\"manual tracing\\\" was conducted is provided.\", \"Only some results are presented for each behavior studied. Only for subject-verb agreement is any faithfulness measurement provided. Only for in-bracket activation is percentage of feature's activation provided. Only for in-bracket activation and indirect object identification are graphs provided. 
More results for each method would allow readers to understand how faithful explanations are for more complex behavior, and it would also allow readers to understand what the graphs look like for the only behavior for which faithfulness was measured.\", \"Examples of the graphs produced by GPT-4o would also allow the reader to get a better sense of how good these are (the human ratings are not as informative as examples of graphs).\", \"**Lack of evidence for faithfulness or usefulness**\", \"The IOI graphs are provided as is, without any evaluation of their faithfulness, which is especially concerning since they were built using manual tracing.\", \"Previous work has shown that the percentage of loss explained by graph explanations (as measured with causal scrubbing) was often low. Applying similar metrics to the explanations provided here might highlight a lack of faithfulness of the explanations. Because this paper does not present such measurements, nor any downstream activations, future work is required to determine if the process described by the paper produced explanations that have other qualities than their plausibility.\", \"**Unjustified implications of high explanation quality**\", \"The paper implies that the annotations and explanations are much more reliable than they are shown to be. The paper only provides weak evidence of faithfulness, only providing strong evidence that the method is helpful to produce highly plausible explanations. This is not enough to claim a superiority in explanation quality or the ability to actually understand how a behavior was implemented, especially given that previous works have shown that interpretability illusions are common when plausibility is used as the main criterion to evaluate explanation quality. 
Here is a non-exhaustive list of places where this seems particularly problematic:\", \"\\\"This provides a transparent view of the model\\u2019s decision-making process.\\\" (line 317) is misleading given the potentially high level of unfaithfulness of the explanations.\", \"The last sentence of the abstract might need to be revisited to not imply a level of explanation faithfulness that is higher than demonstrated, especially since some prior work actually provided a better justification of the faithfulness of their methods.\", \"The \\\"in-bracket\\\" SAE feature is unlikely to be fully explained by \\\"is this token in brackets\\\", as the activations vary considerably between the different tokens in brackets. Adding a sentence in section 5.1 explaining this approximate nature would help.\", \"**Minor comments**\", \"The subject-verb agreement tasks are not described.\", \"The distribution used for the in-bracket task is not described.\", \"$y$ in equation 2 is undefined.\", \"**Overall assessment**\", \"This paper would be above the bar if the faithfulness limitations were highlighted and if either easily usable code was released, or if the lacking materials were released, or if the faithfulness measurements were improved.\"], \"questions\": [\"Would it be possible to get the missing materials, and will code be released? (see weaknesses)\", \"What is the definition of the metric described on line 376 (proportion of feature activation)?\", \"What is the exact process used to generate the IOI graph? (My current understanding of the paper is that I have to trust that the authors did not simply pick some relevant SAE features and draw plausible edges between them.)\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents a new method for analyzing Transformer circuits through the use of Sparse Autoencoders (SAEs) and Transcoders. 
It details a technique for extracting features from model activations and addressing MLP output non-linearities via deterministic circuit tracing. A Hierarchical Tracing methodology is introduced, aimed at isolating interpretable circuits on both local and global scales, which provides insights into tasks like subject-verb agreement and indirect object identification. Additionally, the paper integrates a fully automated workflow that utilizes GPT-4o, intended to enhance the scalability of circuit analysis.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The paper is well-written, enhancing its readability and clarity.\\n\\nThe visualizations are clear and insightful, making the complex concepts easy to understand.\", \"weaknesses\": \"1. I cannot find any comparisons with baseline methods in the paper. Such comparisons are essential for objectively assessing the effectiveness and advantages of the proposed approach.\\n\\n2. The paper could benefit from more comprehensive ablation studies, particularly regarding the hyperparameters like $\\\\lambda$ in Equations 1 and 2. \\n\\n3. The experimental section of the paper is limited to demonstrations with a few examples rather than extensive benchmarks across diverse datasets. Expanding the experiments to include large-scale benchmarks would provide a more thorough validation of the method's effectiveness and generalizability.\\n\\n4. The method is only based on the GPT-2 model, without consideration for other widely used or state-of-the-art models like LLaMA or Mixture of Experts. 
This limitation narrows the significance of the paper, as it does not demonstrate whether the approach can be effectively applied to newer or more complex architectures that are currently prevalent in the field.\", \"questions\": \"See the Weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents an automated framework for interpreting Transformer models. By combining Sparse Autoencoders and Transcoders, the author is able to extract fine-grained features and decompose MLP outputs for enhanced circuit analysis. It introduces a hierarchical tracing methodology to isolate key feature subgraphs, supported by an automated GPT-4o-based workflow for scalable analysis. Experiments confirm its effectiveness in isolating critical circuits and assessing their significance through ablation testing. However manual tracing remains essential for detailed analysis of specific circuits like in-bracket features and indirect object identification tasks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The use of SAEs and Transcoders to address the interpretability challenges, offering a more fine-grained approach to circuit analysis.\\n2. The introduction of a scalable tracing approach for identifying interpretable circuits provides deeper insights into model internals.\", \"weaknesses\": \"1. The framework's effectiveness relies heavily on the interpretability of features extracted by SAEs and Transcoders.\\n2. Manual tracing remains necessary for in-depth analysis, limiting scalability for large models or complex tasks.\\n3. The framework faces challenges in providing comprehensive summaries for more complex tasks.\\n4. There is a lack of theoretical analysis or insights into the effectiveness of the proposed methods.\", \"questions\": \"1. 
What objective criteria are used to evaluate the interpretability of extracted features, and how is their interpretability assessed?\\n2. How does the performance of Transcoders compare to other non-linearity handling methods, and are there experiments validating their use in other non-linear components?\\n3. Can the tracing process be further automated, and how does the need for manual tracing change with task complexity?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors. We thank all the reviewers for their time and constructive feedback.\"}" ] }
89nUKXMt8E
What Does it Mean for a Neural Network to Learn a "World Model"?
[ "Kenneth Li", "Fernanda Viégas", "Martin Wattenberg" ]
We propose an abstract but precise definition of what it means for a neural net to learn and use a "world model." The goal is to give an operational meaning to terms that are often used informally, in order to provide a common language for experimental investigation. Our definition is based on ideas from the linear probing literature, and formalizes the notion of a computation that factors through a representation of the data generation process. We also describe a set of conditions to check that such a "world model" is not a trivial consequence of the neural net's data or task.
[ "Large Language Model", "World model" ]
https://openreview.net/pdf?id=89nUKXMt8E
https://openreview.net/forum?id=89nUKXMt8E
ICLR.cc/2025/Conference
2025
{ "note_id": [ "x3PT8arV4H", "vjOPwtnHYw", "rk6Bxx57QR", "pZJPqBQooc", "nyTw7INgx6", "j62ajiSvFv", "ZNkfaLgL4q", "Xd2q39C12x", "KumPyMHxEW", "JlRRzDYbdQ", "DFxZtiOssb", "0pNOkwecH8" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730704547557, 1731611461042, 1731611527600, 1730590417591, 1732629801177, 1731611352564, 1737494601247, 1730688066087, 1730699034124, 1731611635774, 1732747752146, 1731611568444 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10741/Reviewer_Y1Mh" ], [ "ICLR.cc/2025/Conference/Submission10741/Authors" ], [ "ICLR.cc/2025/Conference/Submission10741/Authors" ], [ "ICLR.cc/2025/Conference/Submission10741/Reviewer_aeim" ], [ "ICLR.cc/2025/Conference/Submission10741/Reviewer_CbCD" ], [ "ICLR.cc/2025/Conference/Submission10741/Authors" ], [ "ICLR.cc/2025/Conference/Submission10741/Authors" ], [ "ICLR.cc/2025/Conference/Submission10741/Reviewer_CbCD" ], [ "ICLR.cc/2025/Conference/Submission10741/Reviewer_XPCw" ], [ "ICLR.cc/2025/Conference/Submission10741/Authors" ], [ "ICLR.cc/2025/Conference/Submission10741/Reviewer_Y1Mh" ], [ "ICLR.cc/2025/Conference/Submission10741/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The paper provides a conceptual framework for defining latent world models potentially learned by neural networks. This framework defines world models by their position in a commutative diagram relating to the world being implicitly modeled, the data (sampled from this world) that the network is trained on, and the latent representations learned by the network. 
The paper also includes several guidelines for instantiating this framework, like how to avoid trivial instantiations or study \\\"local\\\" world models covering narrower contexts than the network was trained on.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The paper clearly motivates and describes the underlying problem: how should we conceptualize the notion of \\\"world models\\\" in the context of latent neural network representations? This problem is indeed important and deserving of study, and the core idea of defining a commutative diagram over a network's training data, the \\\"world\\\" that training data is sampled from, and latent representations learned by the network is innovative.\", \"weaknesses\": \"Beyond simply defining the research question (how should we conceptualize the notion of \\\"world models\\\" in the context of latent neural network representations?) and drawing a commutative diagram that is useful for conceptualizing this question, **the paper does not provide a clear or meaningful contribution.**\\nWhile articulating the central question is helpful, this alone is not a sufficient contribution for an ICLR conference paper. \\n\\nIndeed, the paper is styled more as a blog post than a conference paper: beyond its highly informal prose and prominent citations of tweets (Lecun, 2024) and blog posts (Andreas, 2024), it also **lacks the theoretical rigor that is necessary for a conceptual submission such as this one**; and it is not obvious that the proposed framework could be adequately formalized at all. For example:\\n- In sec 3.2.1, the framework requires \\\"simple function classes\\\" $F_W$ and $F_Z$ that restrict the range of functions that map between certain nodes in the commutative diagram in order to avoid a trivial definition. 
But \\\"simple\\\" is never formally defined -- instead, a few possible examples are provided without justification -- and it is never explained *how* requiring these functions to be \\\"simple\\\" actually resolves the triviality. \\n - Furthermore, one of the examples provided in sec 3.2.1 of one such \\\"simple\\\" function class is that of two-layer MLPs, which are universal approximators; or in sec 3.4, another example for $F_W$ is listed as the space of \\\"human computable functions\\\". Either case would seem to strain any reasonable interpretation of \\\"simple\\\".\\n- Section 3.5.1 is concerned with preventing another possible triviality, this time the existence of another \\\"simple\\\" function that maps directly from inputs to a potential world model; but it is unclear both (1) why this is a problem, and (2) how the provided definition prevents this triviality.\\n - E.g., word co-occurrence statistics are provided as an example of a \\\"trivial\\\" world model, as many interesting features of a potential world model can be approximated using linear projections from such statistics. However, this is simply another way of discussing the \\\"distributional hypothesis\\\" underpinning modern LLMs -- i.e., that sufficient knowledge of word co-occurrences is sufficient to understand much of natural-language semantics -- and the paper considers LLMs as serious candidates for learning world models. Why, then, is the existence of \\\"simple functions\\\" mapping from co-occurrence statistics to certain world model features understood as presenting a \\\"trivial\\\" case to be avoided?\\n\\nAnother key weakness is that the paper fails to reference several closely related works and relevant areas of study. For example:\\n- [1, 2] are also centrally concerned with conceptualizing how world models should be understood in the context of foundation models, and [2] also focuses on the intersection of interpretability and world modeling. 
\\n- The description in section 2.1 of \\\"world models\\\" as studied in cognitive science is lacking. For instance, predictive coding is one of the leading formalizations of world models in cognitive science [3,4], but predictive coding is never discussed.\\n- The \\\"random control function\\\" proposed in lines 372-381 appears to be equivalent to \\\"control probes\\\" as defined by [5], but [5] is never cited. Note that, on line 74-75, it is stated that \\\"much of this paper may be seen as a reframing of ideas in Belinkov (2022)\\\", and Belinkov (2022) discusses [5] at length. Thus, the failure to cite [5] is particularly concerning, and may be a sign of plagiarism.\\n\\n[1] Bisk, Y., Holtzman, A., Thomason, J., Andreas, J., Bengio, Y., Chai, J., ... & Turian, J. (2020, November). Experience Grounds Language. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (pp. 8718-8735).\\n\\n[2] Ruthenis, T. (2023, January). World-model interpretability is all we need. In AI Alignment Forum.\\n\\n[3] Millidge, B., Seth, A., & Buckley, C. L. (2021). Predictive coding: a theoretical and experimental review. arXiv preprint arXiv:2107.12979.\\n\\n[4] Taniguchi, T., Murata, S., Suzuki, M., Ognibene, D., Lanillos, P., Ugur, E., ... & Pezzulo, G. (2023). World models and predictive coding for cognitive and developmental robotics: Frontiers and challenges. Advanced Robotics, 37(13), 780-806.\\n\\n[5] Hewitt, J., & Liang, P. (2019, November). Designing and Interpreting Probes with Control Tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP) (pp. 
2733-2743).\", \"questions\": [\"Definitions:\", \"As discussed above, how does constraining $F_W$ and $F_Z$ to be \\\"simple function classes\\\" resolve the triviality discussed in section 3.2.1?\", \"On line 328-329, what does it mean that \\\"$F_X$ parallels $F_Z$\\\" (328-329)? Is it simply that the two function classes should be equivalent, or satisfy some shared notion of simplicity? As with the topic of \\\"simple function classes\\\" throughout the paper, this should be formalized.\", \"How should \\\"A\\\" in figure 1 (bottom right side of commutative diagram) be interpreted in any case outside of robotics? I do not see any examples of \\\"A\\\" provided elsewhere, including in Table 1; and it is not clear how, why, or whether \\\"A\\\" and \\\"Y\\\" would be different in any other case. If A and Y are collapsed into the same node, does this lead to other potential problems with the diagram?\"], \"conceptual_question\": [\"The commutative diagram is discussed in terms of absolute equality (e.g., as in line 229-230). Is it possible to relax this assumption and allow for commutation to be approximate instead of exact? E.g., it seems that, especially if the intent is to make extensive use of \\\"simple\\\" functions under this framework, it would be more reasonable to expect approximations than learning exact mapping functions. What would the consequences of such approximation be?\"], \"flag_for_ethics_review\": \"['Yes, Research integrity issues (e.g., plagiarism, dual submission)']\", \"details_of_ethics_concerns\": \"The \\\"random control function\\\" proposed in lines 372-381 appears to be equivalent to \\\"control probes\\\" as defined by [5], but [5] is never cited. Note that, on line 74-75, it is stated that \\\"much of this paper may be seen as a reframing of ideas in Belinkov (2022)\\\", and Belinkov (2022) discusses [5] at length. 
Thus, the failure to cite [5] is particularly concerning, and may be a sign of plagiarism.\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer Y1Mh\", \"comment\": \"We appreciate the close reading of the paper and multiple important conceptual comments. And we are especially grateful for the list of references. We will add discussion of these to the manuscript. In particular, thank you for pointing out the omission of a reference to [5], which we will certainly add. To be clear, we do not intend to plagiarize! On the contrary, we intend for our framework to synthesize and credit existing literature wherever appropriate\\u2014hopefully this goal comes through in the spirit of the paper overall.\\n\\nSection 3.2.1, \\u201csimple function classes\\u201d: You\\u2019re right: our description is confusing. We do want to leave some flexibility in the specific type of function, since it may be domain-dependent. However\\u2014and this might be the key point of confusion\\u2014the intention was to focus on strongly restrictive classes of functions for g and h. (For example, for a multi-layer MLP, we would assume a reasonable bound on the number of neurons, rather than allowing an arbitrarily large MLP that could be a universal function approximator.) We will make clear in any revisions that we assume a meaningfully restrictive class of \\u201cprobe\\u201d functions, e.g. with a much smaller dimensionality / number of parameters than the underlying network being studied.\\n\\nSection 3.5.1: Our goal here was to define what it means for a world model to be \\u201clearned.\\u201d The intuition we want to capture is that this means the neural net must transform the initial data in a nontrivial way. 
If a world model can be derived from the initial data via a linear function, for example, then one would be hard-pressed to say the network has \\u201clearned\\u201d a nontrivial representation of the world.\", \"addressing_further_questions\": [\"Lines 328-329: Good point: the word \\u201cparallels\\u201d is a poor choice of words. In a revision, we\\u2019ll make clear that we mean to aim to consider function classes with roughly the same number of parameters in both cases. The idea is to verify that the function f_1 is doing something nontrivial. (The material in 370-380, where we will reference [5], is meant to handle cases where comparable parameter count may not be easy to achieve.)\", \"Thank you! Excellent catch on Table 1: we somehow omitted a key row, which is that A should represent the sentiment of the text (as opposed to the mind of the writer)! This example was actually a major motivation for adding \\u201cA\\u201d to the framework. In general, we want to capture the idea that a particular world model may only be relevant to one aspect of the output; \\u201cA\\u201d is meant to represent just that aspect. We will clarify this in any revision. Also note that A may indeed be the same as Y in practice\\u2014this is the case described in 4.1, of a \\u201ccomplete causal model\\u201d.\", \"Conceptual question on use of pure equalities: we agree that this is an important point, and one we had attempted to address in the current section 3.3. Of course we\\u2019d welcome ideas on where this section falls short.\"]}", "{\"title\": \"Response to XPCw\", \"comment\": \"Thank you for these useful comments! As described in the general comment above, we can definitely expand on how \\u201cworld models\\u201d relate to questions of \\u201cunderstanding.\\u201d To answer your question about probabilistic causal models: our wording here is certainly confusing. 
Essentially, we\\u2019re trying to be cautious in our definition, and to be clear that we\\u2019re proposing a weaker type of world model (e.g., one would not necessarily be able to answer arbitrary counterfactual questions via the world models we describe.)\"}", "{\"summary\": \"The paper proposes an abstract criterion for claiming that a world model is being implemented in a neural network. The definition is based on the existence of some commutative functions mapping the hidden representation to a salient aspect of the network's behavior. Models should be 'interpretable' and 'causal'; interpretable meaning that the model is some pre-specified type of function of the world/representation, and causal meaning that the model predicts behavior just as the representation predicts outputs.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper does an extensive review of the literature, and motivates each aspect of the central diagram.\", \"weaknesses\": \"Usually for a definition to become accepted, it should demonstrate that it includes most clear-cut cases and excludes others. Some of this is done for intermediate steps (like the sentiment neuron, and word embeddings), but it feels important to do this for the full definition. That is, what kinds of models/analyses does the diagram in Figure 1 accept and reject?\\n\\nAs it stands, the definition remains very abstract, and has not demonstrated its utility with theoretical results or for empirical analysis. While the authors say it is 'straightforward to operationalize experimentally', that isn't obvious to me given its level of abstraction -- the hard part seems to be precisely in applying the high-level concept to specific cases. In other words, it's not clear to me how much a commutative diagram by itself, without specifying what the sets or functions are, allows us to judge networks theoretically or develop/understand methods for interpretability. 
If there is a good demonstration of this, I think it should be included and highlighted.\", \"questions\": \"Is the idea behind this definition that, for any analysis trying to infer a world model, one should be able to point to components of the analysis and say, this is $\\\\phi_1$, this is $W$, this is $g$, etc.? Or, maybe equivalently, that $Z$ only represents a world model if a commutative diagram like that exists?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"nothing more to add\", \"comment\": \"I think we have discussed this sufficiently. The system is complaining that I am not commenting enough so I am adding this stupid non-comment.\"}", "{\"title\": \"General response to reviewer comments\", \"comment\": \"Thank you to all reviewers for helpful comments! We\\u2019ll answer some general questions first, and then address some specifics of each reviewer\\u2019s comments below.\\n\\nSeveral reviewers ask, essentially, \\u201cWhat does this complicated abstract definition buy us?\\u201d That\\u2019s obviously a fair question, and in any revision we\\u2019ll rewrite the introduction and conclusion to underline the goals:\\n\\n1. Our motivation comes from interpretability work, where there has been a great deal of discussion of the existence of \\u201cworld models\\u201d (See especially 2.3), but with an extremely informal and inconsistent set of definitions. We believe that this paper represents a step toward making many of these informal definitions more precise, while clearly excluding certain others (for instance, in the Vafa et al paper, a definition based purely on behavior rather than implementation).\\n2. One immediate benefit, for experimentalists, is that the diagram and nontriviality conditions give a checklist for experimental results, one positive and two negative. 
Positive: to say Z has a representation of a world model, show that a simple function g: Z \\u2192 M can be learned. Negative conditions: to say this world model was learned by the network, show that no function of comparable simplicity can be learned X \\u2192 M. To say that the learned world model is emergent, show that no function of comparable simplicity can be learned Y \\u2192 M.\\n\\nThis framework is a definition, not an empirical result, so isn\\u2019t \\u201cfalsifiable\\u201d as such. However, we believe that it will help make it easier to compare empirical work. For example, the Vafa et al. paper and the Li et al. paper describe seemingly inconsistent results for the same network (Othello-GPT). However, with this definition in hand it becomes clear that they are talking about two different things. A second example is the paper \\u201cLanguage Models Represent Space and Time\\u201d from Gurnee and Tegmark: this caused some controversy due to the fact that there was no test for whether the \\u201cworld model\\u201d was present already in token embeddings. Our checklist of nontriviality conditions formalizes that concern. Given the comments here, we can add examples and make these benefits more explicit in the paper.\\n\\nReviewers also suggested fleshing out the relation between our definition and ideas around predictive coding or probabilistic modeling. This is an excellent point: our definition is focused specifically on representations of world state (\\u201cwhere are the walls\\u201d), rather than predicting the future (\\u201cwill I bump into a wall\\u201d) or modeling the causal structure of the world (\\u201cIf I had turned left, would I have bumped into the wall\\u201d). Indeed, our definition is meant to crisply focus on issues of representation, and separate these from issues around prediction or causal modeling. 
We will clarify this in any revision.\\n\\nFinally, multiple reviewers mentioned that our usage of the word \\u201cunderstanding\\u201d is confusing. We see what you mean, and will clarify appropriately. In particular, we\\u2019ll include a targeted discussion in the introduction of how debates about \\u201cunderstanding\\u201d relate to world models, and then plan to avoid the word in the remainder of the paper.\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"The authors propose a set of criteria for saying a neural net learns a \\\"world model\\\". This is a purely theoretical paper with no experiments.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"It is helpful to have clear definitions of what we mean by a \\\"world model\\\".\", \"weaknesses\": \"Since I am more on the experiment side, I don't understand what follows from this paper. How does the proposal relate to prior work? Is the proposed framework falsifiable? What follows from this contribution?\\n\\nThe word \\\"understanding\\\" is mentioned nearly a dozen times, but I don't know what that means. Maybe that is the point of the last paragraph. But that begs the question, what is a world model? Is that what you propose? But that sounds circular.\\n\\nThe first paragraph of the conclusion claims to have unified work on interpretability. The paper mentions interpretability nearly a dozen times but I'm having trouble seeing the connection between this paper and those references.\", \"questions\": \"Can you help me understand the consequences of this paper? What follows from this contribution?\\n\\nWould it be possible to test your ideas? If so, how? 
What would an experiment look like?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work seeks to establish minimum criteria by which one can say that a neural network encodes a world model. By establishing these criteria, this work aims to provide common (and precise) language by which interpretability researchers can discuss the inner mechanisms of neural networks.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The goal of this work is necessary and important for shaping the discourse around neural networks, model interpretability, and the relationship between the field of deep learning and other fields that require more high-level/abstract models (such as cognitive science). Importantly, this fairly high-level paper is grounded in (and makes reference to) recent empirical work in interpretability. The paper is extremely well-written, clear-eyed and explicit about its purview and limitations, and ultimately succeeds in its goals.\", \"weaknesses\": \"The authors refer to the debate concerning whether LLMs actually understand text (523), and also mention that world models might connect to this issue (055). Making this connection a bit more explicit, even in an appendix, would be enlightening. This would potentially provide an important starting point for reframing the debate around understanding in more scientific terms.\", \"questions\": \"Why do the authors say that learning probabilistic causal models is \\u201ca very strong meaning for a world model\\u201d (125)? In general, this section might be expanded a bit.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to reviewer aeim\", \"comment\": \"Thank you for your helpful comments. 
Your final question is exactly on target: we hope that future researchers who investigate world models can say explicitly which functions they are studying. Our sense is that many papers already do this, but use inconsistent terminology that makes it hard to compare results. Other papers do not do this, but use similar terminology, which is even more confusing. We described a few such cases in the general comments, and can put additional examples in the paper.\"}", "{\"title\": \"Response to rebuttal\", \"comment\": \"Thank you for your response. I have read your rebuttal and general response, and while I appreciate your clarifications regarding some of my questions, I still believe that the paper does not provide a clear or meaningful contribution.\\n\\nGiven revisions to address the key missing citations and unclear definitions discussed in my original review and the authors' response, I believe this paper could be made into an interesting blog post or perspective piece -- e.g., see Frontiers [blog post](https://www.frontiersin.org/news/2016/10/27/blog-guidelines) or [perspective](https://www.frontiersin.org/journals/artificial-intelligence/for-authors/article-types) submission types. But given the lack of substantive contribution, this submission would clearly be inappropriate as an ICLR conference paper.\"}", "{\"title\": \"Response to reviewer CbCD\", \"comment\": \"Thank you for pointing out some important gaps. We\\u2019ll connect the sections to interpretability work more clearly: although we\\u2019ve tried to give key examples (the sentiment neuron, etc.) we can certainly provide more. We\\u2019ve tried to address your concerns in the general comments above, but please let us know if there are additional issues you see.\"}" ] }
89EjtiGWVS
Log-Sum-Exponential Estimator for Off-Policy Evaluation and Learning
[ "Armin Behnamnia", "Gholamali Aminian", "Alireza Aghaei", "Chengchun Shi", "Vincent Y. F. Tan", "Hamid R. Rabiee" ]
Off-policy learning and evaluation scenarios leverage logged bandit feedback datasets, which contain context, action, propensity score, and feedback for each data point. These scenarios face significant challenges due to high variance and poor performance with low-quality propensity scores and heavy-tailed reward distributions. We address these issues by introducing a novel estimator based on the log-sum-exponential (LSE) operator, which outperforms traditional inverse propensity score estimators. Our LSE estimator demonstrates variance reduction and robustness under heavy-tailed conditions. For off-policy evaluation, we derive upper bounds on the estimator's bias and variance. In the off-policy learning scenario, we establish bounds on the regret—the performance gap between our LSE estimator and the optimal policy—assuming bounded $(1+\epsilon)$-th moment of weighted reward. Notably, we achieve a convergence rate of $O(n^{-\epsilon/(1+\epsilon)})$, where $n$ is the number of training samples for the regret bounds and $\epsilon\in[0,1]$. Theoretical analysis is complemented by comprehensive empirical evaluations in both off-policy learning and evaluation scenarios, confirming the practical advantages of our approach.
[ "off-policy learning", "off-policy evaluation", "log sum exponential", "regret bound", "generalization bound", "concentration", "bias and variance" ]
https://openreview.net/pdf?id=89EjtiGWVS
https://openreview.net/forum?id=89EjtiGWVS
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vIoMn6HDxZ", "sRGISG8dND", "aBeFnJxAVr", "MTQs5fyWOy", "JUxYGIfE1Q", "HRHk2SDOXV", "2FabJI4b1S" ], "note_type": [ "official_comment", "official_review", "official_review", "comment", "official_comment", "official_review", "official_review" ], "note_created": [ 1733004715825, 1729059558460, 1730504393871, 1737563824945, 1733007968345, 1730700889269, 1730477454799 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9737/Authors" ], [ "ICLR.cc/2025/Conference/Submission9737/Reviewer_W6M8" ], [ "ICLR.cc/2025/Conference/Submission9737/Reviewer_PX8q" ], [ "ICLR.cc/2025/Conference/Submission9737/Authors" ], [ "ICLR.cc/2025/Conference/Submission9737/Authors" ], [ "ICLR.cc/2025/Conference/Submission9737/Reviewer_3zEX" ], [ "ICLR.cc/2025/Conference/Submission9737/Reviewer_FxqM" ] ], "structured_content_str": [ "{\"title\": \"Rebuttal Summary (Experiments)\", \"comment\": [\"# Rebuttal Summary\", \"We are grateful to all reviewers for their insightful comments and feedback. In this post, we summarize both the original experiments from our work and the additional experiments we conducted during the rebuttal period to address reviewer questions and suggestions.\", \"# Experiments\", \"## OPL Experiments:\", \"**SupervisedToBandit** (Section 6.2 and Section G.2): These experiments are based on the supervised-to-bandit experimental setup. Detailed experiments, including normal, noisy propensity score, and noisy reward settings, on FMNIST and EMNIST datasets for evaluating different estimators are reported. *LS-LIN* and *OS* estimators were added during the ***rebuttal*** period (`reviewers PX8q, FxqM and 3zEX`).\", \"**Model-based estimator** (Section G.3): Estimators based on Double-Robust estimation are evaluated and compared. 
*DR-SWITCH*, *DR-SWITCH-LSE*, *DR-OS* and *MRDR* estimators were added during the ***rebuttal*** period (`reviewer FxqM`).\", \"**Real-world Experiments** (Section G.4): The real-world experiments evaluate model performance on an original bandit dataset collected in a real-world setting. KuaiRec dataset results are reported. *LS-LIN* and *OS* estimators were added during the ***rebuttal*** period (`reviewers PX8q, FxqM and 3zEX`).\", \"**Sample size $n$ effect** (Section G.5): Analyzes how the sample size impacts the performance of different estimators.\", \"**Effect of hyperparameter $\\\\lambda$** (Section G.6): Explores the effect of the hyperparameter $\\\\lambda$ on the LSE estimator.\", \"**$\\\\lambda$ selection for OPL** (Section G.7): **$\\\\lambda$** selection based on the number of samples is compared to grid-search selection.\", \"## OPE Experiments:\", \"**Synthetic Experiments** (Section 6.1 and G.1): Synthetic datasets, Gaussian and Lomax, are generated to compare different estimators based on their ability to estimate the target policy's average reward. 
*LS*, *LS-LIN*, and *OS* estimators were added during the ***rebuttal*** period (`reviewers PX8q, FxqM and 3zEX`).\", \"**$\\\\lambda$ selection for OPE** (Section G.7.2-G.7.3, **rebuttal**, `reviewer W6M8`): Proposes a **$\\\\lambda$** value for OPE and compares LSE with this selection against other estimators.\", \"**Sensitivity to $\\\\lambda$** (Section G.7.4, **rebuttal**, `reviewer W6M8`): Shows estimator sensitivity to hyperparameter selection.\", \"**OPE with noise** (Section G.8, **rebuttal**, `reviewer W6M8`): Discusses the robustness of different estimators to noise.\", \"**Distributional behavior in OPE** (Section G.9, **rebuttal**, `reviewer 3zEX`): Investigates the error distribution of different estimators.\", \"**Comprehensive comparison with LS** (Section G.10, **rebuttal**, `reviewers PX8q and 3zEX`): Detailed experiments comparing LS and LSE estimator sensitivity to hyperparameter selection.\", \"**OPE on real-world datasets** (Section G.11, **rebuttal**, `reviewer FxqM`): Experiments on real classification datasets from the UCI repository.\", \"### We would be happy to conduct additional experiments to address any remaining questions or concerns before the end of the discussion period.\"]}", "{\"summary\": \"This paper introduces the Log-Sum-Exponential (LSE) estimator, a novel approach for off-policy evaluation (OPE) and off-policy learning (OPL) with only logged bandit feedback. The estimator is designed to address high variance, noisy propensity scores, and heavy-tailed reward distributions, common challenges in OPE/OPL. 
The paper provides theoretical bounds on regret, bias, and variance, and supports the claims with empirical evaluations comparing the LSE estimator against several baseline methods, including truncated importance sampling (IPS) and other state-of-the-art estimators.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": [\"The paper studies the practically relevant problem of off-policy learning, particularly for both noisy rewards and propensity scores\", \"The paper is overall well-written and the arguments are easy to follow\", \"The paper includes rigorous theoretical analysis of the LSE estimator, providing regret bounds, bias-variance trade-offs, and robustness under noisy rewards. These contributions solidify the estimator\\u2019s advantages over existing methods.\", \"The paper presents comprehensive experiments that demonstrate the estimator\\u2019s performance in synthetic and real-world scenarios. The results indicate that the LSE estimator can achieve lower variance and MSE compared to established baselines, supporting the theoretical claims.\"], \"weaknesses\": [\"The paper claims that the proposed method is more robust to noisy rewards and propensity scores, supported by some relevant experiments. However, the analysis could be improved by plotting the policy values of the methods on the y-axis against varying levels of noise in the rewards and propensity scores on the x-axis. This would provide a clearer visual representation of the method's robustness. I consider this a crucial point, as it directly relates to the key advantages of the proposed method.\", \"The LSE estimator requires parameter tuning (such as the \\u03bb parameter), which may complicate practical deployment. The lack of detailed guidance on how to select these parameters could hinder reproducibility and real-world adoption. 
Additionally, it is unclear how robust the proposed method is to potential errors in setting this parameter.\"], \"questions\": \"Could you provide more practical guidance on how to select the \\u03bb parameter? Are there heuristics or automated methods that could assist practitioners in tuning this parameter effectively? How robust is the proposed method to potential failures in setting this parameter?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes a new off-policy estimator based on the log-sum-exp operator, which is motivated by its robustness to heavy-tailed rewards. The bias-variance tradeoff of the estimator was studied, along with its learning guarantees (in terms of regret) and its behaviour in reward drift/contaminated reward scenarios. The regret bound in particular has a rate of $O(n^{-\\\\epsilon/(1 + \\\\epsilon)})$ where $n$ is the sample size, assuming a bounded $(1 + \\\\epsilon)$-th moment ($\\\\epsilon \\\\le 1$). Experiments were conducted to validate the performance of the estimator in specific settings.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The paper tackles the important problem of off-policy estimation.\", \"The estimator is motivated from a new robustness lens, which is refreshing in OPE/OPL.\", \"A substantial effort was put into understanding the theoretical properties of the estimator.\"], \"weaknesses\": \"The contributions of this work suffer from two major problems that question their validity:\\n\\n- **Novelty of the estimator**: While the log-sum-exp operator was never used in off-policy estimation, this transform was heavily studied for learning problems under the \\\"tilted losses\\\" name. See the following work: \\\"On Tilted Losses in Machine Learning: Theory and Applications\\\", Tian Li, Ahmad Beirami, Maziar Sanjabi, Virginia Smith; 24(142):1\\u221279, 2023. 
(JMLR). The log-sum-exp operator defines their tilted loss in Equation (2) and is studied for learning problems, meaning that OPE/OPL can be seen as a direct application and a lot of teachings from the JMLR paper can be transferred. This work was not cited in the submitted paper and I don't know how the authors can position their paper compared to this one.\\n**PARTIALLY ADDRESSED IN REBUTTAL.**\\n\\n- **The convergence rate**: the $O(n^{-\\\\epsilon/(1 + \\\\epsilon)})$ convergence rate seems like a huge error that needs attention. I did not look exactly where this error is coming from, but this rate is unachievable under these conditions. One can easily see it in the case of a one-armed bandit (with $\\\\pi = \\\\pi_0$) and bounded reward $r \\\\in [0, 1]$. These conditions ensure that all the _weighted reward moments_ are bounded and smaller than $1$, meaning that all $(1 + \\\\epsilon)$ moments are smaller than $1$, ensuring that $\\\\epsilon$ can go to infinity and achieving a convergence rate of $O(n^{-1})$. This cannot be attained (even asymptotically, see CLT) as long as $r$ is not deterministic.\\n**ADDRESSED IN REBUTTAL.**\", \"other_minor_problems_can_be_pointed_out_as_well\": [\"**Heavy notations**: The structure of the paper and the various definitions/notations used make it hard to follow the proofs, which become even more opaque in the regret/convergence rate propositions. **PARTIALLY ADDRESSED IN REBUTTAL.** The writing of the paper can be greatly improved.\", \"**Lack of key baseline**: The logarithmic smoothing estimator that was proposed recently is motivated from the pessimistic lens, and may mitigate the heavy-tailed reward problem as it smoothes the _weighted rewards_, contrary to the other baselines used. A comparison against it can strengthen the paper. The paper was cited but not compared to; see \\\"Logarithmic Smoothing for Pessimistic Off-Policy Evaluation, Selection and Learning\\\", Sakhi et al. 
2024\", \"**ADDRESSED IN REBUTTAL.**\"], \"questions\": [\"How can you position your paper compared to \\\"On Tilted Losses in Machine Learning: Theory and Applications\\\"?\", \"**PARTIALLY ADDRESSED IN REBUTTAL**: the discussion of \\\"On Tilted Losses in Machine Learning: Theory and Applications\\\" was briefly included and can still be improved.\", \"Can you explain how the $O(n^{-1})$ convergence rate can be achieved with your method? Can you spot the error?\", \"**ADDRESSED IN REBUTTAL**: The results are correct and were clarified during rebuttal.\", \"Why did you not compare to the Logarithmic Smoothing estimator?\", \"**ADDRESSED IN REBUTTAL.** Comparison with the Logarithmic Smoothing estimator was included. The LSE estimator presents favourable performance compared to state-of-the-art approaches.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal Summary (Theoretical Results and Discussions)\", \"comment\": [\"# Rebuttal Summary\", \"Following [this post](https://openreview.net/forum?id=89EjtiGWVS&noteId=vIoMn6HDxZ), we summarize both the original theoretical results from our work and the additional theoretical results and discussions we added during the rebuttal period to address reviewer questions and suggestions.\", \"## Theoretical Results and Discussions\", \"**LSE estimator and KL Regularization** (Section C.1, **Rebuttal**, `reviewer FxqM`): We explore the connection between the LSE estimator with a negative parameter and KL regularization.\", \"**Bias and Variance** (Section 5.3): The bias and variance of the LSE estimator are analyzed. 
The upper bound on the variance was improved during the **rebuttal**, showing that the LSE variance is less than that of the IPS estimator (`reviewer PX8q`).\", \"**Bias and Variance Comparison** (Section D.1.1): Updates to Table 6 provide a comprehensive comparison of bias and variance across all estimators, including the addition of comparisons for the *OS* and *LS* estimators during the ***rebuttal*** period (`reviewers FxqM, PX8q and 3zEX`).\", \"**Regret analysis** (Section 5.2): The regret upper bound under the *heavy-tailed assumption* is provided.\", \"**Implicit Shrinkage** (Section D.7, **Rebuttal**, `reviewer FxqM`): The implicit shrinkage property of the LSE estimator is investigated.\", \"**Sub-Gaussian Bound** (Section D.6, **Rebuttal**, `reviewer 3zEX`): The sub-Gaussian upper bound on the absolute generalization error (concentration inequality) is provided.\", \"**Noisy reward** (Section 5.4): The robustness of the LSE estimator under noisy reward settings is analyzed.\", \"**Estimated propensity scores** (Section E): The robustness of the LSE estimator under estimated propensity scores is examined.\", \"**PAC-Bayesian Discussion** (Section D.5): A PAC-Bayesian version of our regret bound is derived, offering additional theoretical insights.\", \"**Comparison with other estimators (Section 5):** Table 2 presents a comprehensive comparison of the LSE estimator against all baseline methods. 
*OS* and *LS* were added during the ***rebuttal*** period (`reviewers FxqM, PX8q and 3zEX`).\", \"**Detailed comparison with tilted empirical risk** (Section D.1.8, **Rebuttal**, `reviewer PX8q`): A detailed comparison with the tilted empirical risk framework is provided.\", \"**Comparison under Bounded Reward** (Section D.1.7, **Rebuttal**, `reviewer FxqM`): A full comparison of other estimators with LSE under the bounded reward assumption is provided.\", \"**Comparison with Specific Estimators:**\", \"**PM Estimator** (Section D.1.2): A detailed theoretical comparison with the PM estimator.\", \"**ES Estimator** (Section D.1.3): A detailed theoretical comparison with the ES estimator.\", \"**IX Estimator** (Section D.1.4): A detailed theoretical comparison with the IX estimator.\", \"**LS Estimator** (Section D.1.5, **Rebuttal**, `reviewers PX8q and 3zEX`): A detailed theoretical comparison with the LS estimator.\", \"**OS Estimator** (Section D.1.6, **Rebuttal**, `reviewer FxqM`): A detailed theoretical comparison with the OS estimator.\", \"**Comparison with Assumption 1 in Switch Estimator** (Section D.1.9, **Rebuttal**, `reviewer FxqM`): We provide a detailed comparison between our heavy-tailed assumption and Assumption 1 from the switch estimator framework.\", \"**Additional Related Work** (Section A, **Rebuttal**, `reviewers FxqM and 3zEX`): We have expanded the related work to include works on *generalization error under heavy-tailed assumptions*, *mean estimation under heavy-tailed distributions*, and *heavy-tailed rewards in bandits* and *reinforcement learning*.\", \"## We would be happy to provide additional theoretical results or discussion to address any remaining questions or concerns before the end of the rebuttal period.\"]}", "{\"summary\": \"Summary: The authors consider stochastic, contextual bandits where data is collected using a logging policy $\\\\pi_{\\\\log}$. This policy is available for point evaluation. 
The main idea is to use the LSEA, \\\"log-sum-exponential average\\\" ($f_s(z_1,\\\\dots,z_n) = -1/s \\\\log \\\\sum_{i=1}^n \\\\exp(- s z_i)$, $s>0$) on top of importance weighted reward data (i.e., estimate the value of policy $\\\\pi$ using $f_s(Z_1,\\\\dots,Z_n)$ where $Z_i = R_i \\\\pi(A_i|X_i)/\\\\pi_{\\\\log}(A_i|X_i)$, $X_i$ is the context for the $i$th data point, $A_i$ is the corresponding logged action, and $R_i$ is the associated reward). The authors show generalization bounds on how well this estimator approximates the true value of policies, as well as bounds on the (simple) regret of the policy which, in a given class, gives the best LSEA. The bounds depend on the smoothing parameter $s$ (the authors use $\\\\lambda = -s$) and $\\\\epsilon>0$, which is a constant such that $\\\\mathbb{E}[|Z_i|^{1+\\\\epsilon}] \\\\le \\\\nu$ (since $Z_i$ depends on the policy, this is demanded to hold uniformly for all policies). We shall call this a moment constraint on the importance weighted reward. In particular, it is claimed that the \\\"rate\\\" of the suboptimality of the chosen policy is $O(n^{-\\\\epsilon/(1+\\\\epsilon)})$. In addition to the theoretical result, the authors also show empirical evidence that their approach is a good one. For this, two settings are considered: a synthetic problem (Gaussian policies, etc.), and another problem based on transforming EMNIST (and also FMNIST) to a bandit problem.\\n\\nSignificance/novelty: The problem is of high significance, and \\\"fixing importance weighting estimators\\\" is also of high importance. Bringing LSEA to this problem is a new and nice idea. The theoretical results are of interest, as well, and the empirical evidence presented is promising. The robustness result is OK, but I could not see its significance (yes, we can do this.. 
until I see a lower bound, it is unclear how loose this or similar results are).\", \"soundness\": \"4\", \"presentation\": \"3\", \"some_minor_issues\": \"Is the number of actions finite? If not, what is $\\pi(a|x)$? Density of distribution over arms with respect to some base measure?\", \"contribution\": \"3\", \"strengths\": \"1. log-sum-exp was not studied in this context and it seems it should be studied\\n2. the results obtained are reasonable; there is some nice math (buried in the appendix)\\n3. the paper looks at both theory (minimum guarantees) and also garners some empirical evidence in favour of the method.\", \"weaknesses\": \"The paper feels a bit undercooked. The upper bounds are a little messy and little to no effort is spent on explaining them. The relation to the heavy-tailed mean estimation literature should have been discussed. It is unclear how the results fare compared to the recent work on logarithmic smoothing.\", \"questions\": \"For $\\epsilon$ large enough, the rate is better than $1/\\sqrt{n}$. How can this be true? What is the mechanism that allows us to get a rate better than the statistical rate? (In finite armed bandit, for sure, asymptotic rates will be exponential, but if the result has this asymptotic flavor, I am not going to be too excited about it, because in a contextual setting I don't think that the asymptotics has any chance of \\\"kicking in\\\").\\n\\nIn essence, the authors point out that importance weighted rewards are heavy tailed; hence the focus in the analysis on moment constraints. This raises two questions: Why not consider the big literature on estimating the mean of heavy-tailed distributions? Why pick LSEA? Has LSEA been analyzed in that literature? I expected the authors to at least look at this question in the paper. Secondly, with moment conditions only, I expect weaker results.
For example, in the literature on mean estimation with heavy-tailed iid data, we know that the \\\"subgaussian-like tail bounds\\\" work for \\\"fixed delta\\\" (the error probability where the tail bound holds is an input to the algorithm and this weakness is inherent to the problem, not the algorithms). I suppose we are paying a similar price here. But this is not very clear from the presentation and I would have expected an honest discussion of this (I am guessing this limitation comes from the fact that the results only hold for $n$, the sample size, \\\"large enough\\\"). The bigger question then is: Alternative estimators avoid this problem. So is LSEA inferior to those in this sense?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}
Then we'd like statistical guarantees that have a benign\ndependence on R_{max}, replacing it with the 1+\eps moment. This seems\nworthwhile, but if I understand correctly, I think this is already\ndone in [3] (for the unweighted second moment).\n\n0b. I certainly agree that heavy-tailed importance weight is a real\nproblem, but this is addressed by a number of other previously\nproposed estimators, including IX, clipped importance weights,\nsmoothing/shrinkage, SWITCH (from [3]), etc.\n\n1. In terms of exposition, it would be helpful to clearly explain how\nLSE compares to existing estimators. Table 2 is not very helpful\nbecause the quantities that appear in the bias/variance for LSE do not\nappear in the bounds for the others.\n\n1a. As a concrete question: Suppose that rewards were bounded in\n[0,Rmax], then we should take \\nu_2 = R_{max}^2\nP_2(\\pi_\\theta||\\pi_0). Is LSE better than IPS under this setup? It\nseems that the positive term of the LSE variance is worse, due to\nR_{max}^2 vs R_{max}.\n\n2. Why does the variance bound in Prop 5.7 require bounded second\nmoment (rather than 1+eps for any eps)? It seems like I can always use\nLemma 5.1 to bound the variance of LSE under bounded (1+eps)\nmoment. Is the point that Prop 5.7 is tighter due to the\n-\\lambda\\nu_2^{3/2} term? If so, it would be great to make this clear\nin the exposition and somehow highlight where/why this refinement\nmatters. (e.g., maybe it is necessary to address the question in 1a.)\n\n3. The OPE experiment is somewhat unconvincing because the setting is\nhighly synthetic. It seems that the setup is designed to expose a\nweakness of prior methods that is addressed in the present work (i.e.,\nheavy tails). But it is not clear that this problem is prevalent in\npractice or whether LSE results in some tradeoffs in \"more benign\" settings.\n\n3a. 
In particular, it is by now standard to do these experiments on a\nwide range of datasets with a broad set of experimental\nconditions. See e.g., [1,2]. I strongly recommend that the authors add\nsuch experiments to the paper.\n\n4. Regarding OPE experiments, there are also a number of important missing\nbaselines, including clipped importance weighting, the SWITCH\nestimator of [3], etc.\n\n5. For the OPL experiments, why do we only compare against\nunregularized baselines? I think we should view the LSE estimator as a\nform of regularization, so it seems natural to also compare with\nregularized baselines. In particular, why not compare against MRDR [1]\nand DR-Shrinkage [2]? Both can be implemented with a \"reward\nestimator\" that always predicts 0.\n\n6. As alluded to above, the exposition/presentation could be greatly\nimproved.\n\n\nOverall, I feel the paper has a decent amount of potential, but isn't\nquite there yet. As it stands, it feels rather shallow. There could\nhave been a much deeper theoretical and empirical investigation into\nthe estimator and its properties and this could have made the paper\nmuch better. As mentioned above, I don't think the heavy-tailed reward\nsetting is particularly interesting, but heavy-tailed importance is\ndefinitely a real problem. Focusing the paper on this setting would\nhave helped, but then this requires a much deeper discussion of prior\nwork and connections. As a concrete point about this, I believe the\nestimator can be formally viewed as a form of regularization, via the\nduality between log-sum-exp and entropy regularization. It would have\nbeen nice to consider this viewpoint as a means to connect to other\nforms of regularization in the off policy evaluation literature, for\nexample those developed in [2]. 
In particular, I think there is\\npotentially something interesting about the LSE estimator relative to\\nprior works like [2], namely it is a new form of shrinkage that\\nexplicitly accounts for the n-dimensional nature of the regularization\\nproblem (akin to Stein shrinkage). It would be nice to investigate\\nthis in more detail.\", \"missing_references\": \"[1] Mehrdad Farajtabar, Yinlam Chow, Mohammad Ghavamzadeh. More Robust\\nDoubly Robust Off-policy Evaluation. https://arxiv.org/abs/1802.03493\\n\\n[2] Yi Su, Maria Dimakopoulou, Akshay Krishnamurthy, Miroslav\\nDud\\u00edk. Doubly robust off-policy evaluation with\\nshrinkage. https://arxiv.org/abs/1907.09623\\n\\n[3] Yu-Xiang Wang, Alekh Agarwal, Miroslav Dudik. Optimal and Adaptive\\nOff-policy Evaluation in Contextual\\nBandits. https://arxiv.org/abs/1612.01205\", \"post_rebuttal\": [\"I thank the authors for their detailed responses. I have a few comments (below) and I will raise my score to 5. As I mentioned above, I think this paper does have potential but is not quite there yet so I am not quite ready to recommend acceptance.\", \"I understand the examples provided but I am still unconvinced that heavy-tailed reward is a real problem. We can always make the reward bounded, this is without loss of generality and so the question is which moments are controlled. I appreciate the discussion about 1+eps vs 2+eps moment, and I think the paper would greatly benefit by having this included. More generally, the paper would benefit from a much clearer explanation of (a) the settings that are not handled by prior work and (b) the instance dependent improvements that LSE achieves in settings that are handled by prior work.\", \"I think LSE _can_ be viewed as regularization, we just need to minimize rather than maximize. Take \\\\lambda < 0, let z be given and define the optimization problem \\\\min_{y} y^\\\\top z + \\\\lambda H(y). 
Since entropy is concave and \\\\lambda < 0, this is a convex optimization problem. The value at the minimum is exactly LSE_\\\\lambda(z) as defined in Eq (1) of the paper. Note that minimizing makes more sense because we want to be pessimistic (which is standard in offline RL/CB).\", \"I certainly appreciate the additional experiments and the references to missing related work. Thank you!\"], \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"see above\", \"weaknesses\": \"see above\", \"questions\": \"see above\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
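The log-sum-exponential (LSE) average discussed in the record above can be illustrated with a minimal, self-contained sketch. This is a toy under stated assumptions — the `1/n` normalization inside the log (so that `s -> 0` recovers the sample mean), the sample values, and the name `lse_estimate` are ours, not the paper's implementation — but it shows the mechanism the reviewers debate: with `s > 0`, LSE damps the influence of heavy-tailed importance-weighted samples relative to the plain IPS average.

```python
import math

def lse_estimate(z, s):
    # LSE average of importance-weighted rewards:
    #   -(1/s) * log( (1/n) * sum_i exp(-s * z_i) ),  s > 0.
    # The 1/n normalization is an assumption here; it makes s -> 0
    # recover the ordinary sample mean.
    n = len(z)
    return -(1.0 / s) * math.log(sum(math.exp(-s * zi) for zi in z) / n)

# Z_i = R_i * pi(A_i|X_i) / pi_log(A_i|X_i); one large value mimics a
# heavy-tailed importance weight.
z = [0.5, 0.8, 0.6, 0.7, 50.0]
ips = sum(z) / len(z)         # plain IPS average: 10.52, dominated by the outlier
lse = lse_estimate(z, s=1.0)  # ~0.87: exp(-s*z) suppresses the outlier's contribution
```

As `s -> 0` the estimate approaches the IPS mean, so `s` (the paper's `λ = -s`) trades bias against robustness to heavy tails.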
88wyP257x4
UNLEARNING IS BETTER THAN UNSEEN: UNLEARNING SCORE-BASED GENERATIVE MODEL
[ "Wan Jiang", "He Wang", "Xin Zhang", "Dan Guo", "Yu Ding", "Yunfeng Diao", "Richang Hong" ]
Diffusion generative models, including Score-Based Generative Models (SGM) and Denoising Diffusion Probabilistic Models (DDPM), have demonstrated remarkable performance across various domains in recent years. However, concerns regarding privacy and potential misuse of AI-generated content have become increasingly prominent. While generative unlearning methods have been investigated on DDPM models, research on unlearning SGM is still largely missing. Furthermore, the current 'gold standard' of machine unlearning---retraining a model from scratch after removing the undesirable data---does not perform well in SGM and its downstream tasks, such as image inpainting and reconstruction. To fill this gap, we propose the first Score-based Generative Unlearning (SGU) for SGM, which surpasses the previous 'gold standard' of unlearning. SGU introduces a new score adjustment strategy that deviates the learned score from the original undesirable data score during the continuous-time stochastic differential equation process. Extensive experimental results demonstrate that SGU significantly reduces the likelihood of generating undesirable content while preserving high quality for normal image generation. Albeit designed for SGM, SGU is a general and flexible unlearning framework that is compatible with diverse diffusion architectures (SGM and DDPM) and training strategies (re-training and fine-tuning), and enables zero-shot transfer of the unlearning generative model to downstream tasks, including image inpainting and reconstruction. The code will be shared upon acceptance.
[ "Machine Unlearning", "Score-based generative model" ]
https://openreview.net/pdf?id=88wyP257x4
https://openreview.net/forum?id=88wyP257x4
ICLR.cc/2025/Conference
2025
{ "note_id": [ "bLy69qV2A8", "aAmAwcYHaV", "WGOb4CZQqM", "UJmzPnJpcn", "7CQ2FC9zos" ], "note_type": [ "official_review", "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1730616422347, 1731430447055, 1730358349009, 1730362806848, 1730710101542 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1424/Reviewer_hrCq" ], [ "ICLR.cc/2025/Conference/Submission1424/Authors" ], [ "ICLR.cc/2025/Conference/Submission1424/Reviewer_xmaD" ], [ "ICLR.cc/2025/Conference/Submission1424/Reviewer_Taty" ], [ "ICLR.cc/2025/Conference/Submission1424/Reviewer_cdHR" ] ], "structured_content_str": [ "{\"summary\": \"This paper highlights the need for effective unlearning techniques in diffusion models. Traditional methods, which rely on retraining models from scratch, tend to perform poorly in score-based generative models. To address this limitation, the authors propose Score-based Generative Unlearning (SGU), which introduces a novel score adjustment strategy during the continuous-time stochastic process to diverge from undesirable data scores. Experimental results demonstrate that SGU significantly reduces unwanted content generation while preserving high-quality outputs and enabling zero-shot transfer across various tasks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"A key strength of this paper is its novel approach in proposing \\\"Unlearning Re-training\\\" instead of the traditional \\\"Unseen Re-training\\\" method. This innovative methodology offers a fresh perspective on efficiently approximating the target score distribution for unlearning purposes. 
The authors provide valuable intuition through synthetic experiments, illustrating why this novel algorithm better aligns with the desired score distribution, thereby significantly improving performance in unlearning unwanted content.\", \"weaknesses\": \"While the proposed approach shows promise, several aspects raise questions about its robustness and effectiveness:\\n\\n1. Preservation of Generation Quality : There is some uncertainty about whether the model can reliably generate images without the undesirable features while maintaining overall generation quality. Although the paper presents classification accuracy or CLIP classifier results on inpainting and reconstruction tasks to demonstrate the preservation of the original model's performance, these evaluations may be insufficient. Before discussing downstream task performance, it would be beneficial to assess the generation quality of the modified model more thoroughly. For instance, calculating the FID score between the generated data and training data (excluding NSFG data) could provide insights into how well the model retains quality.\\n\\n2. Scalability to Large-Scale, High-Resolution Data: The applicability of the proposed method to large-scale, high-resolution datasets remains unclear. To better understand the approach's effectiveness, additional experiments on datasets with a greater number of classes, such as ImageNet, could demonstrate whether the method can maintain performance and reliability at scale.\", \"questions\": \"While the paper provides a comprehensive overview of the proposed approach, there are some areas that would benefit from additional clarification:\\n\\n1. Determination of the Alpha Value in Equation (11) : Further explanation is needed regarding how the value of alpha in Equation (11) was chosen. The expression \\\"depends on the ratio of M to N\\\" is somewhat unclear, and it would be helpful to understand what values M and N correspond to. 
If alpha represents the proportion of D_f within the overall dataset, it would be beneficial to provide the specific alpha values used for each experiment in the results section to enhance clarity.\n\n2. Effectiveness of the Algorithm Relative to D_f Proportion: It would be interesting to understand up to what percentage of the dataset D_f can occupy before the algorithm's effectiveness is compromised. Specifically, if D_f constitutes a large portion of the data, can the model still achieve stable training by reducing its influence through adjustments to alpha? Clarification on this point would help in evaluating the algorithm's robustness under varying data distributions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper proposes the first Score-based Generative Unlearning (SGU) method for SGM to address privacy concerns and the potential misuse of AI-generated content, thereby surpassing the previous 'gold standard' of unlearning.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"1. The motivation for this paper is clear.\n2. The authors provide a detailed analysis of both the motivation and their proposed score-based generative unlearning method.\n3. The writing is well-structured.\", \"weaknesses\": \"1. The novelty is limited. The authors only design a score-based generative unlearning method to improve generation quality.\n2. Given the existence of many advanced generative models, the authors focus solely on the basic DDPM and some score-based models. 
This raises the question of whether using more data and better models (such as Stable Diffusion XL) could solve the issues related to undesirable generation content, making the authors' proposed method unnecessary.\n3. Nowadays, there are also many autoregressive generation models. Is the proposed score-based generative unlearning method possibly extensible to other architectures?\", \"questions\": \"Please refer to the Weakness section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors propose to use an unlearning method for score-based generative models. Prior work on this research topic normally focuses on DDPM. They argue that the unlearning method is better than the gold standard in this area -- retraining a model from scratch after removing the undesirable data. The algorithm is tested on standard datasets and leads to decent performance.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"1. The research topic of privacy and security of generative models is relevant and timely.\n2. The paper is well-written and easy to follow.\n3. The experiments are extensive.\", \"weaknesses\": \"1. I don\u2019t understand why the unlearning method for SGM should be different from DDPM. Actually, it is not necessary to differentiate these two models; they essentially learn a score function by deep neural networks. As [1] argues, DDPM can be seen as VP-SDE.\n2. The arguments about conditional unlearning and unconditional unlearning are confusing; actually, a text-to-image model could also generate unconditional images, as long as the prompt is null. \n3. As shown in Fig 4, it seems that the proposed unlearning method does not significantly outperform the unseen method.\n\n\n[1] SCORE-BASED GENERATIVE MODELING THROUGH STOCHASTIC DIFFERENTIAL EQUATIONS. ICLR2021\", \"questions\": \"See weakness. 
I am willing to raise the score if my concerns can be well-addressed.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents a novel Score-based Generative Unlearning (SGU) framework aimed at enhancing the unlearning capabilities of Score-Based Generative Models (SGM) by introducing methods that outperform the \\\"gold standard\\\" retraining approach in mitigating the generation of undesirable data. SGU implements a score adjustment process to modify the learned scores of the unwanted data during training, resulting in minimal generation of undesirable content while maintaining high-quality generation for desirable data. The proposed approach, compatible with both SGM and DDPM models, supports transfer to downstream applications like image inpainting and reconstruction, demonstrating effective unlearning across multiple datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper addresses a critical gap in machine unlearning research for Score-Based Generative Models, offering a more effective alternative to standard retraining approaches.\\n2. SGU showcases strong flexibility, being adaptable to multiple generative model architectures (e.g., SGM and DDPM) and training strategies.\\n3. The method demonstrates consistent and high performance in removing undesirable content across a range of datasets and tasks, including zero-shot transfer to image inpainting and reconstruction.\", \"weaknesses\": \"1. The method may face optimization challenges, especially when the undesirable and desirable data distributions are similar, which might complicate the unlearning process.\\n2. The paper offers limited discussion on potential trade-offs between unlearning effectiveness and computational efficiency, such as the additional overhead introduced by using SGU compared to its baselines.\\n3. 
The study focuses on class-level unseen data, where defining boundaries for unlearning is relatively straightforward. Discussing potential challenges or limitations in applying the method to more granular or ambiguous unlearning tasks beyond class-level data would provide valuable insights into the method's broader applicability and limitations.\", \"questions\": \"Please see the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
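The record that follows (Optimal Brain Apoptosis) revolves around scoring parameters with second-order Taylor terms obtained from Hessian-vector products. A minimal finite-difference sketch of that idea — a toy quadratic loss with a hand-picked 2x2 Hessian; the helper names and the OBD-style score `0.5 * w_i * (Hw)_i` are our illustration, not the paper's actual OBA procedure:

```python
def grad(w):
    # Gradient of the toy loss L(w) = 0.5 * w^T A w, with A = [[3, 1], [1, 2]],
    # so grad(w) = A w.
    return [3.0 * w[0] + 1.0 * w[1], 1.0 * w[0] + 2.0 * w[1]]

def hvp(grad_fn, w, v, eps=1e-5):
    # Finite-difference Hessian-vector product:
    #   H v ~= (grad(w + eps*v) - grad(w - eps*v)) / (2 * eps)
    gp = grad_fn([wi + eps * vi for wi, vi in zip(w, v)])
    gm = grad_fn([wi - eps * vi for wi, vi in zip(w, v)])
    return [(p - m) / (2 * eps) for p, m in zip(gp, gm)]

w = [1.0, -2.0]
Hw = hvp(grad, w, w)  # exact for a quadratic loss: A @ w = [1.0, -3.0]
# Per-parameter second-order Taylor term 0.5 * w_i * (H w)_i; unlike a
# diagonal-only OBD score, the H w product keeps cross-parameter terms.
importance = [0.5 * wi * hi for wi, hi in zip(w, Hw)]  # [0.5, 3.0]
```

One HVP per network costs only a constant multiple of a gradient evaluation, which is why HVP-based scores scale where forming the full Hessian does not.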
88rjm6AXoC
Optimal Brain Apoptosis
[ "Mingyuan Sun", "Zheng Fang", "Jiaxu Wang", "Junjie Jiang", "Delei Kong", "Chenming Hu", "Yuetong FANG", "Renjing Xu" ]
The increasing complexity and parameter count of Convolutional Neural Networks (CNNs) and Transformers pose challenges in terms of computational efficiency and resource demands. Pruning has been identified as an effective strategy to address these challenges by removing redundant elements such as neurons, channels, or connections, thereby enhancing computational efficiency without heavily compromising performance. This paper builds on the foundational work of Optimal Brain Damage (OBD) by advancing the methodology of parameter importance estimation using the Hessian matrix. Unlike previous approaches that rely on approximations, we introduce Optimal Brain Apoptosis (OBA), a novel pruning method that calculates the Hessian-vector product value directly for each parameter. By decomposing the Hessian matrix across network layers and identifying conditions under which inter-layer Hessian submatrices are non-zero, we propose a highly efficient technique for computing the second-order Taylor expansion of parameters. This approach allows for a more precise pruning process, particularly in the context of CNNs and Transformers, as validated in our experiments including VGG19, ResNet32, ResNet50, and ViT-B/16 on CIFAR10, CIFAR100 and Imagenet datasets. Our code is available at https://github.com/NEU-REAL/OBA.
[ "Network Pruning", "Efficient Machine Learning", "Hessian Matrix" ]
Accept (Poster)
https://openreview.net/pdf?id=88rjm6AXoC
https://openreview.net/forum?id=88rjm6AXoC
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zCd49xgi7C", "yF1L6uR6tn", "tegsJ2QbbX", "t5WqBs6326", "rWusWgs6Da", "ovu1I6TrT0", "lhlepr8UGa", "k26lhzC6pN", "hWN9pTNDUC", "c5Sqvkp6eJ", "WImhkKB3k9", "UPUwncVmTQ", "Rnck1Jg5cn", "RHghhkgr15", "PtoKodpCYW", "PsGySe69aH", "LflYfMtIuo", "IlokK4ludo", "G6WAt1wtoM", "DrLYrSAewd", "D88Sm93YUk", "BglhRUFoI1", "AoRoBPAWpm", "6I6M1U3u56", "3WUnrl8AKW", "26nB6KTaUo", "0iHClkzy6q" ], "note_type": [ "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment" ], "note_created": [ 1732555874214, 1733151553392, 1734457855890, 1732632482447, 1732565211752, 1733147639700, 1732649046474, 1730739936634, 1732530145749, 1733208820833, 1732648109051, 1732107465356, 1730670344781, 1737523580608, 1732641829480, 1732107383489, 1732107734275, 1732107826290, 1732108199773, 1732484742723, 1732632248981, 1732638150622, 1733149355260, 1730659850837, 1730963651210, 1732108339234, 1732107614032 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3505/Reviewer_v9aH" ], [ "ICLR.cc/2025/Conference/Submission3505/Authors" ], [ "ICLR.cc/2025/Conference/Submission3505/Area_Chair_9VuC" ], [ "ICLR.cc/2025/Conference/Submission3505/Reviewer_fJfY" ], [ "ICLR.cc/2025/Conference/Submission3505/Authors" ], [ "ICLR.cc/2025/Conference/Submission3505/Authors" ], [ "ICLR.cc/2025/Conference/Submission3505/Authors" ], [ "ICLR.cc/2025/Conference/Submission3505/Reviewer_v9aH" ], [ "ICLR.cc/2025/Conference/Submission3505/Authors" ], [ "ICLR.cc/2025/Conference/Submission3505/Authors" ], [ "ICLR.cc/2025/Conference/Submission3505/Authors" 
], [ "ICLR.cc/2025/Conference/Submission3505/Authors" ], [ "ICLR.cc/2025/Conference/Submission3505/Reviewer_1PJY" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3505/Reviewer_rbue" ], [ "ICLR.cc/2025/Conference/Submission3505/Authors" ], [ "ICLR.cc/2025/Conference/Submission3505/Authors" ], [ "ICLR.cc/2025/Conference/Submission3505/Authors" ], [ "ICLR.cc/2025/Conference/Submission3505/Authors" ], [ "ICLR.cc/2025/Conference/Submission3505/Reviewer_1PJY" ], [ "ICLR.cc/2025/Conference/Submission3505/Reviewer_fJfY" ], [ "ICLR.cc/2025/Conference/Submission3505/Authors" ], [ "ICLR.cc/2025/Conference/Submission3505/Reviewer_fJfY" ], [ "ICLR.cc/2025/Conference/Submission3505/Reviewer_fJfY" ], [ "ICLR.cc/2025/Conference/Submission3505/Reviewer_rbue" ], [ "ICLR.cc/2025/Conference/Submission3505/Authors" ], [ "ICLR.cc/2025/Conference/Submission3505/Authors" ] ], "structured_content_str": [ "{\"title\": \"Thank you for your answer\", \"comment\": \"I will keep my score\"}", "{\"title\": \"Discussion about adding the first order term\", \"comment\": \"Thank you for your response! In our paper, we choose to omit the first-order term to ensure fair comparisons with other Hessian-based methods, which also do not incorporate it. However, we believe that in situations where the neural network is not fully trained, or when regularization plays a more important role, in order to make the pruning process theoretically safer and more rigorous, the first-order term can easily be reintroduced by adding a single line of code to our implementation. This discussion has been added to the revised paper, although it could not be uploaded due to the passing of the deadline.\"}", "{\"metareview\": [\"The paper proposes a pruning method inspired by work on Optimal Brain Damage. 
The method estimates the importance of parameters based on the Hessian matrix, computing the Hessian-vector product directly for each parameter, which provides a more accurate estimate of parameter importance than the approximations most prior methods use.\", \"Reviewer rbue notes that the method is applicable to widely used architectures and theoretically grounded with proofs. However, the reviewer also notes that the method is computationally expensive and the experiments use outdated architectures and datasets. The authors acknowledge that the method is computationally expensive and note that their experimental setup is to facilitate comparisons with the literature.\", \"The review by v9aH lacked sufficient detail and could not be fully considered in the meta review.\", \"Reviewer 1PJY notes that performance looks promising and the results are comprehensive. The reviewer notes that the comparisons are with works more than 5 years old, even though there are a variety of recent ones. The authors note that comparing to Hessian-based related work is sufficient; I disagree with the authors here; the paper would benefit from a comparison to non-Hessian based pruning methods, or at least a discussion on how the method compares to those.\", \"fJfY finds the related work insufficient. The authors revised the related literature.\", \"The proposed method is interesting; however, based on my own reading, the theory is relatively shallow, and based on the reviewer's feedback the paper would benefit from a more solid experimental setup, in particular comparisons to state-of-the-art methods on current neural network architectures.\"], \"additional_comments_on_reviewer_discussion\": \"see meta review\"}", "{\"title\": \"Significance of first order term?\", \"comment\": \"In earlier work it was assumed that the network was trained to a minimum, i.e., with vanishing first term. 
You seem not to keep track of the first order term, hence potentially incur an uncontrolled error for networks trained with SGD methods. Please comment on this risk\"}", "{\"title\": \"Appreciation from Authors\", \"comment\": \"Dear Reviewer v9aH,\\n\\nThanks for reviewing our response. Wish you all the best!\\n\\nAuthors\"}", "{\"title\": \"Gentle Reminder\", \"comment\": \"Dear reviewer fJfY,\\n\\nAs the discussion period is nearing its end, we would like to know if you have any additional concerns regarding our paper. If so, we are happy to address them before the period concludes.\\n\\nBest,\\n\\nAuthors\"}", "{\"title\": \"Appreciation for Reviewing our Rebuttal\", \"comment\": \"Dear Reviewer rbue,\\n\\nThanks for taking the time to review our response and recognizing our work! Wish you a great day!\\n\\nBest,\\n\\nAuthors\"}", "{\"summary\": \"This paper introduces a tractable way of computing the Hessian of the loss function in feed-forward networks. This finding is very interesting, as it may have a wide range of applications, in particular, the one of pruning CNNs and transformers. 
The results are very promising, as they show some improvements, but these improvements do not seem to be very large.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"An interesting new method for computing the Hessian of the loss function in feed-forward networks in a tractable way.\", \"weaknesses\": \"The improvements in accuracy and speed do not seem overwhelming.\", \"questions\": \"Considering that recurrent neural networks use back-propagation through time, for a given time window, how tractable would it be to extend your method to such networks?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply to Reviewer 1PJY's comment\", \"comment\": \"Dear reviewer 1PJY,\n\nWe appreciate your final recognition of our efforts in the revised manuscript and rebuttal. Hope all is well with you!\n\nBest,\n\nAuthors\"}", "{\"title\": \"Our Summary on the First Order Term\", \"comment\": \"Dear reviewer fJfY,\n\nWe would like to summarize our opinions regarding the first-order term as follows.\n\n1. **The reason for not incorporating the first order term is to ensure fair comparisons with other Hessian-based methods**, including OBD [1], OBS [2], and EigenDamage [3]. They do not utilize the first-order term, either.\n\n2. In our experiments, we found that the second-order term's average magnitude is approximately ten times greater than that of the first-order term per layer, indicating that the first-order term has minimal influence on the calculation of our importance score.\n\n3. Through further experiments, as shown in the table presented in our \"Reply to Reviewer fJfY (first order term)\", we show that **including the first-order term does not have a clear influence on the performance of the pruned network**.\n\n4. None of our experiments leverage large regularization. 
Before pruning, all networks are fully trained to a local minimum, which theoretically supports the assumption that the first-order term is small.\n\nThe first-order term can be added back under other conditions with a minimal code adjustment. Under our experiment settings, not incorporating it is valid and fair. We hope our response resolves your concerns regarding the first-order term.\n\nBest,\n\nAuthors\n\n[1] LeCun, Yann, John Denker, and Sara Solla. \"Optimal brain damage.\" Advances in neural information processing systems 2 (1989).\n\n[2] Hassibi, Babak, David G. Stork, and Gregory J. Wolff. \"Optimal brain surgeon and general network pruning.\" IEEE international conference on neural networks. IEEE, 1993.\n\n[3] Wang, Chaoqi, et al. \"Eigendamage: Structured pruning in the kronecker-factored eigenbasis.\" International conference on machine learning. PMLR, 2019.\"}", "{\"title\": \"Reply to Reviewer fJfY (first order term)\", \"comment\": \"The reason for omitting the first-order term is to align with prior Hessian-based pruning methods. Previous studies assume that the network is trained to a minimum such that the first-order term vanishes. This assumption is indeed quite restrictive and nearly impossible to achieve in practical settings, where neural networks are typically trained to a local minimum using the SGD optimizer. However, we argue that while a local minimum does not reduce the first-order term to zero, it can still keep it relatively small.\n\nWe performed experiments recording both first-order and second-order values for each layer of ResNet20 on CIFAR10 in the unstructured pruning setting and compared the results. Our findings show that the average magnitude of the second-order term is roughly ten times that of the first-order term per layer, suggesting that the first-order term has little impact on our importance score calculation. 
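To make the two Taylor terms concrete, here is a toy sketch (a hypothetical quadratic loss with a known Hessian, not the OBA pipeline) of an importance score computed with and without the first-order term:

```python
import numpy as np

# Toy quadratic loss L(theta) = 0.5 * theta^T A theta + b^T theta,
# so grad = A @ theta + b and the Hessian is exactly A.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)          # positive-definite Hessian
b = rng.standard_normal(5)
theta = np.linalg.solve(A, -b) + 0.05 * rng.standard_normal(5)  # near a minimum

grad = A @ theta + b

def importance(i, include_first_order):
    """Taylor estimate of the loss change when weight i is set to zero."""
    delta = -theta[i]                    # pruning zeroes theta_i
    second = 0.5 * A[i, i] * delta ** 2  # 0.5 * delta^T H delta for this perturbation
    first = grad[i] * delta if include_first_order else 0.0
    return first + second

scores_wo = [importance(i, False) for i in range(5)]
scores_w = [importance(i, True) for i in range(5)]
```

On a quadratic loss the second-order expansion is exact, so the score with the first-order term equals the true loss change; near a minimum the first-order contribution shrinks, mirroring the magnitude comparison above.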
Moreover, the unstructured pruning results do not differ significantly, indicating that the error introduced by omitting the first-order term is present but very limited.\\n\\n| Sparsity | Taylor (91%) | | OBD (91%) | | Weight (91%) | | OBA w/o First Order Term\\u00a0 (91%) | | OBA w/. First Order Term\\u00a0(91%) | |\\n| :------: | :----------: | :-------: | :----------: | :-------: | :----------: | :--------: | :-----------------------------: | :--------: | :----------------------------- | :--------- |\\n| | Accuracy (%) | Ratio (%) | Accuracy (%) | Ratio (%) | Accuracy (%) | Ratio (%) | Accuracy (%) | Ratio (%) | Accuracy (%) | Ratio (%) |\\n| 0\\\\.1 | 11\\\\.00 | 14\\\\.45 | 10\\\\.03 | 13\\\\.17 | 90\\\\.66 | 99\\\\.63 | 90\\\\.83 | **99\\\\.81** | 90\\\\.69 | 99\\\\.66 |\\n| 0\\\\.2 | 11\\\\.00 | 14\\\\.45 | 10\\\\.03 | 13\\\\.17 | 90\\\\.82 | 99\\\\.80 | 90\\\\.90 | **99\\\\.89** | 90\\\\.34 | 99\\\\.27 |\\n| 0\\\\.3 | 10\\\\.00 | 13\\\\.14 | 10\\\\.03 | 13\\\\.17 | 90\\\\.67 | 99\\\\.64 | 90\\\\.65 | 99\\\\.62 | 90\\\\.73 | **99\\\\.70** |\\n| 0\\\\.4 | 10\\\\.80 | 14\\\\.19 | 10\\\\.02 | 13\\\\.16 | 90\\\\.67 | **99\\\\.64** | 90\\\\.35 | 99\\\\.29 | 90\\\\.37 | 99\\\\.31 |\\n| 0\\\\.5 | 10\\\\.00 | 13\\\\.14 | 10\\\\.01 | 13\\\\.15 | 90\\\\.79 | **99\\\\.77** | 90\\\\.57 | 99\\\\.53 | 90\\\\.63 | 99\\\\.59 |\\n| 0\\\\.6 | 8\\\\.20 | 10\\\\.77 | 10\\\\.00 | 13\\\\.14 | 90\\\\.23 | 99\\\\.15 | 90\\\\.69 | **99\\\\.66** | 90\\\\.27 | 99\\\\.20 |\\n| 0\\\\.7 | 10\\\\.00 | 13\\\\.14 | 10\\\\.00 | 13\\\\.14 | 88\\\\.83 | 97\\\\.62 | 89\\\\.94 | 98\\\\.84 | 89\\\\.98 | **98\\\\.88** |\\n| 0\\\\.8 | 10\\\\.00 | 13\\\\.14 | 10\\\\.00 | 13\\\\.14 | 85\\\\.03 | 93\\\\.44 | 89\\\\.64 | **98\\\\.51** | 88\\\\.95 | 97\\\\.75 |\\n| 0\\\\.9 | 10\\\\.00 | 13\\\\.14 | 10\\\\.52 | 13\\\\.82 | 67\\\\.00 | 73\\\\.63 | 86\\\\.27 | 94\\\\.80 | 86\\\\.36 | **94\\\\.90** |\"}", "{\"title\": \"Reply to Reviewer rbue(1/1)\", \"comment\": \"We deeply appreciate your recognition on 
our work! We would like to address your questions and weaknesses as follows.\n> **Weakness 1** It seems like calculating the full Hessian-vector product, even with optimizations, can still be computationally expensive for larger and more complex networks.\n\nYou are right. We recognize that our method for calculating the Hessian-vector product is still quite time-consuming, taking twice as long as OBS, as shown in Figure 6b. However, the time it takes for OBA to prune is still relatively minor compared to the extensive time required for training or fine-tuning the model. For very large models such as LLaMA 3.1, BLOOM, and others, it may be practical to prune only specific layers, similar to common fine-tuning practices. This approach significantly reduces complexity by limiting the calculation of the Hessian-vector product to just the selected layers.\n\n> **Weakness 2** Extending this method to newer architectures seems to require additional work. The method was tested on a specific set of architectures, and its generalizability across a wider range of tasks or domains is yet to be fully established.\n\nThanks for pointing this out. OBA can be applied to various parameter layers such as fully connected, convolutional, and attention layers, which are essential to most contemporary models in computer vision and natural language processing. This versatility shows that OBA can be seamlessly integrated into widely used models without any modifications. Moreover, since the importance score acquisition of OBA relies only on the loss at the output, which is task-invariant, OBA can also be directly utilized to prune models in other tasks. 
Exploring the application of the Hessian-vector product in other architectures like RNNs, SSMs, and additional tasks will be valuable to further assess OBA\\u2019s adaptability and generalization capabilities in future research.\\n\\n> **Weakness 3** The experiments make the paper feel somewhat outdated, as the evaluations and datasets used are reminiscent of those from 7-8 years ago.\\n\\nWe appreciate your feedback and sincerely apologize if our experiments gave the impression of being outdated. Our intention was to compare our method fairly with others that also innovate on the Taylor expansion term, specifically Eigen Damage (2019) and CHITA (2023). Eigen Damage is a structured pruning method, while CHITA focuses on unstructured pruning. Although CHITA is more recent, the models and datasets used in both studies are quite similar, which may have contributed to the perception of our experiments being less current.\\n\\nFor fairness, we conducted our experiments under the same settings as these studies, ensuring a direct and meaningful comparison. However, we understand how this might make the experiments appear outdated. Moving forward, we plan to explore the application of OBA on more contemporary datasets and models to demonstrate its broader applicability and relevance. Thank you for pointing this out, and we will address it in our future work.\"}", "{\"summary\": \"The authors propose a new method for neural network pruning called Optimal Brain Apoptosis (OBA) inspired by the prior work Optimal Brain Damage (Lecun et al., 1989). This method calculates the Hessian-vector product for each parameter and identify the conditions under which inter-layer Hessian submatrices are non-zero. 
The proposed method is able to prune models to increase model efficiency on 3 convolutional backbones and one vision transformer backbone on the CIFAR10, CIFAR100 and ImageNet datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.\tPruning performance looks fairly promising. The method achieves consistent improvement on various datasets, using the most commonly-used backbones (ResNet and ViT), and on unstructured vs structured pruning.\n2.\tThe results shown are quite comprehensive: pruning performance (accuracy, parameter reduction, FLOPs reduction, throughput increase), pruning cost (training and pruning time). Surprisingly, the pruning cost was not as high as I originally expected knowing that the method involves computation of the Hessian-vector product.\", \"weaknesses\": \"1.\tSince pruning is not my area of expertise, I am unsure whether the authors used fair baselines for comparison. The proposed method is mainly compared against 7 methods from 3 papers, respectively in 2016, 2017 and 2019. A simple literature search gave me a few methods that claim to have achieved better pruning performances and are fairly well cited and fairly highly starred: https://arxiv.org/abs/2203.04248, https://arxiv.org/abs/2208.11580, https://arxiv.org/abs/2210.04092, https://arxiv.org/abs/2112.00029. I would encourage the authors to either compare to some of the latest works, or provide a justification on not considering these more recent works --- meanwhile, I would resort to reviewers with more experience in pruning on their opinions regarding this matter.\", \"questions\": \"1.\tPlease refer to Weakness 1.\n2.\tMinor suggestion. For LaTeX notations, the subscripts (especially subscripts of superscripts) can be wrapped in text format. 
What I mean is instead of $\\\\mathbb{R}^{l_out}$, using $\\\\mathbb{R}^{l_\\\\textrm{out}}$ might give you a better looking symbol.\\n3.\\tWhat do a, b, c, d, e respectively mean in Equation 3? Could the authors define them or point to the text where they are defined?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Thank you for your responses! I will keep my score.\"}", "{\"title\": \"Reply to All Reviewers\", \"comment\": \"Thank you for your thorough reviews and valuable feedback. We are grateful for the opportunity to address the concerns raised and clarify aspects of our work.\\n\\nWe've carefully revised our manuscript according to the reviewer's valuable suggestions. The new content is highlighted in orange.\"}", "{\"title\": \"Reply to Reviewer 1PJY (1/2)\", \"comment\": \"We express our gratitude for your recognition on our work.\\n> **Weakness & Question 1** Since pruning is not my area of expertise, I am unsure whether the authors used fair baselines for comparison. The proposed method is mainly compared against 7 methods from 3 papers, respectively in 2016, 2017 and 2019. A simple literature search gave me a few methods that claim to have achieved better pruning performances and are fairly well cited and fairly highly stared: https://arxiv.org/abs/2203.04248, https://arxiv.org/abs/2208.11580, https://arxiv.org/abs/2210.04092, https://arxiv.org/abs/2112.00029. I would encourage the authors to either compare to some of the latest works, or provide a justification on not considering these more recent works --- meanwhile, I would resort to reviewers with more experience in pruning on their opinions regarding this matter.\\n\\nThank you for providing additional related work in the field of pruning. 
Our paper primarily introduces a novel \\nHessian-vector product method, where the Hessian matrix represents the second-order term in the Taylor \\nexpansion of the loss function. As such, we mainly focus our comparisons on works that similarly utilize \\neither the Hessian matrix or the first-order term of the Taylor expansion. The seven methods you highlighted\\n are from Table 3, which presents structured pruning results. For unstructured pruning, our results are also\\n compelling, as shown in Tables 4 and 5, particularly against CHITA [1].\\n\\nIn the paper you recommended, both \\\"Dual Lottery Ticket Hypothesis\\\" and \\\"Advancing Model Pruning via \\nBi-level Optimization\\\" build on the Lottery Ticket Hypothesis, which seeks to identify an effective \\nsubnetwork prior to training, a concept distinct from our approach. \\\"Pixelated Butterfly\\\" does not use \\nHessian or Taylor expansion information, thus it is not a relevant comparison for our work. \\\"Optimal \\nBrain Compression,\\\" which utilizes the Hessian matrix for pruning, is indeed worth comparing. However, \\nwe were unable to replicate the author's results in our implementation (Our GMP achieves 73.16% accuracy \\nunder the same conditions described in OBC, which is significantly lower than the reported 74.86%), and \\nthe time constraints of the rebuttal period prevent us from implementing OBA within the OBC structure. 
\\nWe have expanded our unstructured pruning results on CIFAR10 using Resnet20 and included CBS [2] and WoodFisher [3] in our \\ncomparisons.\\n\\n\\n| Weight (91%) | | WoodFisher (91\\\\.36%) | | CBS (91\\\\.36%) | | Chita++ (91\\\\.36%) | | OBA (91%) | |\\n| :----------- | :--------- | :--------------------- | :---------- | :-------------- | :-------- | :------------------ | :--------- | :----------- | :--------- |\\n| Accuracy (%) | Ratio (%) | Accuracy (%) | Ratio (%) | Accuracy (%) | Ratio (%) | Accuracy (%) | Ratio (%) | Accuracy (%) | Ratio (%) |\\n| 90\\\\.66 | 99\\\\.63 | - | - | - | - | - | - | 90\\\\.83 | **99\\\\.81** |\\n| 90\\\\.82 | 99\\\\.80 | - | - | - | - | - | - | 90\\\\.90 | **99\\\\.89** |\\n| 90\\\\.67 | 99\\\\.64 | 91\\\\.37 | **100\\\\.01** | 91\\\\.35 | 99\\\\.99 | 91\\\\.25 | 99\\\\.88 | 90\\\\.65 | 99\\\\.62 |\\n| 90\\\\.67 | 99\\\\.64 | 91\\\\.15 | 99\\\\.77 | 91\\\\.21 | 99\\\\.84 | 91\\\\.20 | **99\\\\.82** | 90\\\\.35 | 99\\\\.29 |\\n| 90\\\\.79 | **99\\\\.77** | 90\\\\.23 | 98\\\\.76 | 90\\\\.58 | 99\\\\.15 | 91\\\\.04 | 99\\\\.65 | 90\\\\.57 | 99\\\\.53 |\\n| 90\\\\.23 | 99\\\\.15 | 87\\\\.96 | 96\\\\.28 | 88\\\\.88 | 97\\\\.29 | 90\\\\.78 | 99\\\\.37 | 90\\\\.69 | **99\\\\.66** |\\n| 88\\\\.83 | 97\\\\.62 | 81\\\\.05 | 88\\\\.71 | 81\\\\.84 | 89\\\\.58 | 90\\\\.38 | **98\\\\.93** | 89\\\\.94 | 98\\\\.84 |\\n| 85\\\\.03 | 93\\\\.44 | 62\\\\.63 | 68\\\\.55 | 51\\\\.28 | 56\\\\.13 | 88\\\\.72 | 97\\\\.11 | 89\\\\.64 | **98\\\\.51** |\\n| 67\\\\.00 | 73\\\\.63 | 11\\\\.49 | 12\\\\.58 | 13\\\\.68 | 14\\\\.97 | 79\\\\.32 | 86\\\\.82 | 86\\\\.27 | **94\\\\.80** |\\n\\nAs demonstrated, OBA significantly outperforms other methods at high sparsity levels, confirming its efficacy.\\n\\n[1] Benbaki, Riade, et al. \\\"Fast as chita: Neural network pruning with combinatorial optimization.\\\" In ICML 2023.\\n\\n[2] Yu, Xin, et al. 
\\\"The combinatorial brain surgeon: pruning weights that cancel one another in neural networks.\\\" In ICML 2022.\\n\\n[3] Singh, Sidak Pal, and Dan Alistarh. \\\"Woodfisher: Efficient second-order approximation for neural network compression.\\\" In NeurIPS 2020.\"}", "{\"title\": \"Reply to Reviewer 1PJY (2/2)\", \"comment\": \"> **Question 2** Minor suggestion. For LaTeX notations, the subscripts (especially subscripts of superscripts)\\ncan be wrapped in text format. What I mean is instead of $\\\\mathbb R^{l_{out}}$, using $\\\\mathbb R^{l_{\\\\text{out}}}$\\n might give you a better looking symbol.\\n\\n Thanks for your helpful suggestions. Wrapping these notations with \\\"\\\\text\\\" do present a better looking. We've changed\\n all these symbols into \\\"\\\\text{}\\\" version in the revised manuscript.\\n\\n > **Question 3** What do a, b, c, d, e respectively mean in Equation 3? Could the authors define them or point to the text where they are defined?\\n\\na, b, c, d, e represent the index of 5 dimensions, respectively the output channel/neuron dimension with length $l_{\\\\text{out}}$,\\nthe flattened output feature size dimension with length $p_{\\\\text{out}}$, the input channel/neuron dimension with length $l_{\\\\text{in}}$, \\nthe flattened weight size for each input and output neuron/channel pair with length $p_{\\\\text{weight}}$, and the \\nflattened input feature size dimension with length $p_{\\\\text{input}}$. You could refer to the beginning part of section 3.1 to better understand them with concrete examples.\"}", "{\"title\": \"Reply to Reviewer fJfY (1/2)\", \"comment\": \"We appreciate the valuable suggestions and efforts from the reviewer!\\n\\n> **Weakness 1** The field is very busy with many results going back to the early 90s. Computational results and simplifying structure of the Hessian for feedforward network are found in many papers including \\u2022 Wille, J., 1997, June. 
On the structure of the Hessian matrix in feedforward networks and second derivative methods. In Proceedings of International Conference on Neural Networks (ICNN'97) (Vol. 3, pp. 1851-1855). IEEE. \u2022 Buntine, W.L. and Weigend, A.S., 1994. Computing second derivatives in feed-forward networks: A review. IEEE transactions on Neural Networks, 5(3), pp.480-488. \u2022 Wu, Y., Zhu, X., Wu, C., Wang, A. and Ge, R., 2020. Dissecting hessian: Understanding common structure of hessian in neural networks. arXiv preprint arXiv:2010.04261 \u2022 Singh, S.P., Bachmann, G. and Hofmann, T., 2021. Analytic insights into structure and rank of neural network hessian maps. Advances in Neural Information Processing Systems, 34, pp.23914-23927.\n\nThanks for pointing out these works, which lay the groundwork for Hessian analysis in neural networks. We've added them to our related work, merging the Hessian Matrix part into section 2 (preliminary) as part of the main paper. For more details please refer to section 2 of our revised manuscript.\n\n> **Weakness 2** The paper presents a reduced complexity calculation of Hessian x vector, this is well-known - see work by Barak Pearlmutter for example; Pearlmutter, B.A., 1994. Fast exact multiplication by the Hessian. Neural computation, 6(1), pp.147-160.\n\nWe are very grateful to you for introducing this work, which we were previously unaware of! While our research shares a fundamental concept with \"Fast Exact Multiplication by the Hessian,\" the specifics of our approaches differ significantly. The work from 1994 introduces a differential operator, $\\mathcal{R}_{\\mathbf{v}}(f(\\mathbf{w})) = \\left.(\\partial / \\partial r) f(\\mathbf{w}+r \\mathbf{v})\\right|_{r=0}$, for deriving second-order derivatives in a single fully connected layer, a recurrent layer, and a Stochastic Boltzmann Machine. However, the use of single-layer networks has become quite rare in contemporary times. 
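For intuition, that operator is simply the directional derivative of the gradient along a vector, which a finite difference can sketch (an illustrative toy with a known Hessian; neither Pearlmutter's exact procedure nor the OBA pipeline):

```python
import numpy as np

# f(w) = 0.25 * sum(w_k^4): gradient is w**3, Hessian is diag(3 * w**2).
def grad(w):
    return w ** 3

def hvp_fd(w, v, eps=1e-4):
    # Directional derivative of the gradient along v:
    # d/dr grad(w + r*v) at r = 0  ~  H(w) @ v  (central difference).
    return (grad(w + eps * v) - grad(w - eps * v)) / (2 * eps)

w = np.array([1.0, -2.0, 0.5])
v = np.array([0.3, 1.0, -1.0])
exact = 3 * w ** 2 * v        # the known Hessian-vector product
approx = hvp_fd(w, v)
```

An exact method evaluates this directional derivative analytically rather than numerically, avoiding the truncation error of the finite difference.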
Our focus is on the more complex Hessian submatrices across layers. Our method, OBA, facilitates the computation of the Hessian-vector product in multi-layer networks, which are the predominant form of neural networks today, marking a clear departure from the techniques used in \"Fast Exact Multiplication by the Hessian\".\n\n> **Question 1** Empirical evidence: While the results for CNNs are impressive, please consider transformers. There is much interest in pruning and redundancy in transformers e.g.\na. Men, X., Xu, M., Zhang, Q., Wang, B., Lin, H., Lu, Y., Han, X. and Chen, W., 2024. Shortgpt: Layers in large language models are more redundant than you expect. arXiv preprint arXiv:2403.03853. b. Lad, V., Gurnee, W. and Tegmark, M., 2024. The Remarkable Robustness of LLMs: Stages of Inference?. arXiv preprint arXiv:2406.19384.\n\nThank you for your recommendation! We have indeed performed experiments on ViT-B/16 and achieved promising results compared to magnitude pruning and first-order Taylor approximation (see Table 2). It would be worthwhile to conduct further experiments testing its effectiveness on other architectures such as BERT and GPT in future studies.\n\n> **Question 2** L652 Consider setting the scene, motivation and related work as an integral part of the paper. With due reference to early work on pruning and Hessian structure\n\nThanks! We've revised the paper according to your suggestions. Please see section 2 of our revised paper.\n\n> **Question 3** L058-9 The OBS method is only different from OBD when the non-diagonal Hessian is used, i.e. they do not ignore the off-diagonal terms in OBS\n\nWe apologize for the unclear wording. OBS does not discard the second-order derivative information but rather approximates it with the Fisher information matrix. 
We've revised the content as follows:\\n*They either discard or approximate the second-order partial derivatives between all pairs of parameters, which capture the change of loss on one parameter when deleting another parameter.*\"}", "{\"title\": \"Response to rebuttal\", \"comment\": \"The authors provided a disciplined response to my prior question on why they did not include comparisons against certain more recent pruning methods, and I am satisfied with the answer. Under the current form of the paper, I do not see pressing reasons for a major rating increase from 6 marginal accept to 8 clear accept, and I would feel more comfortable keeping my original rating. Nevertheless, I acknowledge the extra effort spent by the authors and wish them all the best.\"}", "{\"title\": \"Please argue why Pearlmutter's method is restricted to a single layer machine\", \"comment\": \"Please argue why you think Pearlmutter's method (for computing the product of Hessian and vector) is restricted to a single layer machine. See also this ref:\\nM\\u00f8ller, M. (1993a). Exact calculation of the product of the Hessian matrix of feed-forward network error functions and\\na vector in O(n) time. Daimi PB-432, Computer Science Department, Aarhus University, Denmark\\nfor the same general result\"}", "{\"title\": \"Reply to Reviewer fJfY (Pearlmutter's method)\", \"comment\": \"We are sorry that we misunderstood Pearlmutter's method for computing the product of Hessian and vector. After carefully reading the two papers, we agree that Pearlmutter's method can be applied to multiple layers including fully connected neural networks and recurrent neural networks.\\n\\nOur work extends the idea of exact Hessian-vector product computation to pruning, with a different Hessian-vector product calculation pipeline. This statement is added in the section 2 of our revised manuscript:\\n\\n\\\"*Pearlmutter (1994) initially introduced an efficient method for computing the Hessian-vector product. 
Our research applies this idea to the pruning of modern network architectures including CNNs and Transformers.*\\\"\"}", "{\"title\": \"Why not simply include the first order term?\", \"comment\": \"Thank you for the additional experiment to investigate the role of the first order term. Although you seem to have evidence that the term is small, I see no reason not to include it in the overall estimate (it is very limited additional compute, right?) for rigor and as a safety precaution, towards other applications where the term could be larger (e.g. when regularization is more important)\"}", "{\"summary\": \"The paper concerns the importance of pruning to reduce computational burden of neural network inference. The structure of the Hessian matrix is analysed with a productive derivation of structure for feedforward nets with so-called series and parallel connectivity. Empirical results are presented for standard (vision) CNNs.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper\\u2019s most prominent strength is the set of Hessian structures derived for series and parallel connectivity. The empirical results are promising showing significant reductions in FLOPS for virtually unchanged performance.\", \"weaknesses\": \"1)\\tThe field is very busy with many results going back to the early 90s. Computational results and simplifying structure of the Hessian for feedforward network are found in many papers including\\n\\u2022\\tWille, J., 1997, June. On the structure of the Hessian matrix in feedforward networks and second derivative methods. In Proceedings of International Conference on Neural Networks (ICNN'97) (Vol. 3, pp. 1851-1855). IEEE.\\n\\u2022\\tBuntine, W.L. and Weigend, A.S., 1994. Computing second derivatives in feed-forward networks: A review. IEEE transactions on Neural Networks, 5(3), pp.480-488.\\n\\u2022\\tWu, Y., Zhu, X., Wu, C., Wang, A. and Ge, R., 2020. 
Dissecting hessian: Understanding common structure of hessian in neural networks. arXiv preprint arXiv:2010.04261\\n\\u2022\\tSingh, S.P., Bachmann, G. and Hofmann, T., 2021. Analytic insights into structure and rank of neural network hessian maps. Advances in Neural Information Processing Systems, 34, pp.23914-23927.\\n\\nThe paper presents a reduced complexity calculation of Hessian x vector, this is well-known - see work by Barak Pearlmutter for example; Pearlmutter, B.A., 1994. Fast exact multiplication by the Hessian. Neural computation, 6(1), pp.147-160.\\n\\n\\u2022\\tIn general the context of related work is not sufficient. I am not a fan of placing context related work in an appendix.\", \"questions\": \"1)\\tEmpirical evidence: While the results for CNNs are impressive, please consider transformers. There is much interest in pruning and redundancy in transformers e.g.\\na.\\tMen, X., Xu, M., Zhang, Q., Wang, B., Lin, H., Lu, Y., Han, X. and Chen, W., 2024. Shortgpt: Layers in large language models are more redundant than you expect. arXiv preprint arXiv:2403.03853.\\nb.\\tLad, V., Gurnee, W. and Tegmark, M., 2024. The Remarkable Robustness of LLMs: Stages of Inference?. arXiv preprint arXiv:2406.19384.\\n\\n2)\\tL652 Consider setting the scene, motivation and related work as an integral part of the paper. 
With due reference to early work on pruning and Hessian structure\n\n3)\tL058-9 The OBS method is only different from OBD when the non-diagonal Hessian is used, i.e. they do not ignore the off-diagonal terms in OBS\n\n4)\tL095 You mention that the Gauss-Newton approximation is insufficient; what is the evidence?\n\n5)\tL087 Do you keep track of the first order term (which is assumed zero in OBD/OBS)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents Optimal Brain Apoptosis (OBA), which appears to be a novel pruning method for neural networks that builds upon the principles of Optimal Brain Damage (OBD), which date back to the 1980s and 1990s. Recognizing the limitations of previous pruning methods, OBA leverages the full Hessian-vector product to compute the importance of each parameter in the network accurately, moving beyond previous methods that relied on approximations. The authors first analyze the conditions under which Hessian submatrices between layers are nonzero and develop an approach to calculate the second-order Taylor expansion for each parameter (enabling precise pruning in both structured and unstructured settings). Empirical tests demonstrate that OBA effectively reduces computational overhead while maintaining model accuracy. 
The authors acknowledge that while OBA works well for architectures like CNNs and Transformers, extending it to more complex models like RNNs or State Space Models will require further research.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": [\"Unlike previous methods that approximate the Hessian matrix, OBA calculates the full Hessian-vector product, providing a more accurate measure of parameter importance and leading to more precise pruning.\", \"OBA supports both structured pruning (removing entire neurons, channels, or layers) and unstructured pruning (removing individual weights). It's compatible with a wide range of architectures.\", \"The approach optimizes the calculation of the Hessian-vector product (reduced computational complexity).\", \"Showing adaptability to widely-used architectures in deep learning.\", \"The authors provide a solid theoretical basis for their approach, with clear proofs.\"], \"weaknesses\": [\"It seems like calculating the full Hessian-vector product, even with optimizations, can still be computationally expensive for larger and more complex networks.\", \"Extending this method to newer architectures seems to require additional work.\", \"-The method was tested on a specific set of architectures, and its generalizability across a wider range of tasks or domains is yet to be fully established.\", \"The experiments make the paper feel somewhat outdated, as the evaluations and datasets used are reminiscent of those from 7-8 years ago.\"], \"questions\": \"Please see the weaknesses section. 
Providing feedback on those comments during rebuttal will be appreciated.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply to Reviewer fJfY (2/2)\", \"comment\": \"> **Question 4** L095 You mention that the Gauss-Newton approximation is insufficient; what is the evidence?\n\nThat's a good question. In our importance calculation process, the importance score of a weight $\\theta_i$ is the sum of all second-order derivatives multiplied by the corresponding parameter changes, $\\sum_{j}\\frac{\\partial^2 \\mathcal{L}}{\\partial\\theta_i\\partial\\theta_j}\\delta\\theta_i\\delta\\theta_j$.\nIf we take the approximation, $\\sum_{j}\\frac{\\partial^2 \\mathcal{L}}{\\partial\\theta_i\\partial\\theta_j}\\delta\\theta_i\\delta\\theta_j$ would turn into\n$\\sum_{j}\\frac{\\partial \\mathcal{L}}{\\partial\\theta_i}\\frac{\\partial \\mathcal{L}}{\\partial\\theta_j}\\delta\\theta_i\\delta\\theta_j=\\sum_{j}\\frac{\\partial \\mathcal{L}}{\\partial\\theta_j}\\delta\\theta_j\\frac{\\partial \\mathcal{L}}{\\partial\\theta_i}\\delta\\theta_i = c \\frac{\\partial \\mathcal{L}}{\\partial\\theta_i}\\delta\\theta_i$, where $c$ is a constant. This importance score is the same as that of the Taylor method, which is outperformed by OBA in nearly all results.\n\n> **Question 5** L087 Do you keep track of the first order term (which is assumed zero in OBD/OBS)?\n\nWe don't keep track of the first order term in our algorithm.\"}", "{\"title\": \"Reply to Reviewer v9aH (1/1)\", \"comment\": \"Thanks for your recognition of our work!\n> **Weakness** The improvements in accuracy and speed do not seem overwhelming.\n\nWe acknowledge your concerns regarding some of the results presented. 
Specifically, in Table 2, you will notice that our method, OBA, shows significantly better results for structured pruning on the ViT-B/16 model compared to other methods. In terms of unstructured pruning at high sparsity levels, as shown in Tables 4 and 5, OBA notably outperforms CHITA and other competing approaches. However, we realize that the superiority of our method is less pronounced in Figure 3 and Table 3, though it still maintains a marginal advantage over other methods. Thank you for bringing this to our attention.\\n\\n> **Question** Considering that recurrent neural networks use back-propagation through time, for a given time window, how tractable would be to extend your method to such networks?\\n\\nIn an RNN, the concept of time is incorporated, requiring an expansion of series and parallel connectivity in networks. \\nFor layers that do not convey temporal information, series and parallel connectivity are confined to individual time steps.\\nFor recurrent neurons that propagate information through time, such as the basic RNN neurons whose hidden state is updated as \\n$$h_t=\\\\tanh \\\\left(W_{h h} h_{t-1}+W_{x h} x_t+b_h\\\\right)$$\\nand\\n$$y_t=W_{h y} h_t+b_y,$$\\nthe output $y_{t'}$ at time $t'$ is dependent on $x_{\\\\tau}$ where $\\\\tau \\\\leq t'$. This dependency causes layers from earlier time steps, up to $t'$, to be in series connectivity with all subsequent layers at time step $t'$.\\nFurthermore, since the parameters $W_{hh}$, $W_{xh}$, $W_{yh}$ interact to generate $y_t$, \\ntheir Hessian submatrix between them is non-zero and must be calculated. The Hessian computation for a State Space Model is analogous to that in a basic RNN.\\n\\nFor more intricate recurrent layers like LSTMs, which incorporate multiple gates that affect the internal state, \\nthe network displays extensive parallel connectivities. \\nComputing the Hessian for such a structure is significantly more complex. 
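As a toy illustration of the temporal dependency described above (hypothetical weights and sizes, not part of OBA):

```python
import numpy as np

# Vanilla RNN cell matching the update equations above, with tiny random weights.
rng = np.random.default_rng(1)
W_hh = 0.5 * rng.standard_normal((3, 3))
W_xh = rng.standard_normal((3, 2))
b_h = np.zeros(3)
W_hy = rng.standard_normal((1, 3))
b_y = np.zeros(1)

def rollout(xs):
    """Run the RNN over a sequence of inputs, returning the output at each step."""
    h = np.zeros(3)
    ys = []
    for x in xs:
        h = np.tanh(W_hh @ h + W_xh @ x + b_h)
        ys.append(W_hy @ h + b_y)
    return ys

xs = [rng.standard_normal(2) for _ in range(4)]
y_base = rollout(xs)[-1]

# Perturbing the *first* input changes the *last* output: the dependency of
# y_{t'} on x_tau for tau <= t' that makes cross-time Hessian blocks non-zero.
xs_pert = [xs[0] + 1e-2] + xs[1:]
y_pert = rollout(xs_pert)[-1]
```

Because the final output reacts to the first input, second derivatives coupling parameters used at different time steps generally do not vanish.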
\\nNonetheless, through a systematic and categorized approach, \\nwe can still effectively address the complexities introduced by these advanced neurons using OBA. These are directions worth exploring in the future.\"}" ] }
88hh5GtLBJ
MetaAdapter: Leveraging Meta-Learning for Expandable Representation in Few-Shot Class Incremental Learning
[ "Lingyu Wu", "Wenhao Yang", "Lijun Zhang" ]
Few-shot class incremental learning (FSCIL) aims to enable models to learn new tasks from few labeled samples while retaining knowledge of previously learned ones. This scenario typically involves an offline base session with sufficient data for pre-training, followed by online incremental sessions where new classes are learned from limited samples. Existing methods either rely on a frozen feature extractor or meta-testing simulation to address overfitting issues in online sessions. However, they primarily learn feature representations using only the base session data, which significantly compromises the model's plasticity in feature representations. To enhance plasticity and reduce overfitting, we propose the MetaAdapter framework, which makes use of meta-learning for expandable representation. During the base session, we expand the network with pre-trained weights by inserting parallel adapters and employ meta-learning to encode generalizable knowledge into these modules. Then, the backbone is further trained on abundant data from the base classes to acquire fundamental classification ability. In each online session, the adapters are first initialized with parameters from meta-training, and subsequently tuned to adapt to the new classes. Leveraging meta-learning to produce initial adapters, MetaAdapter enables the feature extractor to effectively adapt to few-shot new classes, thus improving the generalization of the model. Experimental results on the mini-ImageNet, CUB200, and CIFAR100 datasets demonstrate that our proposed framework achieves state-of-the-art performance.
[ "few-shot class incremental learning", "meta-learning", "feature representation", "residual adapter" ]
Reject
https://openreview.net/pdf?id=88hh5GtLBJ
https://openreview.net/forum?id=88hh5GtLBJ
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wSnX2NmnUp", "ulDWzKVS0H", "uAC5PNnOm2", "tkyKyBxLcW", "tj7Bt8v0sg", "pbx60IoTqA", "pRlPHcGcbb", "jcYm1DBMrf", "hemyyVp8DC", "guKUlfcakj", "glyr1NX4ab", "fUgZMs2WNJ", "dmR5GIGA4g", "bacQsYC1Un", "bJUR8BX8Ue", "anmAu3cpvA", "YqjOnmTi2G", "YSXk0UFpEi", "Wq28cVfEXS", "TvIcnCDKTc", "SKWmCgfypW", "RqWtTG9q90", "QrQ26dC6J2", "OxkRofpAiU", "NtmjOjw0V8", "LsjuxVcb1W", "Hts0E2rmbY", "G94ZUkUIwn", "EQIWuJvJR9", "DOVCrCbB4s", "AxC3v0JyPz", "8QPe7wpxk9", "6gc3nLiCRv", "3OZlpQbpyc", "2a9p4ia4LK" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review" ], "note_created": [ 1732269908189, 1729819093533, 1733248145639, 1733247210527, 1732269251818, 1730633824998, 1732504284917, 1733147701276, 1732269666911, 1732269977039, 1733170444215, 1734507824084, 1732269566885, 1729610329419, 1732869495709, 1732871699956, 1733104927526, 1732269762246, 1732268668548, 1732269337498, 1733143288578, 1732450981612, 1732597731302, 1733104824949, 1730097119091, 1732268581723, 1733200767104, 1732531682265, 1733104973818, 1737523994678, 1732449092847, 1732270402928, 1732869896550, 1732270695359, 1730399107184 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9612/Authors" ], [ "ICLR.cc/2025/Conference/Submission9612/Reviewer_6ye5" ], [ "ICLR.cc/2025/Conference/Submission9612/Authors" ], [ "ICLR.cc/2025/Conference/Submission9612/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission9612/Authors" ], [ "ICLR.cc/2025/Conference/Submission9612/Reviewer_7Rcy" ], [ "ICLR.cc/2025/Conference/Submission9612/Reviewer_6ye5" ], [ "ICLR.cc/2025/Conference/Submission9612/Authors" ], [ "ICLR.cc/2025/Conference/Submission9612/Authors" ], [ "ICLR.cc/2025/Conference/Submission9612/Authors" ], [ "ICLR.cc/2025/Conference/Submission9612/Reviewer_ma3t" ], [ "ICLR.cc/2025/Conference/Submission9612/Area_Chair_5RFH" ], [ "ICLR.cc/2025/Conference/Submission9612/Authors" ], [ "ICLR.cc/2025/Conference/Submission9612/Reviewer_ma3t" ], [ "ICLR.cc/2025/Conference/Submission9612/Authors" ], [ "ICLR.cc/2025/Conference/Submission9612/Authors" ], [ "ICLR.cc/2025/Conference/Submission9612/Authors" ], [ "ICLR.cc/2025/Conference/Submission9612/Authors" ], [ "ICLR.cc/2025/Conference/Submission9612/Authors" ], [ "ICLR.cc/2025/Conference/Submission9612/Authors" ], [ "ICLR.cc/2025/Conference/Submission9612/Reviewer_6ye5" ], [ "ICLR.cc/2025/Conference/Submission9612/Authors" ], [ "ICLR.cc/2025/Conference/Submission9612/Reviewer_ma3t" ], [ "ICLR.cc/2025/Conference/Submission9612/Authors" ], [ "ICLR.cc/2025/Conference/Submission9612/Reviewer_aBEC" ], [ "ICLR.cc/2025/Conference/Submission9612/Authors" ], [ "ICLR.cc/2025/Conference/Submission9612/Reviewer_x4wA" ], [ "ICLR.cc/2025/Conference/Submission9612/Reviewer_x4wA" ], [ "ICLR.cc/2025/Conference/Submission9612/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission9612/Reviewer_7Rcy" ], [ "ICLR.cc/2025/Conference/Submission9612/Authors" ], [ "ICLR.cc/2025/Conference/Submission9612/Authors" ], [ "ICLR.cc/2025/Conference/Submission9612/Authors" ], [ "ICLR.cc/2025/Conference/Submission9612/Reviewer_x4wA" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer x4wA (1/2)\", \"comment\": \"Thanks for your constructive comments! 
Below, we address your concerns point by point.\\n\\n---\\n\\n**Q1**: Two highly related works [1, 2] with the same task setting are not compared.\\n\\n**A1**: We have updated our experimental results to include a detailed comparison with [1] in our manuscript. However, we are unable to directly compare with [2] as it primarily utilizes a ViT-based architecture, which differs from the ResNet backbones used in our work. Nevertheless, our method demonstrates superior performance on both CIFAR100 and mini-ImageNet datasets.\\n\\nOn mini-ImageNet\\n\\n| Methods | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |\\n| :----------: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: |\\n| Yourself [2] | 84.0 | 77.6 | 73.7 | 70.0 | 68.0 | 64.9 | 62.1 | 59.8 | 59.0 |\\n| MetaAdapter | 84.1 | 80.0 | 76.0 | 72.6 | 69.7 | 66.9 | 64.1 | 62.4 | 61.0 |\\n\\nOn CIFAR100\\n\\n| Methods | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |\\n| :----------: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: |\\n| Yourself [2] | 82.9 | 76.3 | 72.9 | 67.8 | 65.2 | 62.0 | 60.7 | 58.8 | 56.6 |\\n| MetaAdapter | 84.1 | 78.9 | 75.2 | 71.6 | 68.3 | 65.3 | 63.5 | 61.5 | 59.2 |\\n\\n---\\n\\n**Q2**: Why don't more similar inter-class feature representations affect classification?\\n\\n**A2:** Excessive feature compactness, caused by larger $w_{fcl}$ values, can negatively impact base session performance by reducing inter-class separation. By contrast, an appropriately chosen $w_{fcl}$ balances compactness and separation, preserving space for new-class adaptation while maintaining stability in the feature space. In the **General Response**, we conducted experiments on both base and incremental accuracy across varying $w_{fcl}$ values. From Table 1, base accuracy remains stable across different $w_{fcl}$ values, but excessively high values (e.g., $w_{fcl} = 3.0$) reduce performance due to over-compactness, which impacts base-class feature representation. 
From Table 2, incremental accuracy improves with moderate $w_{fcl}$ values (e.g., $w_{fcl}=1.0$), as this balances compactness and separation. Extremely high or low $w_{fcl}$ values reduce incremental accuracy due to excessive compactness or dispersion in the feature space.\\n\\n---\\n\\n**Q3**: Need further proof or experiments to demonstrate that more similar inter-class feature representations reserve embedding space. Additionally, will more similar inter-class representations possibly lead to the overall embedding space shrinking, similar to collapse in contrastive learning?\\n\\n**A3**: To address this concern, we conducted additional experiments in the **General Response** to analyze the relationship between inter-class feature representations and embedding space reservation. Specifically, we conducted experiments on both base and incremental accuracy across varying $w_{fcl}$ values (Table 1 and Table 2) and evaluated the **relative angular disparity** $T(f_\\\\theta)$ between new-class samples and base-class prototypes (Table 3) and the **inter-class angular distance** among base-class prototypes (Table 4). The results in Table 4 indicate that the inter-class angular distance among base-class prototypes decreases as $w_{fcl}$ increases, which shows that FCL effectively compacts the embedding space. As shown in Table 1 and Table 2, this compactness does not result in an overall collapse of the embedding space. Additionally, the $T(f_\\\\theta)$ metric in Table 3 improves with moderate $w_{fcl}$, showing that reducing the angular separation among base-class features to an appropriate extent supports better adaptation to new classes. \\n\\n------\"}", "{\"summary\": \"Pointing out the heavy reliance on a feature extractor trained only on the base session, this paper leverages meta-learning to effectively adapt to new classes. 
Specifically, the proposed method first constructs a meta-learning scenario using the dataset from the base session and trains an adapter with Reptile, one of the meta-learning algorithms. Then, MetaAdapter trains a backbone network with a feature compactness loss to reserve feature space for future new classes. Finally, MetaAdapter updates the adapter using few-shot new-class data. The authors evaluate the proposed method on the CIFAR-100, miniImageNet, and CUB-200 datasets.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The authors address one of the key challenges in Few-Shot Class Incremental Learning (FSCIL): the lack of plasticity caused by the heavy reliance on the encoder trained during the base session. To overcome this, they propose leveraging a meta-learning approach and tackle several challenges that arise when applying meta-learning in the context of FSCIL.\", \"weaknesses\": \"1) Unclear descriptions of the proposed method.\\n\\nThe meaning of 'c_pseudo,' mentioned in lines L223-L224, is difficult to understand, and a formal definition would be helpful. Additionally, in Equation 4, the dimension of 'p_concat' is unclear. It appears that 'c_batch' has a shape of B x C x d, while 'p' has a shape of B x d, where B, C, and d represent the batch size, the number of base classes, and the feature dimension, respectively. If this is the case, concatenation would be impossible; if not, further clarification from the authors on the structure of 'p_concat' is necessary.\\n\\nFurthermore, Section 3.5 is challenging to interpret. Figure 2 is particularly difficult to follow, especially in relation to the adapter\\u2019s structure. It seems that the adapter may share convolutional layers with the backbone model, as indicated by the gray and sky-blue colors. However, the gray coloring appears to make this unclear. 
Additionally, the number of channels between the adapter convolutional layer (shown in red) and the backbone convolutional layer (in sky-blue) seems to differ. Yet, in Equation 13, these two layers are simply added, which would not be feasible with different channel numbers.\\n\\nThese issues make it challenging to understand the few-shot adaptation phase. A more explicit explanation of the adapter architectures would be beneficial.\\n\\n2) Motivation of the feature compactness loss (FCL)\\n\\nIn L211-L215, the authors argue that traditional optimization during the base session results in a dispersed embedding space that does not accommodate future new classes. To address this, they propose Feature Compactness Loss (FCL), which compacts the feature space to reserve space for future new classes.\\nWith similar motivation, many existing works on Few-Shot Class Incremental Learning (FSCIL) [1, 2] have aimed to maximize the margin between classes in the feature space. While FCL appears to share this motivation, it takes the opposite approach: rather than maximizing the margin, it reduces the overall feature space. 
To validate FCL, the authors should provide additional analysis to explain why compacting the feature space is more effective than maximizing class margins in preserving space for new classes.\\nThe reviewer encourages the authors to refer to [3], which proposes reducing inter-class distance to improve representation learning in FSCIL and provides an analysis on its implications.\\n\\n[1] Yang et al, \\\"Neural collapse inspired feature-classifier alignment for few-shot class incremental learning\\\", in ICLR 2023.\\n\\n[2] Zhou et al, \\\"Forward compatible few-shot class-incremental learning\\\", in CVPR2022.\\n\\n[3] Oh et al, \\\"CLOSER: Towards Better Representation Learning for Few-Shot Class-Incremental Learning\\\", in ECCV2024.\\n\\n3) Fairness issue\\n\\nIn Appendix A, the authors state that they use ResNet-12 for both mini-Imagenet and CIFAR-100 experiments.\\nHowever, several existing methods like FACT and ALICE adopt ResNet-18 for miniImageNet experiments.\\nThus, the comparison with these methods is unfair and may not demonstrate the effectiveness of the proposed method.\", \"questions\": \"Please refer to the weakness part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer ma3t\", \"comment\": \"Thank you for your constructive feedback and for raising the score of our submission! We highly appreciate your time and consideration. Below, we address your concerns point by point.\\n\\n------\\n\\n**Q1**: The paper\\u2019s contribution seems somewhat incremental, as the motivation behind meta-learning and feature compactness overlaps with existing work.\\n\\n**A1**: Thank you for raising this point. While our work builds on existing concepts, we address critical limitations that have not been fully explored in prior studies. 
\\n\\nThe motivation behind the Feature Compactness Loss (FCL) stems from the limitations of traditional pre-training methods, which often optimize empirical loss and maximize inter-class margins for base-class prototypes. While these strategies enhance feature discrimination, they may result in overfitting to base classes and reduced adaptability to few-shot new classes. Additionally, this strategy overlooks the issue of Minority Collapse [1], where few-shot class features cluster too tightly in imbalanced scenarios, potentially resulting in performance degradation for new classes. To mitigate these issues and reserve capacity for incremental learning, we propose the Feature Compactness Loss (FCL). By compacting both inter-class and intra-class distances, FCL prevents the embedding space from becoming overly dispersed, which preserves learning capacity for future few-shot incremental learning scenarios.\\n\\nRegarding the motivation for meta-learning, unlike prior works that simulate incremental tasks using base-class data, we leverage meta-learning to produce meta-initialized adapters. These adapters are encoded with task-agnostic knowledge and provide a generalizable starting point to refine feature representations.\\n\\n------\\n\\n**Q2**: The effectiveness of the proposed method remains unclear. The benchmark results show some performance issues, especially regarding forgetting, which is a key challenge in continual learning. Additionally, the method seems to rely heavily on achieving high accuracy on the base classes.\\n\\n**A2**: Thank you for raising this concern. Our method effectively mitigates forgetting, as shown by the performance drop (PD) metric in Tables 1, 5, and 6 of the revised manuscript. While the PD metric reflects a model's ability to mitigate forgetting, it represents only one aspect of performance. For example, a model with consistently 0 accuracy across sessions would have a PD of 0, which might misleadingly appear optimal. 
Thus, PD must be evaluated alongside accuracy for a comprehensive assessment.\\n\\nMoreover, the results in Table 3 of the manuscript and Tables 1 and 2 of the General Response highlight that our method achieves higher base accuracy and incremental accuracy compared to NC-FSCIL [2]. This shows that the observed improvements are not merely due to strong performance on base classes but also stem from enhanced adaptability during incremental sessions.\\n\\n------\\n\\n**Q3**: Concerns about the training sequence remain. Additionally, pre-training on the full base classes, as shown in Meta-Baseline (ICCV 2021), has been proven effective in regular few-shot learning.\\n\\n**A3**: Thank you for raising this concern. The reason why we train adapters before fine-tuning the backbone is to preserve the meta-learning phase's focus on unseen categories. We also acknowledge that pre-training on the full base classes, as demonstrated in Meta-Baseline (ICCV 2021), is effective in regular few-shot learning. However, such methods typically rely on a frozen feature extractor, which can limit plasticity. In contrast, our approach uses meta-initialized adapters to reduce overfitting while enhancing the model's plasticity in the FSCIL scenario.\\n\\n------\\n\\n[1] Neural collapse inspired attraction--repulsion-balanced loss for imbalanced learning. Neurocomputing2023.\\n\\n[2] Neural Collapse Inspired Feature-Classifier Alignment for Few-Shot Class-Incremental Learning. ICLR2023.\"}", "{\"title\": \"Many thanks! We will improve our paper accordingly.\", \"comment\": \"Dear Reviewer x4wA,\\n\\nThank you for your constructive feedback and for raising the score of our submission! We will continue to strive to improve our paper according to the constructive reviews. Many thanks for your time and consideration!\\n\\nBest regards,\\n\\nAuthors\"}", "{\"title\": \"Response to Reviewer ma3t (1/2)\", \"comment\": \"Thanks for your constructive comments! 
We hope our responses address your concerns, and we kindly invite you to reassess our submission. Please let us know if you have any further questions or suggestions.\\n\\n---\\n\\n**Q1**: The primary motivation of this paper is that previous methods do not update the learned representation during incremental sessions, thereby compromising model plasticity. However, there is a considerable body of research that focuses on balancing model stability and plasticity during incremental sessions while also updating the backbone. This broader body of work is not sufficiently discussed.\\n\\n**A1**: In the FSCIL setting, continual updates often result in significant performance drops for the base session due to the limited data available for new classes. This is why many existing methods in FSCIL prioritize stability over continuous backbone adaptation [1]. Our approach focuses on addressing this challenge by using lightweight meta-initialized adapters to enhance flexibility while preserving stability, without overly compromising the performance on base classes.\\n\\n---\\n\\n**Q2**: MetaFSCIL, a method that updates the backbone during incremental sessions using meta-learning, aligns conceptually with the approach proposed in this paper. However, this method is not adequately discussed, and its relevance is under explored. \\n\\n**A2**: Thank you for your valuable suggestion. We agree that MetaFSCIL is an important related method, and we will include a more detailed discussion in the revised manuscript to highlight its relevance and differentiate it from our approach. To clarify, MetaFSCIL samples a sequence of sessions to mimic the evaluation protocol during the base phase and evaluates the model using a meta-objective. In contrast, our MetaAdapter approach is designed with a different focus. 
Instead of mimicking meta-testing during the base phase, we focus on using meta-learning to obtain meta-initialized adapters that provide a generalizable starting point for expanding and refining feature representations. During the online incremental learning stage, MetaFSCIL uses Bi-directional Guided Modulation (BGM) to generate activation masks to mitigate forgetting. In comparison, our MetaAdapter framework keeps the backbone frozen and utilizes it as a teacher model for knowledge distillation to guide the adaptation of lightweight adapters.\\n\\n---\\n\\n**Q3**: The concept of feature compactness loss (FCL) seems to overlap with FACT. A comparison with FACT would strengthen the contribution.\\n\\n**A3**: FACT uses manifold mixup during the base session to generate virtual classes, which creates space for new categories in subsequent incremental learning stages. Our feature compactness loss (FCL) takes a different approach by compacting both inter-class and intra-class distances during the base session. This design prevents the embedding space from becoming overly dispersed, effectively enabling the model to better adapt to new tasks in future incremental sessions. \\n\\nTo further substantiate the advantages of FCL, we conducted additional experiments in the **General Response** to analyze the relationship between inter-class feature representations and embedding space reservation. Specifically, we evaluated the **relative angular disparity** $T(f_\\\\theta)$ between new-class samples and base-class prototypes (Table 3) and the **inter-class angular distance** among base-class prototypes (Table 4). The results in Table 4 show that the inter-class angular distance among base-class prototypes decreases as $w_{fcl}$ increases, which shows that FCL effectively compacts the embedding space. 
Meanwhile, the $T(f_\\\\theta)$ metric in Table 3 improves when $w_{fcl}$ is set to a moderate value, showing that reducing the angular separation among base-class features to an appropriate extent supports better adaptation to new classes.\\n\\n------\\n\\n**Q4**: The training sequence appears non-intuitive. Adapters are trained before fine-tuning the backbone, which may create inconsistencies. \\n\\n**A4**: Thank you for this observation. The reason we train adapters before fine-tuning the backbone is due to the nature of the meta-learning task. In our approach, the meta-learning phase focuses on tasks involving unseen categories. If we fine-tune the backbone first, it would have already seen all categories, which would defeat the purpose of meta-learning by reducing its ability to generalize to new, unseen classes. \\n\\n---\"}", "{\"summary\": \"This paper introduces MetaAdapter, a framework designed to address challenges in Few-Shot Class Incremental Learning (FSCIL). By employing meta-learning, MetaAdapter initializes adapters that encode general knowledge, aiming to balance stability and plasticity during incremental learning. The training process involves three phases: meta-training adapters, applying a feature compactness loss to reserve space for future classes, and utilizing knowledge distillation during incremental sessions. The framework demonstrates state-of-the-art performance on benchmarks such as mini-ImageNet, CIFAR100, and CUB200.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. **State-of-the-Art Performance:** MetaAdapter achieves competitive results on multiple FSCIL benchmarks, indicating its effectiveness in adapting to new classes with limited data.\\n\\n2. **Comprehensive Framework:** The integration of meta-learning with feature compactness and knowledge distillation offers a holistic approach to incremental learning challenges.\\n\\n3. 
**Well-Structured Presentation:** The paper is organized and clearly articulates the methodology, facilitating understanding.\", \"weaknesses\": \"1. **Limited Novelty in Meta-Learning Approach:** The application of meta-learning for adapter initialization resembles existing methods. Clarification on how this approach differs from established frameworks would strengthen the contribution.\\n\\n2. **Training Complexity:** The three-phase training process, including feature compactness loss and sharpness-aware minimization, adds complexity. This may pose challenges for implementation in resource-constrained environments.\\n\\n3. **Base Task Performance:** The framework appears to underperform in base classification tasks compared to other methods using the same backbone. Understanding the reasons for this discrepancy is crucial, as base task accuracy is vital for incremental learning stability.\\n\\n4. **Typographical Errors:** Minor errors, such as \\u201crataining\\u201d in Line 012 and \\u201cminImageNet\\u201d in Line 448, detract from the paper's professionalism. A thorough review to correct these is recommended.\\n\\n5. **Limited Comparison with Recent Methods:** The paper compares MetaAdapter with only one method from 2024. 
Including a broader range of recent methods would provide a more comprehensive evaluation of its performance.\", \"questions\": \"Please refer to weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Authors,\\n\\nThank you for your effort and detailed responses.\\nYour rebuttal has partially addressed my concerns, including the motivation for the FCL loss.\\nHowever, I think the issue of unclear descriptions has not been sufficiently addressed.\\nWhile your responses have clarified my questions and improved my understanding, a paper should be written in a way that allows readers to comprehend it clearly without requiring additional explanation.\\nI kindly request that you revise the paper to ensure it is self-explanatory and easy to follow. Once revised, I will be happy to reconsider whether the updated version resolves these concerns.\\n\\nAdditionally, the fairness issue remains inadequately addressed.\\nI am aware that ResNet-12 often outperforms ResNet-18 despite the small differences. As such, comparing results from ResNet-12 with those from ResNet-18 is both unfair and meaningless.\\nMoreover, the superior results of the proposed method in the last session seem to stem largely from its strong performance in the base session.\\nTo account for this, it would be better to include an additional metric, such as performance drop (PD) across incremental sessions. Based on PD, I find the proposed method may not be the most effective.\\nAlthough this critique was not included in my original review, and I will not factor it into my rating, I encourage you to consider this aspect in your revisions.\"}", "{\"title\": \"Response to Reviewer 6ye5\", \"comment\": \"Dear Reviewer 6ye5,\\n\\nThank you for your constructive feedback and for raising the score of our submission. 
The PD metric reflects a model's ability to mitigate forgetting, but it represents only one aspect of overall performance. For instance, a model with consistently 0 accuracy across sessions would achieve a PD of 0, which might misleadingly appear optimal. Therefore, PD should always be considered alongside accuracy to provide a comprehensive assessment of an algorithm's effectiveness. We thank you again for helping improve our paper.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"title\": \"Response to Reviewer 6ye5 (2/2)\", \"comment\": \"**Q4**: The motivation for Feature Compactness Loss (FCL) is to compact the feature space after the base session to reserve room for new classes. However, existing FSCIL methods with similar motivations focus on maximizing inter-class margins instead. Given this contrast, additional analysis is needed to explain why compacting features is more effective than maximizing margins.\\n\\n**A4:** Existing FSCIL methods that focus on maximizing inter-class margins, such as ALICE [1], aim to enhance feature discrimination by increasing the separation between base-class prototypes. However, as CLOM [2] suggests, this strategy can lead to overfitting on base classes, which may result in degraded generalization to few-shot new classes. Moreover, such methods ignore the issue of **Minority Collapse [3]**, where features of few-shot classes cluster excessively tightly in imbalanced scenarios, further reducing the model's adaptability to new classes. In contrast, our Feature Compactness Loss (FCL) is specifically designed to compact both inter-class and intra-class distances during the base session. This design prevents the embedding space from becoming overly dispersed, effectively enabling the model to better adapt to new tasks in future incremental sessions. As demonstrated by the experimental results in our manuscript, our method outperforms ALICE. 
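To illustrate the intended effect (a minimal numpy sketch of the idea, not the paper's actual FCL; the function name, the shared-center formulation, and all values below are our own illustrative choices), a compactness term can penalize the angular distance of normalized features from a shared center, so a lower loss corresponds to a tighter, less dispersed embedding:

```python
import numpy as np

def feature_compactness_loss(features, w_fcl=1.0):
    # Pull L2-normalized features toward a shared center direction;
    # lower loss means a tighter (less dispersed) embedding space.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    center = f.mean(axis=0)
    center = center / np.linalg.norm(center)
    # Average (1 - cosine similarity) to the shared center; always >= 0.
    return w_fcl * float(np.mean(1.0 - f @ center))

rng = np.random.default_rng(0)
dispersed = rng.normal(size=(64, 16))                           # spread-out features
compact = np.ones((64, 16)) + 0.05 * rng.normal(size=(64, 16))  # tightly clustered features

loss_dispersed = feature_compactness_loss(dispersed)
loss_compact = feature_compactness_loss(compact)
```

A larger weight (playing the role of $w_{fcl}$ here) pushes all features toward the shared center, compacting inter-class and intra-class distances at once rather than maximizing margins.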
\\n\\nTo further substantiate the advantages of FCL, we conducted additional experiments in the **General Response** to analyze the relationship between inter-class feature representations and embedding space reservation. Specifically, we evaluated the **relative angular disparity** $T(f_\\\\theta)$ between new-class samples and base-class prototypes (Table 3) and the **inter-class angular distance** among base-class prototypes (Table 4). The results in Table 4 show that the inter-class angular distance among base-class prototypes decreases as $w_{fcl}$ increases, which shows that FCL effectively compacts the embedding space. Meanwhile, the $T(f_\\\\theta)$ metric in Table 3 improves when $w_{fcl}$ is set to a moderate value, showing that reducing the angular separation among base-class features to an appropriate extent supports better adaptation to new classes.\\n\\n---\\n\\n**Q5**: Fairness issue: In Appendix A, you use ResNet-12 for both mini-ImageNet and CIFAR-100 experiments, while other methods like FACT and ALICE adopt ResNet-18 for mini-ImageNet. This discrepancy may result in unfair comparisons. \\n\\n**A5**: In our experiments, we observed that prior studies use different backbone architectures for mini-ImageNet and CIFAR100, as highlighted in Table 8 of our paper. We followed C-FSCIL [4] and selected the shallowest architecture, ResNet-12, for both mini-ImageNet and CIFAR100. For CUB200, all methods consistently use the same architecture, ResNet-18 pretrained on ImageNet.\\n\\nAdditionally, the parameter differences between ResNet-12 and ResNet-18 are relatively small. Specifically:\\n\\n- **ResNet-18**: 11.57M total parameters, with 0.29M trainable parameters for adapters.\\n- **ResNet-12**: 12.81M total parameters, with 0.32M trainable parameters for adapters.\\n\\n------\\n\\n[1] Few-Shot Class-Incremental Learning from an Open-Set Perspective. ECCV2022.\\n\\n[2] Margin-based few-shot class-incremental learning with class-level overfitting mitigation. 
NeurIPS2022.\\n\\n[3] Neural collapse inspired attraction--repulsion-balanced loss for imbalanced learning. Neurocomputing2023.\\n\\n[4] Constrained Few-shot Class-incremental Learning. CVPR2022.\"}", "{\"title\": \"Response to Reviewer x4wA (2/2)\", \"comment\": \"**Q4**: Need to prove that it is the reservation of space that helps future tasks, not other factors. For example, reducing the number of novel categories in the few-shot adaptation task can also reserve space. Will this also improve performance?\\n\\n**A4**: To address this, the ablation study has been updated to include results using only FCL in Table 2 of the manuscript, which demonstrate that the reservation of embedding space during the base session contributes to the observed performance improvements. And we would also like to clarify that this space reservation occurs mainly during the base session learning process and is irrelevant to the number of novel categories in subsequent tasks.\\n\\n------\\n\\n**Q5**: The results using only FCL are missing in Table 2.\\n\\n**A5**: Thank you for catching this. We have updated Table 2 to include the results using only FCL. The updated results demonstrate that FCL alone improves the model\\u2019s performance. \\n\\n------\\n\\n**Q6**: It would be better to state how to expand $W_{t-1}$, as Figure 2(b) only shows $W_t $.\\n\\n**A6**: Thank you for your suggestion. 
Figure 2(b) illustrates that $W_t$, the weight matrix for the current task $t$, consists of two components: (1) weights from all previous classes ($W^{t-1}$), which are retained as the learned representations from previous tasks, and (2) new weights for the current task ($w^t_1,w^t_2,w^t_3,\\\\dots$), which are derived from the feature means of the samples for each new class.\\n\\n------\\n\\n**Q7**: Since the ViT backbones are receiving more attention, I wonder if MetaAdapter and FCL could be applied to methods with ViT backbones?\\n\\n**A7**: Your suggestion provides a valuable direction for future research. Our MetaAdapter framework is general and can be extended to Vision Transformer (ViT) backbones. Specifically, adapting MetaAdapter would involve inserting the lightweight neural modules to work in parallel with self-attention layers. These modules can be implemented with residual connections and structured with a down-projection matrix, a nonlinear activation function, and an up-projection matrix to enable efficient representation learning. In addition, FCL is flexible and can be directly applied to ViT features, as it focuses on regulating feature dispersion. We believe that combining our MetaAdapter framework and FCL with ViT backbones has the potential to achieve superior performance on more complex tasks. \\n\\n------\\n\\n[1] Improved Continually Evolved Classifiers for Few-Shot Class-Incremental Learning. TCSVT2023.\\n\\n[2] Rethinking Few-shot Class-incremental Learning: Learning from Yourself. ECCV2024.\"}", "{\"title\": \"Final response to authors\", \"comment\": \"Thank you for your response. While my concerns have been partially addressed, I\\u2019ve decided to raise my score to 5, which is just below the acceptance threshold. Here are the main reasons I can\\u2019t give a positive rating:\\n\\n1. 
Limited novelty: The paper\\u2019s contribution seems somewhat incremental, as the motivation behind meta-learning and feature compactness overlaps with existing work. \\n\\n2. Effectiveness of the proposed method: I\\u2019m still not fully convinced by the method\\u2019s effectiveness. The benchmark results show some performance issues, especially with the forgetting problem, which is a key challenge in continual learning. Additionally, the method seems to rely heavily on achieving high accuracy on the base classes.\\n\\n3. Training sequence concerns: The response regarding the training sequence cannot fully convince me. The ability to learn incrementally depends on what knowledge is already learned. I understand that the authors aim to mimic the process of learning new classes, but there\\u2019s an issue with the adapters. Since the adapters are attached to the backbone, any new knowledge learned by the adapters is also tied to the backbone. In the third stage, when the backbone is retrained on the full set of base classes, the adapters are frozen, which could lead to inconsistencies in how knowledge is updated. It\\u2019s also worth mentioning that in regular few-shot learning, pre-training on the full base classes, including during meta-learning, has been shown to be very effective (Meta-Baseline: Exploring Simple Meta-Learning for Few-Shot Learning, ICCV 2021).\\n\\nGiven these points, I\\u2019m leaning toward recommending rejection.\"}", "{\"metareview\": \"This submission received two negative scores and two positive scores after rebuttal. After carefully reading the paper and the review comments, the AC cannot recommend the acceptance of this submission, as the average score is below the threshold and the concerns about the proposed approach remain. 
The AC also recognizes the contributions confirmed by the reviewers, and encourages the authors to update the paper according to the discussion and submit it to an upcoming conference.\", \"additional_comments_on_reviewer_discussion\": \"This submission was fully discussed during the rebuttal period. While most concerns of Reviewers #7Rcy and #aBEC were resolved, those (novelty, motivation, and effectiveness) from Reviewers #ma3t, #6ye5, and #x4wA were only partially resolved.\"}", "{\"title\": \"Response to Reviewer 6ye5 (1/2)\", \"comment\": \"Thanks for your constructive comments! We hope our responses address your concerns, and we kindly invite you to reassess our submission. Please let us know if you have any further questions or suggestions.\\n\\n---\\n\\n**Q1**: The meaning of 'c_pseudo,' mentioned in lines L223-L224, is difficult to understand, and a formal definition would be helpful. Additionally, in Equation 4, the dimension of 'p_concat' is unclear. It appears that 'c_batch' has a shape of $B \\\\times C \\\\times d$, while 'p' has a shape of $B \\\\times d$. If this is the case, concatenation would be impossible; if not, further clarification on the structure of 'p_concat' is necessary. \\n\\n**A1**: We appreciate your suggestion and will provide a formal definition of 'c_pseudo' and clarify the dimensions of 'p_concat' in the revised version of our manuscript. To clarify, 'c_pseudo' refers to prototypes for categories not present in the current batch, derived from the mean feature vectors of these unseen classes from the previous epoch. The batch means $\\\\mathbf{c}_{\\\\text{batch}}$ are prototypes for each class in the current batch, while the original feature vectors $\\\\mathbf{p}$ represent the features of all samples in the current batch. We concatenate these three components to form 'p_concat'. 
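To make this concrete, here is a minimal NumPy sketch of building 'p_concat' and then applying a row-wise softmax over pairwise cosine similarities, as described for Equation (4) elsewhere in this discussion. The sizes `N`, `K`, `M`, `d` are hypothetical, chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, M, d = 4, 6, 32, 128               # hypothetical sizes
c_batch = rng.standard_normal((N, d))    # prototypes of classes in the batch
c_pseudo = rng.standard_normal((K, d))   # stored means of absent classes (previous epoch)
p = rng.standard_normal((M, d))          # features of all samples in the batch

# Concatenate along the row dimension -> shape ((N + K + M), d)
p_concat = np.concatenate([c_batch, c_pseudo, p], axis=0)

# Pairwise cosine similarities, then a row-wise softmax that
# excludes each row's self-similarity on the diagonal.
z = p_concat / np.linalg.norm(p_concat, axis=1, keepdims=True)
sim = z @ z.T                            # ((N+K+M), (N+K+M)) similarity matrix
np.fill_diagonal(sim, -np.inf)           # exp(-inf) = 0 removes self-pairs
probs = np.exp(sim) / np.exp(sim).sum(axis=1, keepdims=True)
```

Setting the diagonal to negative infinity before exponentiating is one simple way to realize "excluding self-similarities" while keeping each row a valid probability distribution.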
For example, if 'c_batch' has a shape of (N, d), 'c_pseudo' has a shape of (K, d), and $\\\\mathbf{p}$ has a shape of (M, d), then 'p_concat' will have a shape of ((N + K + M), d) after concatenation.\\n\\n---\\n\\n**Q2**: Figure 2 is difficult to follow, particularly regarding the adapter\\u2019s structure. The adapter appears to share convolutional layers with the backbone model, as suggested by the gray and sky-blue colors. However, the gray coloring makes this unclear.\\n\\n**A2**: To clarify, the gray components in Figure 2 represent the convolutional layers in the backbone where no adapter is applied. These layers are shared across all tasks and remain unchanged during incremental sessions. The sky-blue components indicate the convolutional layers within the blocks that contain parallel adapters. These adapters form residual connections with the corresponding convolutional layers in the backbone, enabling task-specific adaptation without modifying the structure of the backbone. \\n\\n------\\n\\n**Q3**: The channel dimensions between the adapter and backbone layers seem to differ, but Equation 13 suggests a straightforward addition, which seems inconsistent if the channel numbers do not match.\\n\\n**A3**: The addition of the adapter and backbone layers is done by merging the final $3 \\\\times 3$ kernel with the $1 \\\\times 1$ kernels. We achieve this by zero-padding the $1 \\\\times 1$ kernels and aligning them at the center of the $3 \\\\times 3$ kernel. This transformation requires both layers to have the same stride, with the $1 \\\\times 1$ layer having one pixel less padding. For example, if the $3 \\\\times 3$ layer uses padding = 1 (commonly used), the $1 \\\\times 1$ layer should have padding = 0, ensuring consistent dimensions for the addition.\\n\\n---\"}", "{\"summary\": \"This paper tackles the problem of Few-Shot Class Incremental Learning (FSCIL). 
The authors propose enhancing model plasticity during incremental learning stages by integrating and updating adapters within the backbone network. To simulate the testing scenario, the Reptile algorithm is employed to meta-learn the adapter, facilitating better initialization using data from the base session. Afterward, the adapters are frozen, and the backbone is fine-tuned on the base classes using a novel Feature Compactness Loss (FCL), complemented by a strategy to promote Flat Local Minima (FLM). The objectives of FCL and FLM are to reduce inter-class distances within the base classes, thereby preserving feature space capacity for future incremental classes. During incremental sessions, the adapters are updated and merged into the backbone through a running average. The approach demonstrates superior performance across three standard benchmarks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1.\\tEnhancing model plasticity through the insertion of adapters while maintaining stability by freezing the backbone is a sound technical approach for FSCIL.\\n2.\\tPreserving feature space for future incremental classes is an effective strategy for improving overall performance.\\n3.\\tThe proposed method demonstrates superior performance across three standard benchmarks.\", \"weaknesses\": \"Major:\\n\\n1. The primary motivation of this paper is that previous methods do not update the learned representation during incremental sessions, thereby compromising model plasticity. However, the authors have not sufficiently explored related work in the field of continual learning. There is a considerable body of research that focuses on balancing model stability and plasticity during incremental sessions while also updating the backbone, not limited to FSCIL. The absence of a discussion on these relevant works is a notable gap.\\nEven within FSCIL, prior works such as MetaFSCIL provide a relevant comparison. 
MetaFSCIL is a meta-learning-based method that not only learns meta-representations using base session data but also updates the backbone representation during incremental sessions. The meta-learning strategy employed in offline training mimics the meta-testing scenario while also balancing stability and plasticity as the backbone is updated. Conceptually, MetaFSCIL is closely aligned with the idea of meta-learning adapters proposed in this paper. However, the authors have misinterpreted MetaFSCIL\\u2019s approach in L60-63 and L113-115.\\n2. The concept of feature compactness loss, aimed at reducing excessive dispersion among base classes, is conceptually similar to the forward compatibility strategy introduced in FACT, which ensures sufficient feature space is reserved for future classes. However, the authors did not provide a discussion or comparison with FACT, despite the conceptual overlap. Including such a comparison would have strengthened the paper\\u2019s positioning and clarified its contributions relative to existing approaches.\\n\\n3. The training sequence, where adapters are trained before fine-tuning the backbone, appears non-intuitive. In the first phase, the adapters are trained while the backbone remains frozen, causing the adapters to rely heavily on the backbone\\u2019s fixed knowledge. As a result, the adapters are meta-learned to operate on top of this frozen representation. However, in the second phase, the backbone undergoes fine-tuning, altering its parameters. This shift may create incompatibility between the previously trained adapters and the newly updated backbone, potentially undermining the synergy between the two components.\", \"minor\": \"1.\\tIt is inaccurate to state \\u201crandomly initialize the adapter parameters for the j-th task,\\u201d as mentioned in L197-198. This phrasing implies that the adapters are randomly initialized for each task, which is misleading. 
In reality, the adapters should be updated iteratively using Eqs. (1) and (2) to build on previously learned knowledge rather than restarting with random parameters for every task.\\n2.\\tEq. (4) appears somewhat unclear. If the goal is to bring feature vectors closer together, it would imply that Eq. (4) encourages a more uniform probability distribution. However, it is unclear how the information is concatenated into $P_{\\\\text{concat}}$. What is the dimensionality of $P_{\\\\text{concat}}$? If the concatenation occurs along the embedding dimension, the resulting output of the cosine similarity would be a scalar. Applying softmax to a scalar value does not seem meaningful, so additional clarification on the concatenation process is needed.\\n3.\\tThe objective of the feature compactness loss and sharpness-aware minimization is to reduce the distances among base classes. However, it is unclear whether this operation could negatively impact the model\\u2019s performance on the base classes. If such degradation occurs, it is important to discuss how this issue could be mitigated to maintain performance on the base classes.\", \"questions\": \"1. How is the knowledge encoded in the adapters ensured to be task-agnostic, as claimed in L67? This concept is introduced in the paper\\u2019s introduction but is not elaborated upon in subsequent sections. A more detailed explanation is necessary to clarify how the adapters generalize across tasks without being biased toward specific ones.\\n2. How are pseudo-targets generated? The paper lacks details on the process used to obtain these pseudo-targets. 
Providing a clear description of the method for generating pseudo-targets is essential for understanding the approach and evaluating its effectiveness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer x4wA\", \"comment\": \"Thanks for your constructive comments! Below, we address your concerns point by point.\\n\\n------\\n\\n**Q1**: Regarding the $W_{t-1}$ and $W_t$ mentioned in Q6, although their dimensions are different, both $\\\\mathbf{z}^{t-1}$ and $\\\\mathbf{z}^{t}$ have $c^{t}$ weights in Eq. 10. Are there perhaps some intermediate steps missing here?\\n\\n**A1**: To clarify, during the $t$-th task, we only utilize $W_t$ for the classification weights. The term $\\\\mathbf{z}^{t-1}$ refers to the logits produced solely by the backbone, as the adapter weights from the previous task have already been integrated into the backbone during the previous phase. \\n\\n---\\n\\n**Q2**: According to the newly added results in Table 2 of the manuscript, using only FCL yields better performance than SAM+FCL and is comparable to MIS+FCL. Could you explain why this happens?\\n\\n**A2**: Thank you for raising this point. The results demonstrate that using FCL alone is indeed effective. When combined with SAM, the gradient-based perturbations, while beneficial for improving generalization, can slightly interfere with the compactness established by FCL. MIS can enhance the model's adaptability to few-shot tasks, but this focus may lead to increased forgetting of previously learned tasks. These factors explain why FCL alone yields better performance compared to SAM+FCL and remains comparable to MIS+FCL.\\n\\n---\\n\\n**Q3**: The results in Table 1 show some fluctuations in FCL's performance, while Table 2 demonstrates that it enhances generalization on classification tasks under reasonable parameter values. 
However, concerns remain regarding the stability and robustness of the method. Baseline performance should be included in Tables 1 and 2 of the General Response to clearly illustrate the range of $w_{fcl}$ values where FCL demonstrates improvements.\\n\\n**A3**: Thank you for raising this point. We have updated the General Response to include the baseline performance of NC-FSCIL [1], which has demonstrated strong performance in FSCIL. The results show that moderate $w_{fcl}$ values (e.g., around 1.0) consistently enhance both base and incremental accuracy compared to NC-FSCIL, which indicates the effectiveness of FCL in improving generalization and adaptability in FSCIL tasks.\\n\\n---\\n\\n[1] Neural Collapse Inspired Feature-Classifier Alignment for Few-Shot Class-Incremental Learning. ICLR2023.\"}", "{\"title\": \"Response to Reviewer ma3t\", \"comment\": \"Thanks for your constructive comments! Below, we address your concerns point by point.\\n\\n------\\n\\n**Q1**: The proposed method improves the trade-off between stability and plasticity by freezing the backbone, but there is no experimental validation of improved plasticity. Based on the results in Table 1 and Table 5, the improvements seem primarily driven by better performance on base classes, potentially due to bias introduced by the base classes.\\n\\n**A1**: Thank you for raising this point. As shown in Table 3 of the manuscript and Tables 1 and 2 of the General Response, our method demonstrates superior performance in both base accuracy and incremental accuracy compared to NC-FSCIL [1]. This indicates that the observed improvements are not solely due to the strong performance on base classes but also reflect better adaptability to incremental sessions. To provide a clearer view of stability across sessions, we also include the performance drop (PD) metric in Tables 1, 5, and 6 in the revised manuscript. 
For mini-ImageNet, the PD results show that our method outperforms other methods on ResNet-12 and is comparable to recent methods on ResNet-18. For CIFAR100, the PD results indicate that we achieve superior performance on ResNet-12 and remain comparable to other methods on ResNet-18 and ResNet-20. These findings demonstrate that our approach effectively enhances plasticity while maintaining competitive stability.\\n\\n------\\n\\n**Q2**: The revised version lacks a comprehensive discussion of relevant prior works, which makes it difficult to assess how the proposed method fits within the existing research landscape.\\n\\n**A2**: Thank you for the suggestion. In the revised manuscript, we include a more detailed discussion of prior works, focusing on dynamic neural networks and their strategies for enhancing plasticity. As these approaches often involve increased architectural complexity, which can reduce efficiency, we propose using lightweight, meta-initialized adapters, which allow the model to efficiently adapt to new few-shot tasks without significantly increasing the model's complexity. Additionally, we also provide a detailed comparison with MetaFSCIL to highlight the key differences of our method.\\n\\n------\\n\\n**Q3**: The use of pre-training data (e.g., ImageNet) may already include base classes or similar categories. Consequently, the first step cannot be entirely focused on training unseen classes.\\n\\n**A3**: Thank you for raising this concern. For mini-ImageNet and CIFAR100, we train the model from scratch without pre-training on ImageNet. Instead, the model is pre-trained on half of the base classes to initialize the feature extractor, while the remaining half of the classes are used for meta-training the adapters. These details are provided in Appendix A of the manuscript.\\n\\n------\\n\\n**Q4**: There is confusion regarding pseudo-targets for unseen categories derived from the previous epoch.\\n\\n**A4**: Thank you for raising this point. 
We would like to clarify that the pseudo-targets refer to the mean features of classes not present in the current batch. These mean features are computed at the end of the previous epoch. These details have been included in Section 3.4 of the revised manuscript.\\n\\n------\\n\\n**Q5**: The use of different backbones raises concerns about the fairness of comparisons.\\n\\n**A5**: Thank you for raising this concern. To ensure fairness, we provide additional results using ResNet-18 for mini-ImageNet, as shown in Table 1, and ResNet-18 and ResNet-20 for CIFAR100, detailed in Table 5 of the revised manuscript. These results demonstrate that our method improves both the final accuracy and average accuracy on the same backbone compared to other methods. \\n\\n------\\n\\n[1] Neural Collapse Inspired Feature-Classifier Alignment for Few-Shot Class-Incremental Learning. ICLR2023.\"}", "{\"title\": \"To Reviewer 6ye5\", \"comment\": \"Dear Reviewer 6ye5,\\n\\nThank you for your careful review and thoughtful feedback! We have updated our submission based on your suggestions and provided detailed responses to the newly raised concerns. As the discussion phase is nearing its conclusion, could you please check whether our response addresses your concerns? Please let us know if you have any other questions! We would be glad to answer them during the discussion period.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"title\": \"Response to Reviewer aBEC\", \"comment\": \"Thanks for your constructive comments! Below, we address your concerns point by point.\\n\\n---\\n\\n**Q1**: While the paper presents reproducible results across three datasets, some existing techniques implemented in the program are not mentioned in the manuscript. For example, the rotation technique appears to draw from previous work [1]. \\n\\n**A1**: Due to space constraints, we did not include the detailed augmentation strategies in the manuscript, but we will add them in the revised version. 
The augmentation strategies used in our work are consistent with those in prior studies.\\n\\n---\\n\\n**Q2**: A more detailed discussion of the novelty in combining existing techniques is needed. The paper should elaborate on how these techniques contribute to improved performance and reduced forgetting, highlighting any unique modifications or interactions.\\n\\n**A2**: Our MetaAdapter framework introduces a novel combination of techniques specifically designed to balance plasticity and stability in few-shot class-incremental learning (FSCIL). To enhance plasticity, we leverage **Meta-Initialized Adapters**, which provide task-agnostic initialization for adapting to new tasks, and **Feature Compactness Loss (FCL)**, which compacts both inter-class and intra-class distances during the base session to reserve embedding space for future incremental tasks. To improve stability, we incorporate **Sharpness-Aware Minimization (SAM)** to locate flatter local minima during backbone training. This improves the generalization of the learned representation and reduces forgetting, particularly when adapter parameters are merged back into the backbone for final inference.\\n\\n---\\n\\n**Q3**: The benchmark comparisons should include detailed network architecture specifications for all compared methods. Additionally, model parameter sizes should be listed, as MetaAdapter introduces additional parameters. \\n\\n**A3**: Detailed network architecture specifications for all compared methods have been included in Appendix (Table 7). 
Additionally, we provide the parameter sizes for our method to illustrate its efficiency:\\n\\n- **ResNet-18**: 11.57M total parameters, 0.29M trainable parameters for adapters\\n- **ResNet-12**: 12.81M total parameters, 0.32M trainable parameters for adapters\\n\\nThe additional parameters introduced by MetaAdapter constitute less than 3% of the total parameter count.\\n\\n---\\n\\n[1] Learning with Fantasy: Semantic-Aware Virtual Contrastive Constraint for Few-Shot Class-Incremental Learning. CVPR2023.\\n\\n[2] Few-Shot Class-Incremental Learning via Training-Free Prototype Calibration. NeurIPS2023.\\n\\n[3] Few-Shot Incremental Learning with Continually Evolved Classifiers. CVPR2021.\"}", "{\"title\": \"General Response To All Reviewers (2/2)\", \"comment\": \"**Table 1: Final Base Accuracy Across Datasets for Varying $w_{fcl}$ Values ($w_{kd} = 1.0$)**\\n\\n|Dataset|$w_{fcl}$|0.25|0.5|0.75|1.0|1.5|2.0|2.5|3.0|NC-FSCIL [8]|\\n|:-----------:|:-------:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:----------:|\\n|mini-ImageNet|Base|79.02|79.37|79.68|79.68|78.30|78.90|78.57|77.88|75.77|\\n|CIFAR100|Base|76.41|76.90|76.86|75.17|73.02|73.86|73.78|73.22|73.98|\\n|CUB200|Base|77.79|77.57|77.36|76.86|75.85|75.94|76.37|75.84|76.19|\\n\\nTable 1 shows that base accuracy is relatively stable across varying $w_{fcl}$ values for all datasets. 
However, excessively high values (e.g., $w_{fcl}= 3.0$) lead to a slight drop in performance due to over-compactness in the embedding space, which can harm the representation of base classes.\\n\\n**Table 2: Final Incremental Accuracy Across Datasets for Varying $w_{fcl}$ Values ($w_{kd} = 1.0$)**\\n\\n|Dataset|$w_{fcl}$|0.25|0.5|0.75|1.0|1.5|2.0|2.5|3.0|NC-FSCIL [8]|\\n|:-----------:|:---------:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:----------:|\\n|mini-ImageNet|Incremental|29.60|30.25|32.22|33.00|33.33|30.17|29.80|29.38|31.77|\\n|CIFAR100|Incremental|31.05|31.25|31.53|35.25|36.07|35.57|35.25|31.15|29.30|\\n|CUB200|Incremental|45.39|46.09|47.09|47.11|47.22|47.45|46.89|46.83|43.07|\\n\\nAs presented in Table 2, incremental accuracy benefits from moderate $w_{fcl}$ values (e.g., $w_{fcl}=1.0$), as this balances compactness and separation. Extremely high or low $w_{fcl}$ values reduce performance due to excessive compactness or dispersion in the feature space.\\n\\n**Table 3: Relative Angular Disparity $T(f_\\\\theta)$ for New-Class Adaptation Across Datasets for Varying $w_{fcl}$ Values ($w_{kd} = 1.0$)**\\n\\n|$w_{fcl}$|mini-ImageNet|CIFAR100|CUB200|\\n|:--------:|:-----------:|:------:|:----:|\\n|0.25|0.8212|0.7649|0.7243|\\n|0.5|0.8388|0.7492|0.7420|\\n|0.75|0.8635|0.7579|0.7645|\\n|1.0|0.9015|0.7791|0.7931|\\n|1.5|0.9353|0.8229|0.8124|\\n|2.0|0.8267|0.6631|0.7420|\\n|2.5|0.8234|0.6626|0.7136|\\n|3.0|0.7940|0.5968|0.7016|\\n\\nAs presented in Table 3, the $T(f_\\\\theta)$ metric improves with moderate $w_{fcl}$, indicating that reducing the angular separation among base-class features appropriately supports better adaptation to new classes. 
However, excessively high $w_{fcl}$ compresses base-class features too much and reduces the model's ability to adapt to new-class representations effectively.\\n\\n**Table 4: Inter-class Angular Distance Among Base-Class Prototypes Across Datasets for Varying $w_{fcl}$ Values ($w_{kd} = 1.0$)**\\n\\n| $w_{fcl}$ | mini-ImageNet | CIFAR100 | CUB200 |\\n| :--------: | :-----------: | :------: | :----: |\\n|0.25|0.4252|0.4613|0.6375|\\n|0.5|0.3952|0.4568|0.5561|\\n|0.75|0.3826|0.4389|0.5255|\\n|1.0|0.3588|0.4367|0.4983|\\n|1.5|0.3238|0.3944|0.4730|\\n|2.0|0.3255|0.3254|0.4496|\\n|2.5|0.3323|0.3353|0.4404|\\n|3.0|0.3055|0.3132|0.4304|\\n\\nFrom Table 4, the inter-class angular distance among base-class prototypes decreases as $w_{fcl}$ increases, indicating that FCL effectively compacts the embedding space.\\n\\nWe hope that our responses address your concerns. Please let us know if you have any further questions or suggestions.\\n\\n[1] Forward compatible few-shot class-incremental learning. CVPR2022.\\n\\n[2] Few-Shot Class-Incremental Learning from an Open-Set Perspective. ECCV2022.\\n\\n[3] Learning with Fantasy: Semantic-Aware Virtual Contrastive Constraint for Few-Shot Class-Incremental Learning. CVPR2023.\\n\\n[4] Momentum contrast for unsupervised visual representation learning. CVPR2020.\\n\\n[5] Margin-based few-shot class-incremental learning with class-level overfitting mitigation. NeurIPS2022.\\n\\n[6] Neural collapse inspired attraction-repulsion-balanced loss for imbalanced learning. Neurocomputing2023.\\n\\n[7] CLOSER: Towards Better Representation Learning for Few-Shot Class-Incremental Learning. ECCV2024.\\n\\n[8] Neural Collapse Inspired Feature-Classifier Alignment for Few-Shot Class-Incremental Learning. 
ICLR2023.\\n\\n------\"}", "{\"title\": \"Response to Reviewer ma3t (2/2)\", \"comment\": \"**Q5**: It is inaccurate to state \\u201crandomly initialize the adapter parameters for the j-th task,\\u201d as mentioned in L197-198.\\n\\n**A5**: Thank you for pointing this out. We agree that our statement in Lines 197-198 was unclear. The adapter parameters $\\\\theta_a$ are initialized once with a shared random initialization for all tasks, rather than separately for each task. We have revised the manuscript to clarify this.\\n\\n------\\n\\n**Q6**: Equation (4) and the dimensionality of $P_{concat}$ are unclear, especially regarding the application of softmax.\\n\\n**A6**: To clarify, 'c_pseudo' represents prototypes for classes not present in the current batch, derived from the mean feature vectors of these classes from the previous epoch. The batch means 'c_batch' are prototypes for each class in the current batch, and $\\\\mathbf{p}$ represents the feature vectors of all samples in the current batch. For example, if 'c_batch' has a shape of (N, d), 'c_pseudo' has a shape of (K, d), and $\\\\mathbf{p}$ has a shape of (M, d), then 'p_concat' will have a shape of ((N + K + M), d) after concatenation. Regarding the cosine similarity, we compute it between all pairs of vectors in 'p_concat', which results in a similarity matrix of shape ((N + K + M), (N + K + M)). The softmax is applied row-wise to these cosine similarities (excluding self-similarities on the diagonal) to convert them into a probability distribution, which is then used in our loss function.\\n\\n------\\n\\n**Q7**: The objective of FCL is to reduce the distances among base classes, but it is unclear if this impacts performance on base classes. \\n\\n**A7**: As shown in the **General Response**, we conducted experiments on base accuracy across varying $w_{fcl}$ values (Table 1). The results show that base accuracy remains relatively stable across all datasets. 
However, excessively high values (e.g., $w_{fcl}= 3.0$) lead to a slight drop in performance due to over-compactness in the embedding space, which can harm the representation of base classes.\\n\\n---\\n\\n**Q8**: How is the knowledge encoded in adapters ensured to be task-agnostic, as claimed in L67? \\n\\n**A8**: As explained in **Section 3.2**, we achieve task-agnostic knowledge by using meta-learning. In the first phase, we create few-shot tasks by randomly sampling instances from base classes and then use the Reptile algorithm to train the adapters. This approach helps the adapters acquire generalizable parameters that can quickly adapt to new tasks.\\n\\n---\\n\\n**Q9**: The process for generating pseudo-targets is unclear. \\n\\n**A9**: We would like to clarify that the process of generating pseudo-targets is explained in detail in **Section 3.2** of the manuscript. Briefly, we partition the base label space $Y_0$ into non-overlapping subsets $ \\\\hat{Y}_1, \\\\hat{Y}_2, \\\\ldots, \\\\hat{Y}_C $. For each subset $ \\\\hat{Y}_i $, we randomly sample $ \\\\hat{K} $ examples to form an $ \\\\hat{N} $-way, $ \\\\hat{K} $-shot support set $ \\\\mathcal{S}^i $. These support sets $ \\\\mathcal{S}^1, \\\\mathcal{S}^2, \\\\ldots, \\\\mathcal{S}^C $ are then used to train the adapters using the Reptile algorithm. This approach ensures consistency with the few-shot learning setup and enables efficient task adaptation.\\n\\n------\\n\\n[1] A Survey on Few-Shot Class-Incremental Learning. 
Neural Networks 2024.\"}", "{\"comment\": \"Dear Authors,\\n\\nThank you for providing such detailed responses and a thorough revision.\\nI think the revised manuscript explains the proposed method more effectively than the previous version, and the fairness issues in the experimental section have been fully addressed.\\nAs a result, I have decided to increase the rating to 'marginally below the acceptance threshold.'\\nI still hesitate to give a positive rating because it seems that the experimental results do not sufficiently demonstrate the effectiveness of the proposed method.\\nIn particular, regarding 'Performance Decrease,' the proposed method is often outperformed by other approaches, indicating vulnerability to catastrophic forgetting.\\nSince mitigating catastrophic forgetting is one of the primary goals of FSCIL, I do not believe the proposed method is effective in the FSCIL scenario.\"}", "{\"title\": \"Many thanks! We will improve our paper accordingly.\", \"comment\": \"Dear Reviewer 7Rcy,\\n\\nThank you very much for your kind reply! We will continue to strive to improve our paper according to the constructive reviews.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"title\": \"Follow-up comments for the Authors\", \"comment\": \"Thank you for your detailed response. While I appreciate the effort, some of my concerns remain only partially addressed. I would like to highlight a few points for further clarification:\\n\\n1. Plasticity vs. Base Performance\\nIn your response, you mention that the proposed method improves the tradeoff between stability and plasticity by freezing the backbone. However, it has not been experimentally validated that the proposed method demonstrates better plasticity. Based on the reported results in Table 1 and Table 5, the improvements observed in terms of (base, last, and average performance) (+12.08, +11.82, +11.9) and (+9.55, +9.23, +8.94) over MetaFSCIL seem primarily driven by better performance on the base classes. 
This suggests that the gains may not be related to improved plasticity, but rather to the bias introduced by the base classes, as pointed out by Reviewer 6ye5. I believe this discrepancy needs further clarification.\\n\\n2. Comparison with Prior Work\\nAs noted in the initial review, the comparison with relevant prior works is not fully incorporated in the revised version of the paper. Without this, it is challenging to assess how your approach fits within the existing body of research. I encourage you to include a more comprehensive discussion of prior work to strengthen the paper's context.\\n\\n3. Training Sequence and Knowledge Misalignment\\nRegarding the training sequence, I remain unconvinced by the response. The pre-training data (e.g., ImageNet) may already include base classes or similar categories to those found in datasets like Mini-ImageNet or CIFAR100. Consequently, the first step cannot be entirely focused on training unseen classes, as this could introduce knowledge misalignment between the adapters and the representations. This concern, which I raised in my initial review, has not been fully addressed. I believe it would be helpful for the authors to provide further justification or clarity on this point.\\n\\n4. Pseudo-Targets and Confusion in Terminology\\nThere seems to be some confusion regarding the pseudo-targets concept. As mentioned in Sec. 3.3 (rather than Sec. 3.2), I would like to clarify that my comment referred to the pseudo-targets for unseen categories, which are derived from the average feature representations computed in the previous epoch. If we are treating the current batch as unseen, I am unclear how the pseudo-targets from the previous epoch can be determined in this context. Further explanation would be helpful here.\\n\\n5. Fairness with Different Backbones\\nThe discrepancy between different backbones (ResNet-18 and ResNet-12) in terms of fairness, as pointed out by Reviewer 6ye5, has not been adequately addressed. 
I strongly encourage the authors to carefully consider this issue and revise the paper accordingly, either in this submission or in future iterations.\\n\\nIf these concerns can be thoroughly addressed, I would be happy to reconsider the paper's evaluation.\"}", "{\"title\": \"To Reviewer x4wA\", \"comment\": \"Dear Reviewer x4wA,\\n\\nThank you for your careful review and thoughtful feedback! We have provided detailed responses to the newly raised concerns. As the discussion phase is nearing its conclusion, could you please check our response to see if we address your concerns? Please let us know if you have any other questions! We would be glad to answer them during the discussion period.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"summary\": \"This paper introduces a novel framework for few-shot class incremental learning (FSCIL) that addresses the challenges of plasticity and overfitting through meta-initialized adapters. The approach consists of three phases: meta-training adapters during the base session to obtain generalizable initial parameters, backbone pretraining with feature compactness loss to prevent feature space dispersion, and few-shot adaptation in incremental sessions where adapters are fine-tuned while preserving backbone knowledge. The framework demonstrates state-of-the-art performance across multiple benchmark datasets.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. Novel architectural design that combines meta-learning with adapter modules in a way that enhances both plasticity and stability.\\n\\n2. Three-phase training strategy that systematically addresses key FSCIL challenges. The strategy integrates meta-learning for initialization, feature space management during pretraining, and adaptation during incremental sessions.\\n\\n3. Thorough empirical validation across multiple benchmark datasets with ablation studies. 
The experimental results demonstrate consistent performance improvements across different scenarios and provide insights into the contribution of each component.\", \"weaknesses\": \"1. While the paper presents reproducible results across three datasets, some existing techniques implemented in the program are not mentioned in the manuscript. For example, the rotation technique utilized in the implementation appears to draw from previous work [1]. The authors should include all the implementation details in the paper and illustrate how these designs facilitate their method.\\n\\n[1] Learning with Fantasy: Semantic-Aware Virtual Contrastive Constraint for Few-Shot Class-Incremental Learning\\n\\n2. A more detailed discussion of the novelty in the existing techniques combination should be made (meta-learning [2], SAM [3], and feature compactness loss in FSCIL [4][5][6]...). The paper should elaborate on how the specific combination and adaptation of these techniques contribute to improved performance and reduced forgetting in the FSCIL context, for example, highlighting any unique modifications or interactions between these designs.\\n\\n[2] On First-Order Meta-Learning Algorithms;\\n[3] Sharpness-Aware Minimization for Efficiently Improving Generalization;\\n[4] Forward Compatible Few-Shot Class-Incremental Learning;\\n[5] Dycr: a dynamic clustering and recovering network for few-shot class-incremental learning;\\n[6] Few-Shot Class-Incremental Learning from an Open-Set Perspective;\\n...\\n\\n3. The benchmark comparisons should include detailed network architecture specifications for all compared methods. In addition, the model parameter size should also be listed since additional parameters are introduced in MetaAdapter. 
This additional information would facilitate a better understanding of the relative complexity and computational requirements across different approaches.\", \"questions\": \"Please refer to the weaknesses of the paper.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"Nil\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"General Response To All Reviewers (1/2)\", \"comment\": \"We would like to sincerely thank all the reviewers for their valuable feedback and constructive suggestions. Below, we provide an explanation of our feature compactness loss (FCL) and new experimental results to address your concerns.\\n\\n------\\n\\n**Q**: While the feature compactness loss (FCL) demonstrates notable performance improvements, its motivation and comparisons with existing methods need further clarification and exploration. \\n\\n**A**: To clarify, we first summarize the core strategies of existing methods: \\n\\n1. **FACT [1]**: FACT utilizes manifold mixup during the base session to create virtual new classes, thereby making room for new categories in subsequent incremental learning stages.\\n2. **ALICE [2]**: This method employs an angular penalty to simultaneously minimize the distance between intra-class feature vectors while maximizing the distance between inter-class vectors. \\n3. **SAVC [3]**: SAVC leverages the MoCo [4] framework for self-supervised contrastive learning to enhance feature representation, with the goal of increasing inter-class distances and reducing intra-class distances during the base session.\\n\\nAs CLOM [5] suggests, while increasing inter-class distances may improve base-class performance, it often leads to overfitting on base classes, which can harm generalization to few-shot new classes. 
Additionally, this strategy overlooks the issue of **Minority Collapse [6]**, where few-shot class features cluster too tightly in imbalanced scenarios, potentially resulting in performance degradation for new classes. Our feature compactness loss (FCL) takes a different approach by compacting both inter-class and intra-class distances during the base session. This design prevents the embedding space from becoming overly dispersed, effectively enabling the model to better adapt to new tasks in future incremental sessions.\\n\\nTo substantiate the effectiveness of our feature compactness loss (FCL), we have conducted additional experiments as follows:\\n\\n- Base accuracy for different datasets across varying $w_{fcl}$ values.\\n- Incremental accuracy for different datasets across varying $w_{fcl}$ values.\\n- Relative angular disparity $T(f_{\\\\theta})$ [7] for different datasets across varying $w_{fcl}$ values. This metric measures the relative angular disparity between new-class samples and base-class prototypes to assess adaptability for new categories.\\n- Inter-class angular distance among base-class prototypes for different datasets across varying $w_{fcl}$ values.\"}", "{\"title\": \"Final response to authors\", \"comment\": \"Thank you to the authors for their detailed responses. Most of my concerns have been addressed, and I have raised my score to 6: marginally above the acceptance threshold. However, considering the performance of the method in the ablation study and its potential to harm classification, I still believe that the stability and generalizability of the method could be improved.\"}", "{\"comment\": \"Thank you to the authors for their efforts in the rebuttal. While some of my concerns have been addressed, several issues still remain:\\n 1. Regarding the $ W_{t-1} $ and $ W_t $ mentioned in Q6, I am actually curious why, although their dimensions are different, both $ z^{t-1} $ and $ z^t $ have $ c^t $ weights in Eq.10. 
Are there perhaps some intermediate steps missing here?\\n 2. According to the newly added results in Table 2 of the article, using only FCL yields better performance than SAM+FCL and is comparable to MIS+FCL. Could the authors please provide further explanation on this?\\n 3. In Table 1 of the General Response, the results don't seem very stable. Together with Table 2, they show that under reasonable parameter values, FCL can enhance the model's generalization ability with minimal harm to its classification performance. This is a somewhat clever method, but I'm not quite sure about its stability and robustness. In addition, could the authors provide the baseline performance in Tables 1 and 2 of the General Response, so that we can observe the range of $ w_{fcl} $ where FCL brings improvements?\"}", "{\"title\": \"To Reviewer ma3t\", \"comment\": \"Dear Reviewer ma3t,\\n\\nThank you for your careful review and thoughtful feedback! We have updated our submission based on your suggestions and provided detailed responses to the newly raised concerns. As the discussion phase is nearing its conclusion, could you please check our response to see if we address your concerns? We are happy to address further questions you might have.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Post-Rebuttal Review\", \"comment\": \"Most of my concerns have been adequately addressed in the rebuttal. I am willing to raise my recommendation to a 6: marginally above the acceptance threshold.\"}", "{\"title\": \"Response to Reviewer 7Rcy (1/2)\", \"comment\": \"Thanks for your constructive comments! Below, we address your concerns point by point.\\n\\n---\\n\\n**Q1**: Limited Novelty in Meta-Learning Approach. The application of meta-learning for adapter initialization resembles existing methods. 
Clarification on how this approach differs from established frameworks would strengthen the contribution.\\n\\n**A1:** Existing methods based on meta-learning, such as MetaFSCIL [1], focus on meta-testing simulation by sampling sequences of incremental tasks from base classes during the base phase. However, the inherent discrepancies in data distributions make it difficult to use base session data to accurately simulate real incremental sessions. Instead of mimicking the incremental evaluation process, we use meta-learning to obtain meta-initialized adapters that serve as a generalizable starting point for expanding and refining feature representations. This design helps reduce overfitting and enhances the model's plasticity for adapting to new tasks efficiently. \\n\\n------\\n\\n**Q2**: Training Complexity. The three-phase training process, including feature compactness loss and sharpness-aware minimization, adds complexity. This may pose challenges for implementation in resource-constrained environments.\\n\\n**A2**: Our training process is divided into three phases:\\n\\n1. **Phase 1 (Adapter Meta-Training)**: During the base session, we first construct few-shot tasks by randomly sampling instances from each base class and then train the adapters using meta-learning algorithms to obtain generalizable initial parameters. \\n\\n2. **Phase 2 (Backbone Pretraining)**: In this phase, we focus on training the backbone. We introduce the feature compactness loss (FCL) to bring feature representations closer together and apply sharpness-aware minimization (SAM) to find flatter minima.\\n\\n3. **Phase 3 (Few-Shot Adaptation)**: We only fine-tune the lightweight adapters for each new task to expand the current representations to encompass new class features.\\n\\nPhase 1 is computationally efficient because the sampled few-shot tasks are small. 
The feature compactness loss and sharpness-aware minimization are applied solely during Phase 2 to train the backbone. In Phase 3, we fine-tune the adapters with just 1-5 iterations, which enables rapid adaptation for few-shot new classes.\\n\\n---\\n\\n**Q3**: Base Task Performance. The framework appears to underperform in base classification tasks compared to other methods using the same backbone. Understanding the reasons for this discrepancy is crucial, as base task accuracy is vital for incremental learning stability.\\n\\n**A3:** We would like to clarify that our framework achieves comparable performance to NC-FSCIL [2] and outperforms other compared methods on base classification tasks. In the context of FSCIL, while base task performance is important, the most critical challenge is mitigating catastrophic forgetting. Simply optimizing for base session performance may not lead to better results in the final session, as overfitting to base classes can reduce adaptability to new tasks. \\n\\nFurthermore, the improved base task performance can be attributed to several factors. We utilize a prototype-based classifier with cosine similarity, where we re-scale the normalized features using a preset scale factor $\\\\tau$. The choice of $\\\\tau$ controls the separation between classes, which improves feature discrimination. For CIFAR100 and miniImageNet, we set $\\\\tau = 64$, while for CUB200, we use $\\\\tau = 32$ to better adapt to the characteristics of each dataset. In addition, we incorporate sharpness-aware minimization (SAM) to find flatter local minima by adding gradient-based perturbations to the parameters, which enhances the model\u2019s generalization in base classification tasks.\\n\\n------\"}", "{\"title\": \"Response to Reviewer 6ye5\", \"comment\": \"Thanks for your constructive comments! 
Below, we address your concerns point by point.\\n\\n------\\n\\n**Q1**: The paper should be written in a way that is self-explanatory and easy to follow without requiring additional explanation.\\n\\n**A1**: Thank you for the suggestion. In the revised manuscript, we elaborate on the motivation behind the Feature Compactness Loss (FCL) and provide a clearer explanation of its implementation. \\n\\n------\\n\\n**Q2**: The fairness issue remains inadequately addressed. \\n\\n**A2**: Thank you for raising this concern. To ensure fairness, we provide additional results using ResNet-18 for mini-ImageNet, as shown in Table 1, and ResNet-18 and ResNet-20 for CIFAR100, detailed in Table 5 of the revised manuscript. These results demonstrate that our method improves both the final accuracy and average accuracy on the same backbone compared to other methods.\\n\\n------\\n\\n**Q3**: The superior results in the last session appear to rely heavily on the strong performance in the base session. It would be better to include an additional metric, such as performance drop (PD) across incremental sessions.\\n\\n**A3**: Thank you for raising this concern. NC-FSCIL [1] is a strong recent baseline, and we observe consistent improvements with our method across multiple datasets. On mini-ImageNet, our method achieves a 0.1% improvement in the base session and an average improvement of 2.70% across all sessions. On CIFAR100, we achieve a 1.53% improvement in the base session and an average improvement of 3.09% across all sessions. On CUB200, we see a 0.18% improvement in the base session and an average improvement of 2.26% across all sessions. These results show that the advantage of our method indeed grows during incremental training. Moreover, as shown in Table 3 of the manuscript and Tables 1 and 2 of the General Response, our method achieves superior performance in both base accuracy and incremental accuracy compared to NC-FSCIL. 
This indicates that the observed improvements are not solely due to the strong performance on base classes but also reflect better adaptability to incremental sessions. To provide a clearer view of stability across sessions, we also include the performance drop (PD) metric in Tables 1, 5, and 6 in the revised manuscript. For mini-ImageNet, the PD results show that our method outperforms other methods on ResNet-12 and is comparable to recent methods on ResNet-18. For CIFAR100, the PD results indicate that we achieve superior performance on ResNet-12 and remain comparable to other methods on ResNet-18 and ResNet-20. These findings demonstrate that our approach effectively enhances plasticity while maintaining competitive stability.\\n\\n------\\n\\n[1] Neural Collapse Inspired Feature-Classifier Alignment for Few-Shot Class-Incremental Learning. ICLR2023.\"}", "{\"title\": \"Response to Reviewer 7Rcy (2/2)\", \"comment\": \"**Q4**: Limited Comparison with Recent Methods. The paper compares MetaAdapter with only one method from 2024. Including a broader range of recent methods would provide a more comprehensive evaluation of its performance.\\n\\n**A4**: We incorporated CEC+ [3] and OrCo [4] as baselines across all three datasets (mini-ImageNet, CIFAR100, and CUB200), while comparisons with C-FSCIL [5] were conducted on mini-ImageNet and CIFAR100, as the original work did not evaluate on CUB200. These updated results have been included in the experimental results of the manuscript. 
As can be seen from the tables, MetaAdapter performs better than other methods across all three datasets.\\n\\nOn mini-ImageNet\\n\\n| Methods | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |\\n| :---------: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\\n| CEC+ | 82.65 | 77.82 | 73.59 | 70.24 | 67.74 | 64.82 | 61.91 | 59.96 | 58.35 |\\n| C-FSCIL | 76.40 | 71.14 | 66.46 | 63.29 | 60.42 | 57.46 | 54.78 | 53.11 | 51.41 |\\n| OrCo | 83.30 | 70.80 | 66.90 | 64.32 | 62.28 | 60.46 | 58.40 | 58.02 | 58.08 |\\n| MetaAdapter | 84.12 | 79.95 | 75.97 | 72.61 | 69.68 | 66.88 | 64.12 | 62.39 | 61.01 |\\n\\nOn CIFAR100\\n\\n| Methods | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |\\n| :---------: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\\n| CEC+ | 81.25 | 77.23 | 73.30 | 69.41 | 66.69 | 63.93 | 62.16 | 59.62 | 57.41 |\\n| C-FSCIL | 77.47 | 72.40 | 67.47 | 63.25 | 59.84 | 56.95 | 54.42 | 52.47 | 50.47 |\\n| OrCo | 80.08 | 71.46 | 64.95 | 58.65 | 57.60 | 56.68 | 56.16 | 54.62 | 52.19 |\\n| MetaAdapter | 84.05 | 78.86 | 75.16 | 71.64 | 68.29 | 65.31 | 63.54 | 61.52 | 59.20 |\\n\\nOn CUB200\\n\\n| Methods | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |\\n| :---------: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\\n| CEC+ | 79.46 | 76.11 | 73.12 | 69.31 | 67.97 | 65.86 | 64.50 | 63.83 | 62.20 | 62.00 | 60.97 |\\n| OrCo | 75.59 | 72.74 | 64.58 | 60.12 | 60.16 | 58.04 | 58.41 | 57.96 | 56.97 | 57.99 | 57.93 |\\n| MetaAdapter | 80.63 | 76.85 | 73.62 | 69.75 | 69.13 | 66.23 | 65.67 | 64.51 | 62.29 | 62.58 | 61.70 |\\n\\n------\\n\\n[1] MetaFSCIL: A Meta-Learning Approach for Few-Shot Class Incremental Learning. CVPR2022.\\n\\n[2] Neural Collapse Inspired Feature-Classifier Alignment for Few-Shot Class-Incremental Learning. ICLR2023.\\n\\n[3] Improved Continually Evolved Classifiers for Few-Shot Class-Incremental Learning. 
TCSVT2023.\\n\\n[4] OrCo: Towards Better Generalization via Orthogonality and Contrast for Few-Shot Class-Incremental Learning. CVPR2024.\\n\\n[5] Hersche et al., Constrained few-shot class-incremental learning, CVPR2022.\"}", "{\"summary\": \"This paper proposes to use meta-adapter to address the few-shot class incremental learning problem. Additionally, a feature compactness loss is introduced to help the accommodation of new categories. By leveraging the generalization capabilities of meta-learning training methods, the proposed method enhances the performance of few-shot class incremental tasks.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is well-written and easy to follow.\\n2. The figures in the paper are concise and clear.\\n3. The idea of feature compactness loss and adapter integration is impressive.\\n4. The ablation studies in Table 2 and Figure 4 are extensive.\", \"weaknesses\": \"1. Two highly related works are not compared [1,2], which have the same task setting.\\n2. Although the feature compactness loss (FCL) is impressive and shows good performance improvement, the analysis of FCL is insufficient:\\n - Why don't more similar inter-class feature representations affect classification?\\n - Need further proof or experiment to demonstrate that more similar inter-class feature representations reserve embedding space. Additionally, will more similar inter-class feature representations possibly lead to the overall embedding space shrinking, similar to collapse in contrastive learning?\\n - Need to prove that it is the reservation space that helps future tasks and not others. For example, reducing the number of novel categories in the few-shot adaptation task can also reserve space. Will this also improve performance?\\n - The results using only FCL are missing in Table 2.\\n - FCL is an interesting idea that I think needs further discussion.\\n3. 
It would be better to state how to expand $W_n^{t-1}$, since Figure 2(b) only shows $W_n^{t}$.\\n4. typo in L95: \\\"using this knowledge to improve leanring efficiency\\\"\\n\\n\\n[1] Improved Continually Evolved Classifiers for Few-Shot Class-Incremental Learning. TCSVT2023. \\\\\\n[2] Rethinking Few-shot Class-incremental Learning: Learning from Yourself. ECCV2024.\", \"questions\": \"1. Please address the Weaknesses.\\n2. Since the ViT backbones are receiving more attention, I wonder if meta-adapter and FCL could be migrated to methods with ViT backbones?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
88TC1AWV27
PICASO: Permutation-Invariant Context Composition with State Space Models
[ "Tian Yu Liu", "Alessandro Achille", "Matthew Trager", "Aditya Golatkar", "Luca Zancato", "Stefano Soatto" ]
Providing Large Language Models with relevant contextual knowledge at inference time has been shown to greatly improve the quality of their generations. This is often achieved by prepending informative passages of text, or 'contexts', retrieved from external knowledge bases to their input. However, processing additional contexts online incurs significant computation costs that scale with their length. State Space Models (SSMs) offer a promising solution by allowing a database of contexts to be mapped onto fixed-dimensional states from which to start the generation. A key challenge arises when attempting to leverage information present across multiple contexts, since there is no straightforward way to condition generation on multiple independent states in existing SSMs. To address this, we leverage a simple mathematical relation derived from SSM dynamics to compose multiple states into one that efficiently approximates the effect of concatenating raw context tokens. Since the temporal ordering of contexts can often be uninformative, we enforce permutation-invariance by efficiently averaging states obtained via our composition algorithm across all possible context orderings. We evaluate our resulting method on WikiText and MSMARCO in both zero-shot and fine-tuned settings, and show that we can match the strongest performing baseline while enjoying on average $5.4\times$ speedup.
[ "State Space Models", "Composition", "Retrieval" ]
Accept (Poster)
https://openreview.net/pdf?id=88TC1AWV27
https://openreview.net/forum?id=88TC1AWV27
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wc8WQRh8aw", "t3EbeZQHAd", "rCyYdR6Xph", "mTS73QUBk2", "kKuKAhSlAH", "jEj2XNSdAT", "j9oJ5YUDtv", "h3YhxtQQxh", "gozZissSbB", "buyj1lOaeR", "a1yZfcf1zF", "WPT3fgGncC", "WG9cGYFX6s", "VXDt27dB7g", "UDR6JbtuoW", "SenL8vWxpj", "SKjxYC0eOT", "S8bMV3HhQB", "RxZ8XyXvll", "RcrFBdIHGB", "P17AY3EXf9", "Ovg75wHsrV", "MeBJs4tEAJ", "L2QxXuMqy2", "HjjI4XWrmB", "Ax8eeJRJYo", "Ai0jp40Oda", "8X21JgNfJp", "6kxASE6XHY", "57zebaC5Eh", "07gF29G62E" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_comment", "meta_review", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1733211892262, 1731472470716, 1732422357483, 1730606973947, 1731473232799, 1737523566305, 1731479013809, 1734933594259, 1730629697789, 1731471928374, 1730782585247, 1733203584959, 1732404435699, 1731475212616, 1732942566475, 1732040797694, 1732942638345, 1732042022825, 1732041271376, 1732041110971, 1732422277609, 1732041205422, 1732942551025, 1733207303399, 1732042077285, 1733207964630, 1733205180131, 1731281762000, 1733206784887, 1733205884617, 1733209222438 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3271/Authors" ], [ "ICLR.cc/2025/Conference/Submission3271/Reviewer_h4ZY" ], [ "ICLR.cc/2025/Conference/Submission3271/Authors" ], [ "ICLR.cc/2025/Conference/Submission3271/Reviewer_h4ZY" ], [ "ICLR.cc/2025/Conference/Submission3271/Reviewer_2fh2" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3271/Reviewer_h4ZY" ], [ 
"ICLR.cc/2025/Conference/Submission3271/Area_Chair_u611" ], [ "ICLR.cc/2025/Conference/Submission3271/Reviewer_2fh2" ], [ "ICLR.cc/2025/Conference/Submission3271/Reviewer_2fh2" ], [ "ICLR.cc/2025/Conference/Submission3271/Reviewer_FYyF" ], [ "ICLR.cc/2025/Conference/Submission3271/Reviewer_uYJU" ], [ "ICLR.cc/2025/Conference/Submission3271/Reviewer_h4ZY" ], [ "ICLR.cc/2025/Conference/Submission3271/Authors" ], [ "ICLR.cc/2025/Conference/Submission3271/Authors" ], [ "ICLR.cc/2025/Conference/Submission3271/Authors" ], [ "ICLR.cc/2025/Conference/Submission3271/Authors" ], [ "ICLR.cc/2025/Conference/Submission3271/Authors" ], [ "ICLR.cc/2025/Conference/Submission3271/Authors" ], [ "ICLR.cc/2025/Conference/Submission3271/Authors" ], [ "ICLR.cc/2025/Conference/Submission3271/Authors" ], [ "ICLR.cc/2025/Conference/Submission3271/Authors" ], [ "ICLR.cc/2025/Conference/Submission3271/Authors" ], [ "ICLR.cc/2025/Conference/Submission3271/Reviewer_2fh2" ], [ "ICLR.cc/2025/Conference/Submission3271/Authors" ], [ "ICLR.cc/2025/Conference/Submission3271/Authors" ], [ "ICLR.cc/2025/Conference/Submission3271/Authors" ], [ "ICLR.cc/2025/Conference/Submission3271/Reviewer_uYJU" ], [ "ICLR.cc/2025/Conference/Submission3271/Authors" ], [ "ICLR.cc/2025/Conference/Submission3271/Reviewer_uYJU" ], [ "ICLR.cc/2025/Conference/Submission3271/Reviewer_uYJU" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for the clarification.\\n\\nIndeed there might be some misunderstandings, the first of which we believe stems from distinguishing between (a) intuition for our claim on the desirability of permutation-invariance among retrieved documents, and (b) empirical evidence in support of this claim.\\n\\n(a): This comes simply from the fact that retrieved documents are conditionally independent of one another, by construction. 
Hence, similar to how ordering in multiple choice questions should not influence the final answer, neither should the relative ordering among independently retrieved documents.\\n\\n(b): PIConcat / PICASO significantly outperform their ordered counterparts Concat / Soup respectively in all our experiments, providing strong empirical evidence on the advantages of incorporating permutation-invariance.\\n\\n> fundamental assumption ... (that) independent documents can work as well\\n\\n> there is no clear evidence to show this is a good approximation to how the model actually works by using the contextual information.\\n\\nWe believe this to be another main source of misunderstanding. We do not compose documents independently of one another. As an example, given documents A and B, we do not compose {A,B}. Instead, PIConcat / PICASO efficiently composes {A$\\\\cdot$B , B$\\\\cdot$A}, where $\\\\cdot$ denotes exact / approximate concatenation respectively. As such, contextual information is incorporated within both A$\\\\cdot$B and B$\\\\cdot$A, while at the same time, the composed state remains invariant to document ordering.\"}", "{\"title\": \"Meant for the transformer baseline comparison\", \"comment\": \"Thanks for your comment. I think the comment is not clear.\\n\\nThe authors did compare with a transformer baseline, and the kv cache question is regarding that transformer baseline number.\"}", "{\"comment\": \"Thank you for reviewing our response. We would greatly appreciate it if you could share any remaining feedback or suggestions. Otherwise if we have satisfactorily addressed your concerns, we hope that you would consider increasing your score to support our paper's acceptance.\"}", "{\"summary\": \"This paper proposes a way to speed up inference for RAG use cases for state space models (SSMs). The speed up comes from feeding in states of the retrieved chunks instead of doing inference from raw tokens to best utilize the properties of SSMs. 
To address the combinatorial number of possible orderings in concatenation, the authors proposed two ways of computing the permutations -- exhaustive and cyclic. The authors demonstrate the proposed approach on two datasets, which shows improvements over naive concatenation of states. The performance compared to baseline concatenation is still worse but faster; the gap closes after finetuning.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The proposed method shows significant speed up over the naive baseline\", \"weaknesses\": \"The writing can be improved, in particular in the experiment section. It is not clear what the chunk size is, what the inference token size is (how many tokens in context and how many for inference), and how the evaluation is set up. For WikiText, it is not clear what is used as the query and what is used as the candidate pool for performing retrieval.\\n\\nThe evaluation feels incomplete. It seems that both WikiText and MSMARCO are evaluated based on the test set loss. Since the proposed method fits in the RAG setting, it would be great to show the actual QA performance instead of the cross entropy as the result.\\n\\nIt is not clear what the speed baseline setup is. Does it use KV cache?\", \"questions\": \"See questions in the above section\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Good defense. But if it does not use KV cache, it would scale in O(n^3). Anyway, my point is clear. I do not think you read the paper carefully.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Thanks for the response.\\n\\nMaybe kv cache is the wrong word. 
What I meant is that since this method retrieves pre-computed states, one can compare with a similar transformer baseline, where the kv caches for documents are pre-computed and then can be directly loaded into the model for generation. If the original timing already follows this setup, then this question is addressed.\"}", "{\"metareview\": \"PICASO proposes a novel method for efficiently composing and generating from multiple retrieved document chunks using State Space Models (SSMs). The key claims are that it can match the performance of document concatenation while achieving a 5.4x speedup, and that it enables permutation-invariant composition of document states. The paper's main strengths include addressing an important practical problem (efficient RAG with SSMs), strong theoretical foundations with detailed analysis, and comprehensive empirical validation showing significant speedups while maintaining performance. The initial weaknesses included: insufficient experimental validation on real RAG applications beyond perplexity metrics, missing baseline comparisons with similar approaches, lack of clarity around retriever choices and document statistics, and incomplete analysis of the order-invariance assumption. However, during rebuttal, the authors added substantial new experiments and analyses that addressed many of these concerns, including QA task evaluations, ablations with different retrievers, and detailed document statistics. The scores (5, 6, 8, 5) suggest a borderline paper, but the thorough technical contribution combined with strong rebuttal responses addressing core concerns merit acceptance as a poster.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers raised several significant technical concerns that sparked extensive discussion. 
Reviewer uYJU questioned the fundamental assumption of order-invariance and requested more real RAG experiments beyond perplexity metrics - the authors responded by adding QA task evaluations and clarifying their theoretical justification for order-invariance, though some disagreement remained about the strength of this justification. Reviewer FYyF raised concerns about scaling beyond 10 documents and accuracy gaps - the authors pointed to experiments showing stable performance up to 50 documents. Reviewer 2fh2 identified technical issues with figures and baseline clarity, which were addressed through revisions and additional equations. Reviewer h4ZY questioned experimental setup details and requested QA evaluations - the authors added document statistics and QA results in response. The discussion was particularly active around the order-invariance assumption, with multiple back-and-forth exchanges between Reviewer uYJU and the authors debating the theoretical and empirical support for this key claim. While not all reviewers were fully convinced (particularly regarding order-invariance), the authors' thorough responses and additional experiments addressed most major concerns, leading one reviewer to increase their score.\"}", "{\"summary\": \"The paper addresses the challenge of efficiently incorporating multiple documents into the generation process of LLMs. Traditionally, concatenating documents leads to significant computational costs that scale with the number and size of document chunks. State Space Models (SSMs) offer a faster approach by encoding documents into fixed-size state vectors, but composing multiple states is not straightforward. The authors introduce PICASO, a method for permutation-invariant composition of document states using SSMs. PICASO efficiently retrieves and combines pre-computed states to condition the generation of high-quality outputs without the need for online document processing. 
It enforces invariance to the order of document presentation, which is desirable when the temporal ordering is uninformative.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"1. The issue is of significant importance. Generating content from multiple processed documents without re-preprocessing is a crucial requirement for both long context application and agentic memory systems.\\n2. The State Space Model constitutes a suitable architectural framework for the targeted problem, thereby rendering the study meaningful.\\n3. The introduced methodologies are technically robust, and they are mostly presented clearly (with a few exceptions).\\n4. The experiments are meticulously designed. Both performance and time complexity are analyzed. I particularly appreciate the experiments depicted in Figure 2, which elucidate the improvement brought about by the proposed method and provide mechanistic insights for the field.\", \"weaknesses\": \"I will write the weaknesses in the order of the willingness of raising my score after addressing these weaknesses.\\n\\n1. Figure 3,4 (left) has an error. There are seven legends and only six curves. The main approach PICASO-R is missing from the figure. Though I infer from the context of different places (e.g., PIConcat can not run for 10 chunks) that it is a mistake that PIConcat-R's curve should actually be PICASO-R? But such a mistake seriously lowers the quality of the paper.\\n\\n2. The differences between BPTC and BP2C are very ambiguous (And I actually do not know the difference). Equations are prefered. And stop-gradient notation can be used if you need to clarify the difference.\\n\\n3. Clarify usefulness outside of Mamba-1. The PICASO-S relies on the commutativity of A_i, and the PICASO-R relies on the invertible property of A_i. Despite that Mamba-1 clearly holds these properties. It is not very clear whether other SSMs (e.g., RWKV, RetNet, Mamba-2, etc.) hold these properties. 
This is not a very important limitation of this paper, since work can also be done for other architectures, but it would be interesting to see the authors' analysis in this paper.\\n\\n4. It seems that CASO is also the original proposal of the paper. Since I am not very clear about these baselines (But I do know RAG, Mamba, etc., clearly). I think it would be beneficial to mention methods similar to it more.\", \"questions\": \"See weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"This reviewer is not reading paper carefully.\", \"comment\": \"Putting other things aside, it's really interesting to see you asking about KV cache in an SSM paper (even if the state is analogous to KV cache). If you haven't read the paper, please lower your confidence.\"}", "{\"summary\": \"The paper presents PICASO, a method to enhance generation capabilities in Large Language Models by composing document states in a permutation-invariant manner using State Space Models (SSMs). This approach addresses the inefficiency and high computational cost of concatenating multiple document tokens by pre-processing document chunks into states. PICASO leverages a permutation-invariant composition strategy that enables LLMs to utilize multiple documents' information efficiently, achieving computational speed-ups and scalability, especially in retrieval-augmented tasks. Evaluation on datasets such as WikiText and MSMARCO shows that PICASO can match or closely approximate the performance of document concatenation with significantly reduced processing time.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. PICASO offers an impressive 5.4x speed-up in processing time over traditional concatenation, a practical advantage for real-world applications that require high-speed document retrieval and composition.\\n2. 
The model effectively maintains performance without relying on the order of documents, which is crucial when temporal or logical ordering is irrelevant. This represents a thoughtful design that accommodates various real-world scenarios.\\n3. Extensive experiments demonstrate PICASO\\u2019s effectiveness, comparing multiple composition methods and showing zero-shot and fine-tuned settings. The authors provide a comprehensive view of PICASO\\u2019s strengths and limitations across several benchmarks.\", \"weaknesses\": \"1. While PICASO achieves near-concatenation performance in zero-shot scenarios, there is still a minor but noticeable gap in accuracy. This limitation could impact its suitability for applications where slight accuracy improvements are critical.\\n2. Although PICASO performs well with up to 10 document chunks, it is unclear how it scales with larger document sets. This potential bottleneck is worth further investigation.\", \"questions\": \"N/A\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"1\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the detailed responses. They have addressed many confusion and questions I had. I will keep my current score, since there are still some concerns. For example, supporting the underlying assumption of order invariance in retrieval, which is the basis to propose the method (I do understand the results in section 6 partially shows that the proposed method does not degrade performance much, but that is more of some post-hoc reasoning), and some not-well experimental execution and evaluations. However, I really like the research idea and technical contribution of state compression for retrieval efficiency. 
So while I am on the fence for the rating, I would be happy to see the paper improved to demonstrate a strong contribution to our field.\"}", "{\"comment\": \"Thanks for the response, I have updated my score accordingly\"}", "{\"comment\": \"We thank the reviewer for their detailed feedback and constructive suggestions. We address individual concerns below:\\n\\n> This is similar to in-context retrieval [1] (seems a missing reference). However, there are no real RAG applications studied in the paper. 
Appendix B.4 presents some additional results on \\u201clanguage modeling\\u201d (which I don\\u2019t think is a proper description), but not so much details are provided such as database and retrieval.\\n\\n> The evaluation metric mostly focuses on log-perplexity or loss, while it is also desirable to demonstrate generation capabilities of the proposed method for real tasks.\\n\\nWhile RAG is indeed our main motivation, the paper focuses on designing a method to quickly retrieve and integrate information in the state of the model. We found that perplexity-based measurements provide a sufficiently robust metric of success in this respect. However, we agree that exploring this aspect of evaluation is also important, and have added Appendix B.6. evaluating accuracy for the OpenbookQA task under the retrieval-augmented setting. In particular, we show that the same trends hold as compared to perplexity metrics -- that augmented generation benefits performance, while PICASO provides similarly boosted performance with around $8\\\\times$ reduced computational costs.\\n\\nWe also thank the reviewer for the reference [1] which we have added, and have changed \\\"language modeling\\\" to \\\"LLM evaluation tasks\\\".\\n\\n> Furthermore, there could be certain baselines missing, including similar approaches proposed in the previous literature, such as the closely related work described in Section 2.\\n\\nTo the best of our knowledge, our work is the first SSM-specific method developed for RAG-like settings. We believe we have included all applicable baselines for composing documents with SSMs. However, we realized that we did not explicitly label their sources in Sec 6.2, and have corrected that in the revision. Thank you for pointing this out, and we are also happy to include additional appropriate baselines that the reviewer recommends. 
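To make the notion of composing documents with SSMs more concrete, here is a minimal numpy sketch (purely illustrative: the single diagonal-SSM layer, the dimensions, and the random values below are hypothetical stand-ins, not our actual implementation):

```python
import numpy as np

# For one linear SSM layer with diagonal transitions, scanning an entire
# document D from state x collapses to an affine map x -> A_D * x + b_D,
# where A_D (stored as its diagonal) and b_D can be pre-computed offline.
rng = np.random.default_rng(0)
d = 8
A1, b1 = rng.uniform(0.5, 1.0, d), rng.standard_normal(d)  # document 1
A2, b2 = rng.uniform(0.5, 1.0, d), rng.standard_normal(d)  # document 2
x0 = rng.standard_normal(d)  # state before reading any document

def scan_doc(x, A, b):
    """Effect of processing one pre-computed document from state x."""
    return A * x + b

# Exact concatenations in both orders (doc1 then doc2, and vice versa).
x_12 = scan_doc(scan_doc(x0, A1, b1), A2, b2)
x_21 = scan_doc(scan_doc(x0, A2, b2), A1, b1)

# The two ordered states differ, but averaging them yields a composed
# state that is invariant to document ordering by construction.
x_pi = 0.5 * (x_12 + x_21)
```

Note that because the transitions are diagonal, the overall transition A1 * A2 is already order-independent (diagonal matrices commute); only the accumulated offsets depend on the ordering, which is what the averaging removes.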
\\n\\n> Other ablations could include the effect of retrieval accuracy, the impact of document or chunk lengths to be composed together, etc.\\n\\nTo address the reviewer's concerns, we have added an ablation study on retriever choice in Appendix B.5. Figure 6 of the Appendix also demonstrates what happens when we scale beyond the training context length of the model. We have also provided a histogram of document chunk statistics in Appendix B.7. to provide greater clarity on the distribution of chunk lengths considered.\\n\\n> Not enough background information is provided for RAG applications with SSMs, as most of the RAG applications are built with Transformers. This is also related to the lack of experimental studies mentioned above, where more comprehensive comparisons on other tasks/methods could be beneficial.\\n\\nTo the best of our knowledge, our work is the first to introduce an approach towards RAG that is specific to SSMs. We have included [1] in our discussion on related works and the baseline method (concatenation) in the revision to better frame our work in the context of transformer-based applications. \\n\\n> The method is based on some strong assumptions such as the order of the retrieved document chunks does not matter, so that an average state can be used for inference. There is no clear evidence provided.\\n\\nIn our experiments in Sec. 6, we compare our permutation invariant method with ordering the documents by relevance (i.e., most relevant documents are closer to the answer), which we found empirically to perform best. While, to the reviewer's point, such ordering is better than random ordering, we still observe that incorporating permutation invariance via PIConcat / PICASO significantly outperforms their ordered counterparts Concat / Soup respectively.\"}", "{\"comment\": \"Since the rebuttal deadline is quickly approaching, we wish to follow up on our response to your review. 
We would be very grateful if you could share any further insights or suggestions you may have, otherwise if you feel we have satisfactorily addressed your concerns, we kindly ask if you could lend stronger support for our paper's acceptance. Thank you for your time and consideration!\"}", "{\"comment\": \"We thank the reviewer for their detailed feedback and constructive suggestions, which we have incorporated in our revision.\\n\\n> Figure 3,4 (left) has an error. There are seven legends and only six curves. The main approach PICASO-R is missing from the figure. Though I infer from the context of different places (e.g., PIConcat can not run for 10 chunks) that it is a mistake that PIConcat-R's curve should actually be PICASO-R? But such a mistake seriously lowers the quality of the paper.\\n\\nThere is no mistake in the figure, but the curves of PICASO-R and PICASO-S actually overlap and are hence hard to see. We have made this explicit in the revised caption, and we thank the reviewer for noticing the readability issue.\\n\\n> The differences between BPTC and BP2C are very ambiguous (And I actually do not know the difference). Equations are prefered. And stop-gradient notation can be used if you need to clarify the difference.\\n\\nWe thank the reviewer for their suggestion and notation advice, and have updated our revision with the equations in Sec. 5 to provide better clarity.\\n\\nWe copy the definitions below, where we replace $\\\\boldsymbol{u}_i$ with $v_i$ due to issues with rendering in Openreview\\n\\n$$\\\\mathcal L_{BPTC}(\\\\theta) = \\\\sum_{(v_i,u_i,S_i) \\\\in \\\\mathcal{D}}L_{\\\\rm CE}(f_\\\\theta(v_i, x^{PICASO}(S_i)), u_i)$$\\n\\nand\\n\\n$$\\\\mathcal L_{BP2C}(\\\\theta) = \\\\sum_{(v_i,u_i,S_i) \\\\in \\\\mathcal D} L_{\\\\rm CE}(f_\\\\theta(v_i, \\\\operatorname{sg}\\\\left[x^{\\\\rm PICASO}(S_i)\\\\right]), u_i)$$, where $\\\\operatorname{sg}$ is the stopgradient operator. \\n\\n\\n\\n> Clarify usefulness outside of Mamba-1. 
The PICASO-S relies on the commutativity of A_i, and the PICASO-R relies on the invertible property of A_i. Despite that Mamba-1 clearly holds these properties. It is not very clear whether other SSMs (e.g., RWKV, RetNet, Mamba-2, etc.) hold these properties. This is not a very important limitation of this paper, since work can also be done for other architectures, but it would be interesting to see the authors' analysis in this paper.\\n\\nOur method works for both Mamba-1 and Mamba-2 since as rightfully pointed out by the reviewer, the $A_i$ matrices are diagonal hence commutative/invertible. Our experiments are performed using Mamba-2 2.7B. These properties are often satisfied by several recurrent mechanisms in order to make them computationally efficient, including RWKV (via element-wise scaling) and RetNet (where $A$ is parameterized as a diagonalized matrix).\\n\\n> 4. It seems that CASO is also the original proposal of the paper. Since I am not very clear about these baselines (But I do know RAG, Mamba, etc., clearly). I think it would be beneficial to mention methods similar to it more.\\n\\nThank you for the suggestion, we added links to relevant literature when introducing our baselines in Sec 6.2. which we hope addresses the reviewer's concern.\"}", "{\"comment\": \"We thank the reviewer for their feedback, and address points of concern below.\\n\\n> While PICASO achieves near-concatenation performance in zero-shot scenarios, there is still a minor but noticeable gap in accuracy. This limitation could impact its suitability for applications where slight accuracy improvements are critical.\\n\\nOur method is indeed targeted towards deployment of LLMs, where slight performance trade-offs are well justified by the substantial speed-up obtained. However, we note that trade-off can be easily avoided via our proposed fine-tuning method, obtaining the \\\"best-of-both-worlds\\\". 
Indeed, in situations where fine-tuning is not an option and inference-time is a non-issue, concatenation remains the paragon.\\n\\n> Although PICASO performs well with up to 10 document chunks, it is unclear how it scales with larger document sets. This potential bottleneck is worth further investigation.\\n\\nThis is a good point, and we have addressed this in Figure 6 of our main paper. While concatenation stops working as the number of documents increases, due to the model becoming unstable once context length is exceeded, the performance of PICASO remains relatively stable even when composing up to 50 document chunks.\"}", "{\"comment\": \"> It seems the gain of the proposed approach mostly comes from the inference efficiency side, as the log-perplexity (demonstrated in Table 1) is not improved over the baseline of document concatenation. However, the complete pipeline includes preprocessing document chunks for their SSM states (Figure 7 also shows this), and it also includes an additional retrieval model with a separate set of embeddings that need to be stored in the database. The data used for experiments are small. The considered number of chunks are smaller than 10. It is unclear how the proposed method performs when brought to larger scales requiring bigger storage and more computation in real applications.\\n\\nIn Figure 6 of the Appendix, we evaluate the composition of up to 50 documents. We do not include this in the main paper because even for concatenation, results not only saturate after 10 document chunks, but performance actually decreases due to exceeding context sizes seen during training. \\n\\nWhen using PICASO, pre-processing document chunks is a one-time cost that can be amortized over multiple queries. This is unlike other methods which would need to reprocess the retrieved documents at each query, resulting in a significantly higher latency than PICASO (see Figs 1,3,4). 
The retrieval cost (based on an external model) is constant and the same for all methods considered. While we do agree that there are additional areas of the RAG pipeline, such as retrieval and data compression, that can be further optimized, these are beyond the scope of our work focusing on model inference for SSMs.\\n\\n> 1. In line 084-085, \\u201crelative ordering is often uninformative\\u201d: this is a very strong assumption. Any evidence? Especially in applications where retrieval meets Mamba. In fact, many previous studies show the position of information matters [2].\\n\\nWe present strong empirical evidence in the situations we consider, since retrieved documents are independent of one another (see response to above weakness). [2] finds that positional bias indicates \\\"that current language models do not robustly make use of information in long input contexts\\\", which precisely *supports* our claim. The position of relevant information in the context affects the model outputs, even though this positional bias is uninformative / noise for the task at hand. \\n\\n> For Mamba models of multiple layers, what hidden states and which layer parameters did you use such as the A and B matrices? Is there any difference depending on what you use?\\n\\nWe compose the hidden states for all layers. Composing only a subset of layers would require picking certain \\\"default\\\" states for the others (for example chosen randomly from the documents), which goes against the goal of permutation-invariance composition. Furthermore, since composition with our method is fast enough to have almost negligible cost, there is no need to limit composition only to specific chosen layers. \\n\\n> In Figure 2, can you explain more on how to read the figure? 
In particular, how does the figure show that the CASO states are closer to one another?\\n\\nWhile Proposition 4 concerns the Euclidean distance between CASO states, Figure 2 instead visualizes CASO states within the model's loss landscape. Note that both left and right contour plots are of the same scale (see bar on the right of each figure), qualitatively showing that CASO states can be more meaningfully interpolated to yield lower losses than interpolating states of individual chunks. \\n\\n> For experiments on val/test sets, what are the chunk databases? Do they include chunks in the training data, or just the chunks of corresponding val/test sets?\\n\\nIn all experiments, we test our method on a domain that has not been seen during pretraining.\\u00a0We separate the test datasets into a pool of chunks that can be used for retrieval, and a pool of chunks (queries and continuations) that will be used to evaluate perplexity.\\n\\n> Since you use a different retrieval model, I assume you have to store the chunk vectors based on the retrieval model as well, on top of the SSM hidden states. Is that right? What is the effect on retrieval accuracy (can be ablated by using different retrievers) on the proposed approach? \\n\\nYes we store both embedding vectors and pre-processed SSM states in the database. We refer the reviewer to Appendix B.5. in our revision for the requested ablation study.\\n\\n> And moreover, can you directly use the SSM states for context matching for retrieval such as in non-parametric language modeling?\\n\\nThat is a great question, and we have explored this in our initial experiments. We found that naive retrieval of SSM states based on cosine similarity (or other distances) performs significantly worse than a sentence embedding model. We believe using a learned projection or distance function may improve. 
However, to ensure fair comparison between all methods, we decided to use the same external retrieval mechanism for all.\"}", "{\"comment\": [\"Given that it is nearing the end of the discussion period, we hope that the reviewers can let us know if we have adequately addressed their concerns and questions. We present an overall summary of our rebuttal below:\", \"We are happy to hear that our work is well-received by the reviewers with respect to the practicality and \\\"significant importance\\\" of the problem (uYJU, FYyF, 2fh2), the \\\"thoughtful design\\\" and technical soundness/robustness of our approach (uYJU, FYyF, 2fh2), and our \\\"meticulously designed\\\" and \\\"extensive experiments\\\" (FYyF, 2fh2) showcasing the inference efficiency and \\\"impressive 5.4x\\\" speed-up obtained by our method (all reviewers).\", \"We are very grateful to the reviewers for their valuable feedback, and have updated our revision to incorporate their suggestions for further improvement our paper. We summarize our overall major changes below, and detail smaller changes based on each reviewer's recommendations in the replies to individual comments.\", \"Added experiments based on various retriever choices in Appendix B.5.\", \"Added experiments on QA tasks under the retrieval setting in Appendix B.6.\", \"Added histogram plots of document statistics in Appendix B.7.\", \"Updated Section 5 with equations detailing our fine-tuning objectives\"]}", "{\"comment\": \"> Log-perplexity (loss) is used for language modeling performance. Why not just report perplexity? I think it is more common for language modeling and easier for illustration, such as in Table 1.\\n\\nWe note that both log-perplexity and perplexity are equally valid and reported in existing literature. 
Our current choice of log-perplexity is simply due to personal preference, but we are willing to change it if necessary.\\n\\n> For Table 1, what is the number of retrieved document chunks for the results?\\n\\nWe use all provided passages associated with each example in the MSMARCO dataset. This specific number varies with the example (often between 5 and 10). This set of results also demonstrates what happens when using a weak retriever, since most of these provided passages are irrelevant distractors.\\n\\n> Also for Table 1, it seems training on different domains of data does not help the perplexity. In many real applications, in-domain training data is hard to collect. This indicates that it is better to just apply the method at inference mainly for speedup? Model quality would not be improved here without training in-domain.\\n\\nOur method indeed also works well in the zero-shot setting. Table 1 is meant to show that training can only help, rather than deteriorate, performance from document decomposition. However, we disagree with the premise that \\\"In many real applications, in-domain training data is hard to collect\\\". Databases of documents or text are plentiful, and instruction-tuning datasets (which can be combined with retrieval for our training method) alone already provide strong coverage of possible LLM use-cases.\\n\\n> Line 492-493, Appendix B.4 and \\u201clanguage modeling\\u201d: these are not language modeling tasks, which are what you did in the main experiments and measure with perplexity. While I can understand what you mean, it is better to change the wording for clarity. In addition, can you provide more details on how these tasks are set up with retrieval and their evaluation?\\n\\nWe have changed the wording, thank you for the suggestion. As described in B.4., these tasks evaluate the performance of models without retrieval. 
The goal of this section is to show that performance on existing tasks does not deteriorate after fine-tuning with our proposed methods.\\n\\n> The proposed method can \\u201ccomposition of information contained in up to 10 documents in a manner that is order-invariant.\\u201d Do you see this as a limitation? What happens after 10 documents? And is there any effect of the length of each document in the efficacy of composition? An ablation study would be ideal.\\n\\nWe refer to Figure 6 of the Appendix for scaling up to 50 documents. Our proposed method actually greatly outperforms concatenation under such situations. While concatenation stops working when exceeding training context lengths, the performance of PICASO (and other state composition methods) remains relatively stable.\\n\\n> In the abstract or introduction, how is the name PICASO composed for what it stands for?\\n\\nAdmittedly CASO/PICASO is not a faithful acronym of our method (L18, abstract); it is simply a stylistic preference over the alternative (CSSM/PICSSM).\\n\\n> In proposition 3, explain the notation of `Id` and `[i]_n`?\\n\\nWe have updated proposition 3 to clarify these notations, thank you.\\n\\n> Line 292-293, \\u201cWe perform simple averaging to combine these tokens from different documents\\u201d: do you mean average states at the same token positions across different documents? 
So we still result in 4 states at the last 4 token positions after averaging.\\n\\nYes, the resulting (4) token values are simply their average across that obtained from different documents.\\n \\n> Line 304-305, \\u201con average improves only 8.5% compared to the baseline\\u201d: specify what is the evaluation metric or task briefly for better understanding since the experiments are not talked about yet.\\n\\nWe have updated this in our revision, thank you for pointing this out.\\n\\n> Line 350-352, in the training objective, the equation computes loss on the single token prediction after the retrieved documents. For sequence predictions, is the intention to use the same formula but with different\\u00a0ui,ui\\u00a0and the same\\u00a0Si\\u00a0at different sequential positions?\\n\\nThe notation reflects the \\\"teacher forcing\\\" training objective, where given an input sequence $u_0 u_1 u_2$ , the model is trained to predict $u_1 | u_0$ and $u_2 | u_1 u_0$ as independent training objectives.\"}", "{\"comment\": \"Given the approaching rebuttal deadline, we wish to follow up on our initial response. We hope that our new experiments, paper revision, and clarifications have satisfactorily addressed your initial concerns and questions, and if so, we kindly ask if you could consider increasing your score to support our paper's acceptance. Thank you for your time and consideration, we are very grateful for your insightful suggestions and valuable feedback!\"}", "{\"comment\": \"Thanks for your response. I take a close look at the paper and find that there is still one concern that BP2C still lags behind BPTC by about 0.01 in loss. But I think my previous concern has been addressed. I think the direction of this paper is worth exploring in the future need of inference scaling. 
Therefore I have decided to encourage the authors by raising my score.\"}", "{\"comment\": \"We thank the reviewer for their constructive feedback.\\n\\n> The writing can be improved, in particular in the experiment section. It is not clear what the size of the chunk, what is the inference token size (how many in context and how many for inference), and how the evaluation is setup. For wikitext, it is not clear what is used as the query and what is used as the candidate pool for performing retrieval.\\n\\nTo address the reviewer's concern, we have added statistics on the document chunks used in our WikiText experiments in Appendix B.7, which should help clarify these implementation details regarding sizes of retrieval and query chunks. Our evaluation setup is detailed in Sec 6.1. -- the goal is to predict the second half of a document from its first, by leveraging retrieved knowledge from other document chunks. \\n\\n> The evaluation feels incomplete. It seems that for both WikiText and msmarco are evaluated based on the test set loss. Since the proposed method fits in the RAG setting, it would be great to show the actual QA performance instead of the cross entropy as the result.\\n\\nWe thank the reviewer for their suggestion. To address their concern regarding more evaluation methods, we have added section B.6. in the Appendix where we evaluate accuracy on a multiple-choice task. We observe the same trends (benefits of augmented generation, and strong performance of PICASO) as when using loss/perplexity as our evaluation metric.\\n\\n> It is not clear what the speed baseline setup is. 
Does it use KV cache?\\n> Since this method retrieves pre-computed states, one can compare with a similar transformer baseline, where the kv caches for documents are pre-computed and then can be directly loaded into the model for generation.\\n\\nFig 1 measures the necessary pre-processing time, which includes the creation of the KV cache for a transformer or the creation/composition of the state for SSM, along with the inference time starting from the processed cache/state. \\n\\nWe highlight that pre-computation of the KV-cache does not work here, since KV caches cannot be composed (concatenating them is not valid). On the other hand, we can pre-process these states for PICASO, since our method provides a way to compose them.\"}", "{\"comment\": \"Thank you for your support, and for updating your score. We sincerely appreciate your time and valuable feedback in reviewing our work!\"}", "{\"comment\": \"Thank you for your response, we are glad to hear that our response has adequately addressed many of your questions.\\n\\n> Supporting the underlying assumption of order invariance in retrieval, which is the basis to propose the method (I do understand the results in section 6 partially shows that the proposed method does not degrade performance much, but that is more of some post-hoc reasoning)\\n\\nWe disagree that our empirical results constitute post-hoc reasoning. Removing the position bias among documents, which are conditionally independent of one another given the query *by construction*, is the primary motivation for our method. The fact that this is reflected in our experiments is simply an empirical validation of this key observation.\\n\\n> some not-well experimental execution and evaluations\\n\\nWe believe that our updated paper draft and experiments have resolved the initial concerns previously mentioned by the reviewer. 
If there are any remaining or further points of weakness, we kindly ask if you can detail them so we may address them.\\n\\nWe are very grateful for the reviewer's insightful and detailed overall feedback, and their appreciation for our \\\"research idea\\\" and \\\"technical contribution of state compression for retrieval efficiency\\\". We hope that the reviewer can reconsider their rating after our additional clarifications to support our paper's acceptance. Thank you!\"}", "{\"summary\": \"The paper focuses on the inference efficiency with state space models (SSMs) when conditioning on multiple document chunks in retrieval augmentation setups. It proposes a method to compose hidden states of different contexts into a single state for SSM generation, which is similar to compressing KV cache in Transformers (but for SSMs inference is only based on a single hidden state for one conditioning context). In particular, for applications that require retrieval, multiple document chunks could be provided as additional contexts, limiting the inference efficiency. Instead of the standard way of concatenating multiple document chunks in the context and running SSMs, the paper proposes to 1) preprocess individual chunks to store their SSM states; 2) retrieve the document chunks (with an additional retrieval model) and their pre-computed states based on a query; 3) compute an aggregated state from the different chunk states (which is not by simple averaging); and 4) continue generation based on the single aggregated state representing multiple document chunks.\\n\\nThe core contribution lies in step 3), where the authors derive an equivalent form (based on a single layer of SSM, so in general it is a heuristic) of state composition based on SSM computations to that computed by naively concatenating multiple document chunks. 
Since this composition is reliant on a particular document chunk order, the paper further proposes to aggregate the composed states from different orders, assuming the order does not matter for the downstream generation. Efficient algorithms for the permutation-invariant aggregation are derived. Fine-tuning SSM by incorporating the aggregated states is also applied to further enhance the inference quality.\\n\\nExperiments are conducted on language modeling tasks with the Mamba-2 2.7B model on two datasets, WikiText-V2 and MSMARCO. Results show that the proposed approach can achieve comparable perplexity to the baseline of directly concatenating multiple document chunks, while enjoying on average 5.4X speedup.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper presentation is mostly clear. The motivation, background, and techniques are well explained, also in good connection with relevant research.\", \"The paper studies an interesting problem, which is to speed up inference in the context of retrieval augmentation, specifically focusing on SSM models with their unique state dependencies. SSMs do not need KV cache like Transformers thus reducing memory and computation requirements. The paper further improves inference efficiency by delegating the document state computation to the preprocessing time, and only computing state compositions for inference.\", \"The technicality is sound (I did not check the math in every detail but overall they seem correct) and interesting. Algorithms of efficient computation of document state composition are derived, as well as for fine-tuning SSMs with the state composition that utilizes SSM model parameters.\", \"Experimental studies support the expected results of maintaining language modeling performance, measured in log-perplexity, while speeding up the generation from the composite SSM states from multiple document chunks.\"], \"weaknesses\": \"1. 
While the problem and methodology is interesting, I find the experimental studies are somewhat not adequate.\\n- The authors conduct experiments with simplified settings of retrieving from WikiText and MSMARCO document chunks to improve next token prediction perplexity. This is similar to in-context retrieval [1] (seems a missing reference). However, there are no real RAG applications studied in the paper. Appendix B.4 presents some additional results on \\u201clanguage modeling\\u201d (which I don\\u2019t think is a proper description), but not so much details are provided such as database and retrieval. \\n- The evaluation metric mostly focuses on log-perplexity or loss, while it is also desirable to demonstrate generation capabilities of the proposed method for real tasks.\\n- Furthermore, there could be certain baselines missing, including similar approaches proposed in the previous literature, such as the closely related work described in Section 2.\\n- Comprehensive ablation studies are missing, making it difficult to understand the experimental details such as what parameters in multi-layer SSMs are used and how that matters. Other ablations could include the effect of retrieval accuracy, the impact of document or chunk lengths to be composed together, etc.\\n\\n2. Not enough background information is provided for RAG applications with SSMs, as most of the RAG applications are built with Transformers. This is also related to the lack of experimental studies mentioned above, where more comprehensive comparisons on other tasks/methods could be beneficial.\\n\\n3. The method is based on some strong assumptions such as the order of the retrieved document chunks does not matter, so that an average state can be used for inference. There is no clear evidence provided.\\n\\n4. Some of the descriptions are unclear. For example, some math notations are undefined, some details in Figure illustrations are missing. See more below in my questions.\\n\\n5. 
It seems the gain of the proposed approach mostly comes from the inference efficiency side, as the log-perplexity (demonstrated in Table 1) is not improved over the baseline of document concatenation. However, the complete pipeline includes preprocessing document chunks for their SSM states (Figure 7 also shows this), and it also includes an additional retrieval model with a separate set of embeddings that need to be stored in the database. The data used for experiments are small. The considered number of chunks are smaller than 10. It is unclear how the proposed method performs when brought to larger scales requiring bigger storage and more computation in real applications.\\n\\n*[1] Ori Ram, Yoav Levine, Itay Dalmedigos, Dor Muhlgay, Amnon Shashua, Kevin Leyton-Brown, and Yoav Shoham. 2023. In-Context Retrieval-Augmented Language Models. Transactions of the Association for Computational Linguistics, 11:1316\\u20131331.*\\n\\nOverall I think the paper presents an interesting method and study on Mamba states composition, but the empirical justifications are somewhat flawed. I am willing to increase my score after getting more insights from the authors regarding my concerns.\", \"questions\": \"1. In line 084-085, \\u201crelative ordering is often uninformative\\u201d: this is a very strong assumption. Any evidence? Especially in applications where retrieval meets Mamba. In fact, many previous studies show the position of information matters [2].\\n\\n*[2] Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, and Percy Liang. 2024. Lost in the Middle: How Language Models Use Long Contexts. Transactions of the Association for Computational Linguistics, 12:157\\u2013173.*\\n\\n2. For Mamba models of multiple layers, what hidden states and which layer parameters did you use such as the A and B matrices? Is there any difference depending on what you use?\\n\\n3. In Figure 2, can you explain more on how to read the figure? 
In particular, how does the figure show that the CASO states are closer to one another? \\n\\n4. For experiments on val/test sets, what are the chunk databases? Do they include chunks in the training data, or just the chunks of corresponding val/test sets?\\n\\n5. Since you use a different retrieval model, I assume you have to store the chunk vectors based on the retrieval model as well, on top of the SSM hidden states. Is that right? What is the effect on retrieval accuracy (can be ablated by using different retrievers) on the proposed approach? And moreover, can you directly use the SSM states for context matching for retrieval such as in non-parametric language modeling?\\n\\n6. Log-perplexity (loss) is used for language modeling performance. Why not just report perplexity? I think it is more common for language modeling and easier for illustration, such as in Table 1.\\n\\n7. For Table 1, what is the number of retrieved document chunks for the results?\\n\\n8. Also for Table 1, it seems training on different domains of data does not help the perplexity. In many real applications, in-domain training data is hard to collect. This indicates that it is better to just apply the method at inference mainly for speedup? Model quality would not be improved here without training in-domain.\\n\\n9. Line 492-493, Appendix B.4 and \\u201clanguage modeling\\u201d: these are not language modeling tasks, which are what you did in the main experiments and measure with perplexity. While I can understand what you mean, it is better to change the wording for clarity. In addition, can you provide more details on how these tasks are set up with retrieval and their evaluation?\\n\\n10. The proposed method can \\u201ccomposition of information contained in up to 10 documents in a manner that is order-invariant.\\u201d Do you see this as a limitation? What happens after 10 documents? And is there any effect of the length of each document in the efficacy of composition? 
An ablation study would be ideal.\", \"some_minor_comments_on_typos_and_suggestions\": \"11. In the abstract or introduction, how is the name PICASO composed for what it stands for?\\n\\n12. In proposition 3, explain the notation of `Id` and `[i]_n`?\\n\\n13. Line 292-293, \\u201cWe perform simple averaging to combine these tokens from different documents\\u201d: do you mean average states at the same token positions across different documents? So we still result in 4 states at the last 4 token positions after averaging.\\n\\n14. Line 304-305, \\u201con average improves only 8.5% compared to the baseline\\u201d: specify what is the evaluation metric or task briefly for better understanding since the experiments are not talked about yet.\\n\\n15. Line 350-352, in the training objective, the equation computes loss on the single token prediction after the retrieved documents. For sequence predictions, is the intention to use the same formula but with different $\\\\mathbf{u}_i, u_i$ and the same $S_i$ at different sequential positions?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the quick response and clarification!\\n\\n> First, it does not seem to support the claim of order invariance in retrieval. It found that information at certain places matter more than other places.\\n\\nIt is indeed the fact that \\\"information at certain places matter more than other places\\\" that is undesirable under the setting in which we retrieve multiple *conditionally independent* documents. Generation should depend on the informativeness of each document, rather than the (arbitrary) position at which it was concatenated in. \\n\\n> Second, the study was on Transformer models. 
For Mamba based models, we need evidence to support whatever claim we want to assume.\\n\\nOur reply was originally made in response to citation [2], which is indeed on Transformer models. For Mamba-based models, this property (that model output is order-dependent) actually holds *by construction*, since SSMs are built on the assumption of temporal stationarity (see Eqn 1). Consequently, permutation-invariance needs to be enforced explicitly, hence motivating our SSM-specific approach.\\n\\nWe sincerely thank the reviewer for their continued engagement, please let us know if you have any remaining concerns.\"}", "{\"comment\": \"Thanks. I am confused by \\\"Removing the position bias among documents, ...\\\".. In other words, I find the following claim\\n\\n> ... [2] finds that positional bias indicates \\\"that current language models do not robustly make use of information in long input contexts\\\", which precisely supports our claim ...\\n\\nconfusing. First, it does not seem to support the claim of order invariance in retrieval. It found that information at certain places matter more than other places. Second, the study was on Transformer models. For Mamba based models, we need evidence to support whatever claim we want to assume.\"}", "{\"comment\": \"I still find the claims not convincing. There might be some misunderstanding. \\\"Generation should depend on the informativeness of each document\\\" does not mean it is actually the case. We need empirical evidence since that is the fundamental assumption the proposed method is relying on: the generation performance does not depend on the orders of the retrieved documents, and independent documents can work as well. This is not the case for Mamba-based models as the authors recognized (actually Mamba depends on the orders more), and there is no clear evidence to show this is a good approximation to how the model actually works by using the contextual information.\"}" ] }
88Qm4fGWzX
Event-Customized Image Generation
[ "Zhen Wang", "Yilei JIANG", "Dong Zheng", "Jun Xiao", "Long Chen" ]
Customized Image Generation, generating customized images with user-specified concepts, has raised significant attention due to its creativity and novelty. With impressive progress achieved in subject customization, some pioneer works further explored the customization of action and interaction beyond entity (i.e., human, animal, and object) appearance. However, these approaches only focus on basic actions and interactions between two entities, and their effects are limited by insufficient ''exactly same'' reference images. To extend customized image generation to more complex scenes for general real-world applications, we propose a new task: event-customized image generation. Given a single reference image, we define the ''event'' as all specific actions, poses, relations, or interactions between different entities in the scene. This task aims at accurately capturing the complex event and generating customized images with various target entities. To solve this task, we proposed a novel training-free event customization method: FreeEvent. Specifically, FreeEvent introduces two extra paths alongside the general diffusion denoising process: 1) Entity switching path: it applies cross-attention guidance and regulation for target entity generation. 2) Event transferring path: it injects the spatial feature and self-attention maps from the reference image to the target image for event generation. To further facilitate this new task, we collected two evaluation benchmarks: SWiG-Event and Real-Event. Extensive experiments and ablations have demonstrated the effectiveness of FreeEvent.
[ "Customized Image Generation", "Diffusion Model" ]
Reject
https://openreview.net/pdf?id=88Qm4fGWzX
https://openreview.net/forum?id=88Qm4fGWzX
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ygQ5seiu60", "xdS55sl0ph", "vUTAWySLb8", "shSWNB2myz", "qhOXhyIPER", "q9KsuV0AT2", "oRV4m776XH", "miRuzUf3mi", "mazqPrdh9m", "lQ0pQvtOvS", "jtoQnwQ2u5", "jZkeb2eqf3", "jRN9rPJaNA", "bqjKtIsa9Y", "bef69pZzQQ", "aJWjnOxh5x", "ZPgnNNUQBN", "XuLxwGtdAW", "XYiAhGUnbL", "KyHOgyGkeQ", "Kw90AhntE5", "HUBVbqP0tL", "B0TS5FzVeD", "9A2jITsYgk", "8me9qycGgF", "2lOtf0EDIL" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review" ], "note_created": [ 1732799251167, 1732378191965, 1733041198265, 1733041273494, 1732642996621, 1732379888145, 1730530112850, 1732378558540, 1730644270417, 1732627800147, 1733041252679, 1734605994597, 1732378808645, 1732799314380, 1732379917212, 1732378620297, 1737523572368, 1730476521984, 1732379816106, 1732379946109, 1733028646408, 1733040883582, 1732379749576, 1732378704911, 1732379719244, 1730629059049 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3372/Authors" ], [ "ICLR.cc/2025/Conference/Submission3372/Authors" ], [ "ICLR.cc/2025/Conference/Submission3372/Authors" ], [ "ICLR.cc/2025/Conference/Submission3372/Authors" ], [ "ICLR.cc/2025/Conference/Submission3372/Authors" ], [ "ICLR.cc/2025/Conference/Submission3372/Authors" ], [ "ICLR.cc/2025/Conference/Submission3372/Reviewer_Pbo8" ], [ "ICLR.cc/2025/Conference/Submission3372/Authors" ], [ "ICLR.cc/2025/Conference/Submission3372/Reviewer_2kAh" ], [ "ICLR.cc/2025/Conference/Submission3372/Reviewer_Pbo8" ], [ "ICLR.cc/2025/Conference/Submission3372/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission3372/Area_Chair_E21Q" ], [ "ICLR.cc/2025/Conference/Submission3372/Authors" ], [ "ICLR.cc/2025/Conference/Submission3372/Authors" ], [ "ICLR.cc/2025/Conference/Submission3372/Authors" ], [ "ICLR.cc/2025/Conference/Submission3372/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3372/Reviewer_1gh9" ], [ "ICLR.cc/2025/Conference/Submission3372/Authors" ], [ "ICLR.cc/2025/Conference/Submission3372/Authors" ], [ "ICLR.cc/2025/Conference/Submission3372/Reviewer_2kAh" ], [ "ICLR.cc/2025/Conference/Submission3372/Authors" ], [ "ICLR.cc/2025/Conference/Submission3372/Authors" ], [ "ICLR.cc/2025/Conference/Submission3372/Authors" ], [ "ICLR.cc/2025/Conference/Submission3372/Authors" ], [ "ICLR.cc/2025/Conference/Submission3372/Reviewer_Gzkr" ] ], "structured_content_str": [ "{\"comment\": \"Dear Reviewer,\\n\\nThank you again for your valuable feedback and thoughtful comments. We would like to kindly remind you that the deadline of the discussion period is approaching. If you have any additional questions, concerns, or clarifications you would like us to address, we would be more than happy to provide prompt responses.\\n\\nThank you for your attention, and we look forward to hearing from you!\"}", "{\"title\": \"General Response to All Reviewers\", \"comment\": \"We thank all reviewers for recognizing the presentation of our paper is **clear** (Reviewer 2kAh, Gzkr, Pbo8), **easy to follow** (Reviewers 2kAh), and **commendable for the writing quality and readability** (Reviewer 1gh9). Meanwhile, they have **acknowledged our contributions** in proposing a new but meaningful task and new benchmarks (Reviewer Gzkr). Besides, our proposed method is **easy to adopt** (Reviewer 2kAh), **correct** (Reviewer Gzkr), **resource-efficient** (Reviewer 1gh9) with **good motivations** (Reviewer Pbo8). 
And it has demonstrated its effectiveness with **satisfying results** (Reviewer 2kAh).\\n\\nWe appreciate their suggestions and comments and carefully revise our paper accordingly. Our major revisions include the following four aspects:\\n\\n1. In the Introduction section, we added the clarification of the measurement of event complexity (Line 107), and updated the last sample in Figure 1(c).\\n\\n2. In the Related Work section, we included the discussion and comparison with the prior work (Lines 161 - 163).\\n\\n3. In the Experiments section, we provided more quantitative comparisons with diverse evaluation metrics (Lines 351 - 370, and Table 1). We also updated the qualitative comparisons with more diverse visualization samples (Figure 4). We updated the new results for the event-subject combination (Line 484, Lines 514 - 517, and Figure 6). We further updated the user study with 20 more samples for a comprehensive evaluation (Lines 525 - 526, and Table 2).\\n \\n4. In the Appendix:\\n \\n - We moved the limitation and potential negative societal impact into Sec. D.\\n\\n - We added the exploration on attribute generation in Sec. C.\\n\\n - We moved more qualitative comparison results to Sec. E.\\n\\n\\nPlease note that we colorized (blue) the revisions in the new version of the paper.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nAs the deadline for the author-reviewer discussion period approaches, we would like to confirm whether our response has adequately addressed your concerns. If there are any remaining issues or if you require further clarification, please do not hesitate to let us know.\\n\\nThank you!\"}", "{\"comment\": \"Dear Reviewer,\\n\\nAs the deadline for the author-reviewer discussion period approaches, we would like to confirm whether our response has adequately addressed your concerns. 
If there are any remaining issues or if you require further clarification, please do not hesitate to let us know.\\n\\nThank you!\"}", "{\"comment\": \"Thanks for your concern. We are willing to address all the mentioned questions.\\n\\n## Q1: About event complexity\\n\\n**A1:** We want to emphasize that currently, there is no standard definition or quantification for the complexity of a \\\"event\\\". When conceptualizing an \\\"event\\\" as a graph, it comprises nodes (entities) and edges (representing the various roles and multiple relationships between entities). In this context, acknowledging that both the number of nodes and edges influence the overall complexity of the graph, we used the number of entities to measure the event complexity as a preliminary exploration.\\n\\nHowever, the absence of ground truth data at the graph level for each image \\u2014 along with the lack of annotations detailing entity roles, relationships, and interactions \\u2014 makes it challenging to prove or evaluate the absolute correlation between the number of entities and event complexity.\\n\\n\\nMoreover, compared with existing action or interaction customization works that only focus on one or two entities, the number of entities serves as an intuitive starting point for exploring the event complexity, instead of a definitive measure.\\n\\nWe have provided more qualitative comparison results in **Appendix E**. Specifically, all samples are sorted by the number of entities. As the number of entities increases, we observe more pronounced appearance leakage and failures in generating relationships or interactions within baseline models. In contrast, our FreeEvent method continues to maintain the quality of customization, further demonstrating its effectiveness. We hope these examples and results offer valuable insights.\\n\\n\\n## Q2: About the handling of entity.\\n\\n**A2:** We need to first emphasize that the task settings of the two methods are totally different.\\n\\n\\n1. 
For event customization, the entity nouns are directly given by the users as the input prompt. For example, as shown in Figure 1(c), given the reference image with \\\"a woman and a man are boxing\\\", if the users want to customize it into \\\"a Spiderman and a Batman are boxing\\\", then they can give the target prompt as \\\"Spiderman, Batman\\\". This is also similar to the setting of other action or relation customization works, as shown in Figure 1(b): the users directly give the target entities they want to generate as nouns, e.g., panda, monkey. \\n\\n2. For multi-modal image generation, it takes different input modalities (e.g., language, audio, and vision). Specifically, to better model each modality, ImgAny proposes to represent each modality as entity nouns for further feature extraction. For example, given the input audio \\\"meow\\\", ImgAny matches the audio feature with the text features of all the entity nouns in the vocabulary to obtain the most pertinent entity word -- \\\"cat\\\". It then uses the text feature of \\\"cat\\\" for further feature extraction and fusion.\\n\\n\\nIn summary, for FreeEvent, the entity nouns are provided as input by the users. In contrast, for ImgAny, the entity nouns are obtained as intermediate output during a specific step of the process. Therefore, the two branches for entity nouns serve completely different purposes and contexts, which makes a direct comparison inappropriate.\\n\\nWe hope this answers your questions. Thank you again for your valuable feedback, and please don\\u2019t hesitate to let us know if there are follow-up questions.\"}", "{\"title\": \"Response to Reviewer 1gh9 (1/3)\", \"comment\": \"Thank you for the detailed comments. We are willing to address all the mentioned weaknesses and questions.\\n\\n## Q1: Overclaim of the task.\\n> The manuscript introduces the 'event-customized image generation task' as a novel contribution. 
However, this task appears to have been previously addressed in the work titled \\\"Learning Disentangled Identifiers for Action-Customized Text-to-Image Generation\\\" presented at CVPR 2024.\\n\\n\\n**A1:** Thanks for your concerns. Actually, we have already discussed the mentioned action customization work in our introduction section, noting its significant limitations in addressing the ``event-customized image generation task\\\". Below, we aim to further emphasize and analyze its primary limitations.\\n\\n1. **Simplified Customization and Unconvincing Evaluations.** This action customization work only focuses on the basic actions of a single person. And it only provided 8 actions for evaluation, which is far from proving its capability to cover a wide range of actions in the real world. Besides, it does not explore or provide any results of more complex and diverse actions that involve multiple humans, let alone the interactive actions between humans, animals, and objects. Thus, based on the narrow focus on action customization and evaluation results, coupled with the absence of publicly available code for further validation, we believe it faces significant limitations in addressing complex actions or interactions between multiple humans, animals, and objects.\\n\\n2. **Insufficient Data.** Further considering its proposed method, it learns identifier tokens to represent specific actions. However, for each action, its training-based process requires a set of reference images (e.g., 10 images) paired with corresponding textual descriptions across different entities. Unfortunately, each action is highly unique and distinctive, i.e., gathering images that depict the exact same action is challenging. As shown in **Figure 1(b)**, there are still significant differences in the same action (e.g., *handstand*) between different reference images, which thus compromises the accuracy of learned tokens, leading to inconsistencies in action between generated images. 
As shown, the generated *handstand* poses of \\\"Spiderman\\\" and \\\"panda\\\" are different. Meanwhile, it will be more difficult to gather the example images for more complex events, e.g., the reference images shown in **Figure 1(c)**. This insufficient data issue for identical actions has severely limited the practicality and generalizability of this method. \\n\\nTherefore, considering the setting, evaluation, and methodology of this work, it still faces significant limitations in addressing complex actions or interactions, both in terms of effectiveness and practicality. \\n\\nIn contrast, our \\\"event-customized image generation task\\\" focuses on the customization of \\\"event\\\", including diverse actions, poses, relations, and interactions over different entities (e.g., humans, animals, and objects). And it only needs one single reference image, which also eliminates the need for collecting \\\"exactly the same\\\" example images. Thus, our proposed task addresses the limitations of existing action customization by optimizing both the scope and settings of customization. This advancement enables customized image generation to extend to more complex real-world scenes, making it a novel and well-motivated contribution.\"}", "{\"summary\": \"This paper introduces a new task, event-customized image generation, which aims at accurately capturing the complex event and generating customized images with various target entities. Meanwhile, a training-free event customization method, FreeEvent, is proposed to solve the event-customized image generation task. The FreeEvent consists of two paths alongside the general diffusion denoising process, i.e., the entity switching path and the event transferring path. 
The entity switching path applies cross-attention guidance and regulation for target entity generation while the event transferring path injects the spatial feature and self-attention maps from the reference image to the target image for event generation.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. Clear limitation analysis of existing works, i.e., simplified customization and insufficient data.\\n2. Good motivations for addressing the proposed task.\", \"weaknesses\": \"1. Unclear Definition of \\\"Event-Customized Image Generation\\\": The paper\\u2019s definition of \\\"event-customized image generation\\\" lacks clarity, especially regarding the complexity and scope of events. Although the paper explains entity interactions, it does not address attributes adequately. Additionally, there is no quantitative measure for defining the complexity level that qualifies as an event, leaving ambiguity in how \\\"event\\\" is operationalized.\\n\\n2. Lack of Comparison with Related Work: The training-free framework and emphasis on entity and attribute handling appear to align closely with prior work, specifically the ImageAnything framework [A]. The paper fails to compare itself with ImageAnything in terms of motivation, methodology, and structural framework, missing an opportunity to clarify its novelty and improvements over similar approaches.\\n\\n3. Limited Information on Similarity Metrics: The similarity metric used for assessments in Table 1 is not specified, leaving readers uncertain about the criteria for evaluation. Without this information, the results may be hard to interpret, limiting the reproducibility and transparency of the evaluation.\\n\\n4. Insufficient Performance Metrics: The paper could enhance its assessment by including standard image generation metrics, such as FID (Fr\\u00e9chet Inception Distance) and CLIP scores, for a more comprehensive comparison. 
Relying on a limited set of metrics may not provide a well-rounded evaluation, which could affect the perceived robustness of the proposed method.\", \"questions\": \"1. The definition of the \\\"event-customized image generation\\\" is somehow not clear. \\\"Given a single reference image, we define the event as all actions and poses of each single entity, and their relations and interaction between different entities.\\\" The entity part is fully addressed, however, how about the attribute part? Is there any quantitative definition for how complex can leads to an event?\\n2. The training free framework and the focus on entity and attribute is somehow similar to a prior work, Imageanything[A]. Please give clear discussion and comparison with this work regarding motivation, methodology, and frameworks.\\n[A] Lyu Y, Zheng X, Wang L. Image anything: Towards reasoning-coherent and training-free multi-modal image generation[J]. arXiv preprint arXiv:2401.17664, 2024.\\n3. What kind of similarity metric is used to assess the methods in Tab.1?\\n4. Could the author include more metrics to assess the proposed FreeEvent and the existing methods, such as FID, CLIP score?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer 2kAh (1/3)\", \"comment\": \"Thank you for the detailed comments. We are willing to address all the mentioned weaknesses and questions.\\n\\n## Q1: The task setting hinders the diversity of generated images.\\n> The task setting of event-customized image generation raises the following concerns: The poses of each entity, as well as the overall spatial configuration of the generated image is restricted to maintain identical with the reference image, which hinders the diversity of generated images.\\n\\n**A1:** Thanks for your concerns. 
While the task setting of event-customized image generation aims to capture the identical poses and spatial configuration from the reference image, the diversity of the generated images can be ensured from different aspects:\\n1. Various target entities. The target images can be generated with various combinations of diverse target entities (e.g., animals, objects, and characters). We provided more visualization results in **Appendix E**, and reorganized the sample order to show the diversity of generated images based on the same reference image.\\n2. Different backgrounds and styles. As the ablation results are shown in **Figure 5(b)**, the target images can be generated with extra content for the background and style by changing the target prompt. \\n3. Combination of subject customization. The event customization can be further combined with subject customization to generate target entities with user-specified concepts. We updated and provided more visualization results in **Figure 6**, which includes the subject customization of diverse concepts (e.g., celebrities, regular objects, and background images).\\n\\n\\n\\n## Q2: Interaction semantic and poses preservation are influenced in some cases.\\n> Further, even if the poses are successfully maintained, the interaction semantic may be influenced. In the example of ``skeleton, statue\\\" in Figure (1), when the laptop is changed to a book, the interaction semantic between human and object is changed. Is it against the proposed definition of task setting?\\n\\n> In some cases, the interaction semantic and poses preservation requirement is conflicted, as mentioned in weaknesses. Can the author explain the priority of such requirements?\\n\\n**A2:** Thanks for your concerns. During event customization, the reference entities and their corresponding target entities may sometimes exhibit semantic differences (e.g., changing ''laptop'' to ''book'' in Figure 1). 
In such scenarios, in order to generate satisfying target entities, the preservation of interaction semantics and poses may sometimes be influenced. We need to emphasize that this does not conflict with or violate the proposed definition of the task setting. Instead, it ensures the balance between transferring the reference event and generating reasonable entities when dealing with more complex and diverse target entities. Here we give more examples for detailed clarification:\\n\\n- As the reference image with ''a cat painted on a rock\\\" shown in Appendix E, Figure 11, when changing the ''rock\\\" to a ''book\\\" (row 1), the interaction semantic between ''book\\\" and ''sheep\\\" was preserved, while the structures were influenced since books are usually rectangular in shape.\\n\\n- As the reference image with ''a woman holding two apples\\\" shown in Appendix E, Figure 12, when changing the ''woman\\\" into a ''robot\\\" (row 8), the pose was preserved, while the interaction semantic between ''robot\\\" and ''cake\\\" was influenced since the robot is equipped with a mechanical claw instead of human fingers. \\n\\nAlthough the interaction semantics or structures in both cases are affected, their overall customization effectiveness remains intact. This also ensures that the generated target entities are more reasonable and align better with common sense. Therefore, we do not explicitly restrict the priority of preserving interaction semantics and poses. Furthermore, these cases also highlight the robustness of our method, which ensures the trade-off between event preservation and entity generation to generate more diverse and interesting target images.\\n\\n\\n## Q3: Applications of the newly proposed task.\\n> Can the author list some specific applications in reality of the new-proposed task setting to convince me of its value?\\n\\n**A3:** Thanks for pointing this out. 
Event customization can facilitate many valuable applications like artistic creation and advertisement production. Specifically:\\n\\n- Customized movie and animation making. For example, based on the reference comic (e.g., the story of Romeo and Juliet), create attractive target comics with various combinations of new characters (e.g., Spiderman and Batman). \\n\\n- Personalized photo production. For example, replace King Kong and Godzilla in the ''King Kong vs. Godzilla\\\" movie poster with your own dog and cat. Or create a photo with your best friend in the same pose as your childhood group photo, even if you can't see each other now.\"}", "{\"summary\": \"This paper proposes a new task called Event-Customized Image Generation, which aims to not only control subjects, but also customize all specific actions, poses, relations, or interactions between different entities in the scene. It then designs a training-free approach, altering the cross-attention within the U-Net to enable target subject generation, and utilizing the original spatial features and self-attention to enable entity transfer. Experiments on two datasets validated the effectiveness.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The proposed training-free approach is easy to adopt and can generate satisfying results.\", \"The paper is clear and easy to follow.\"], \"weaknesses\": \"1. The task setting of event-customized image generation raises the following concerns: The poses of each entity, as well as the overall spatial configuration of the generated image, are restricted to remain identical to the reference image, which hinders the diversity of generated images. Further, even if the poses are successfully maintained, the interaction semantic may be influenced. In the example of \\u201cskeleton, statue\\u201d in Figure(1), when the laptop is changed to a book, the interaction semantic between human and object is changed. 
Is it against the proposed definition of the task setting? Further, can this approach be integrated with other components to enable generation on a specific background image?\\n2. For method design: (1) The spatial features and self-attention maps of reference images are adopted to inject event information; how to ensure such direct injection can prevent the leakage of subject information of the reference image. (2) The author claims that by equipping with subject-customized image generation approaches, it can generate entity-subject customized images by injecting target concept identifier tokens; can this approach be integrated with more advanced customization approaches to enable more flexible customization of regular concepts rather than just some celebrities visualized in Figure 6. \\n3. In Experiment, (1) The task setting for quantitative evaluation is confusing. How to reproduce the reference image if all the entities within the image stay in the same pose as the input condition? Providing one visualized example would be helpful to understand it. (2) Quantitative evaluation only adopts a retrieval-based experiment, which is not convincing enough, and the user study for qualitative evaluation only adopts 30 samples. (3) Further, the interaction semantic of generated images is not verified across all the experiments. Considering that HICO-DET and SWIG both contain annotations of interaction, the generated images should also be evaluated on the corresponding interaction detection performance.\", \"questions\": \"1. Can the author list some specific applications in reality of the new-proposed task setting to convince me of its value?\\n- In some cases, the interaction semantic and pose preservation requirements are conflicted, as mentioned in weaknesses. Can the author explain the priority of such requirements?\\n2. Can the proposed approach enable more flexible content generation, like specifying the background image? 
Can the entity-subject customization ability be expanded to more diverse subject customization other than only some celebrities?\\n3. Can the evaluation setting of the retrieval-based experiment be more clearly explained with some visualized examples?\\n4. The author should provide more experiments to validate the effectiveness of the approach, such as reporting the performance on the interaction detection task.\", \"flag_for_ethics_review\": \"['Yes, Privacy, security and safety']\", \"details_of_ethics_concerns\": \"The generated images may raise security and safety concerns, such as abusing celebrity information.\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for the extra information and experimental results provided by the authors! The responses address some of my concerns; however, there are still some questions:\\n\\n1. \\\"Therefore, in this paper, we primarily measure the event complexity using the total number of entities.\\\" Could the authors further give some instances or results to prove that different numbers of entities influence the event complexity?\\n\\n2. \\\"1) Although both methods have a unique branch for handling the entity, for ImgAny, the entity nouns are retrieved from the vocabulary to represent the multi-modal input. For FreeEvent, the entity nouns are directly given by the target prompt as the generation target. \\\" Which one should be the better choice? Can the authors give more discussion on this problem?\"}", "{\"comment\": \"Dear Reviewer,\\n\\nAs the deadline for the author-reviewer discussion period approaches, we would like to confirm whether our response has adequately addressed your concerns. If there are any remaining issues or if you require further clarification, please do not hesitate to let us know.\\n\\nThank you!\"}", "{\"metareview\": \"This work received two positive and two negative scores. 
After checking the paper, the AC is still concerned about the limited technical novelty of the proposed application.\\n\\nAs the author said, \\\"While these observations have been widely recognized in previous works, we are the first to integrate them to address this new task in a training-free manner. This approach demonstrates a thoughtful analysis of the task and a strategic application of existing technologies.\\\" This indicates that this work seems to be an extension of existing techniques for the newly proposed task. \\n\\nMoreover, the definition of \\\"event-customized\\\" is unclear. \\n\\nEven though the AC acknowledges that this paper is well written and its experiments are sufficient, this work does not meet the bar for a top conference.\", \"additional_comments_on_reviewer_discussion\": [\"## Points Raised by Reviewers\", \"1. **Unclear Definition of \\\"Event-Customized Image Generation\\\"**:\", \"- Ambiguity in defining \\\"event complexity\\\" and lack of clarity in scope.\", \"- Missing quantitative metrics for measuring complexity levels.\", \"2. **Methodological Novelty**:\", \"- Claims of novelty in the proposed task overlap with prior works on action customization (e.g., CVPR 2024 paper).\", \"- Event Switching Path and Event Transferring Path are perceived as adaptations of established methods like Attend-and-Excite and MasaCtrl.\", \"3. **Comparative Analysis and Metrics**:\", \"- Limited comparisons with related works (e.g., ImageAnything framework).\", \"- Missing standard metrics like FID, CLIP scores, and interaction detection performance.\", \"4. **Qualitative and Quantitative Validation**:\", \"- Concerns over insufficient diversity in qualitative examples.\", \"- Limited evaluation metrics and small-scale user studies.\", \"5. **Real-world Applications**:\", \"- Unclear practical utility of maintaining fixed spatial configurations.\", \"- Questions about the flexibility of event customization for diverse scenarios.\", \"## Author Responses and Revisions\", \"1. 
**Clarification of \\\"Event Complexity\\\"**:\", \"Defined complexity in terms of entity count and their interactions.\", \"Revised the introduction (Line 107) and provided examples in Appendix C.\", \"2. **Methodological Justifications**:\", \"Highlighted the novelty in integrating spatial features and attention maps for event customization.\", \"Improved cross-attention regulation to reduce appearance leakage.\", \"3. **Enhanced Metrics and Comparisons**:\", \"Added benchmarks with metrics like FID, CLIP-I, and CLIP-T scores.\", \"Conducted verb detection experiments using GSRTR, showing superior interaction semantics preservation.\", \"4. **Expanded Validation**:\", \"Increased user study samples to 50 and added more qualitative results in Appendix E.\", \"Demonstrated robustness in handling diverse target entities and complex scenarios.\", \"5. **Practical Applications**:\", \"Illustrated use cases like customized comic creation and personalized poster design.\", \"Showcased results with specified background and subject customization.\", \"## Final Decision Rationale\", \"**Strengths**:\", \"Clear task definition with reasonable scope for further exploration.\", \"Training-free approach demonstrates efficiency and practicality.\", \"Extensive revisions addressed most concerns.\", \"**Weaknesses**:\", \"Methodological novelty remains incremental, relying heavily on existing techniques.\", \"Persistent ambiguity in task definition and real-world utility.\", \"Limited diversity in results and small-scale experiments impact generalizability.\", \"Despite the revisions, concerns about originality and practical contributions outweighed the improvements. The decision was to **reject**, as the work requires further innovation and validation for acceptance.\"]}", "{\"title\": \"Response to Reviewer Gzkr\", \"comment\": \"Thank you for the detailed comments. 
We are willing to address all the mentioned weaknesses and questions.\\n\\n## Q1: Whether the technical contributions are limited.\\n> The major issue to argue in this paper is the usage of existing technical contributions for this new task. No theoretical justifications and no further big novel ideas. Whether the technical contributions are limited is worth discussion. The experimental results proved the evaluations. The new task and datasets are also worth reporting.\\n\\n**A1:** Thanks for your concerns. We made three contributions in this paper: 1) The new and meaningful event-customized image generation task. 2) The first training-free method for event customization. 3) Two evaluation benchmarks for event-customized image generation. We appreciate your affirmation of our contributions to the new task and benchmarks. Specifically, for our training-free method FreeEvent, we provide more discussion below.\\n\\n- **Motivation.** Based on the two main components of the reference image, i.e., entity and event, we proposed to decompose the event customization into two parts: 1) Switching the entities in the reference image to target entities. 2) Transferring the event from the reference image to the target image. Inspired by the observation that spatial features and attention maps have been utilized to control the layout, structure, and appearance in text-to-image generation, we further designed the two corresponding paths to address the two parts. While these observations have been widely recognized in previous works, we are the first to integrate them to address this new task in a training-free manner. This approach demonstrates a thoughtful analysis of the task and a strategic application of existing technologies. \\n\\n- **Improvements.** We also made several specific improvements to better address the event customization task. 
1) For entity switching, besides the cross-attention guidance, we further regulate the cross-attention map of each entity to avoid the appearance leakage between each target entity. 2) For event transferring, in contrast to previous works [A, B] that perform DDIM inversion on reference images, we directly use forward diffusion. This further reduces the appearance leakage from the reference image and saves the inversion cost and additional model inference time.\\n\\nWhile FreeEvent does incorporate some existing methods, its design is rooted in a thoughtful analysis of the new task and a strategic application of existing insights. Furthermore, we also introduced specific improvements, enabling it to address this new task more effectively and efficiently. FreeEvent has proved its effectiveness and efficiency in a wide range of experiments, beating existing controllable generation, image editing, and customization works. As the first work in this direction, we hope our method can unveil new possibilities for more complex customization, meanwhile serving as a challenging baseline for future works.\\n\\n[A] N Tumanyan, et al. Plug-and-play diffusion features for text-driven image-to-image translation. CVPR, 2023.\\n\\n[B] M Cao, et al. Masactrl: Tuning-free mutual self-attention control for consistent image synthesis and editing. ICCV, 2023.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThank you again for your valuable feedback and thoughtful comments. We would like to kindly remind you that the deadline of the discussion period is approaching. If you have any additional questions, concerns, or clarifications you would like us to address, we would be more than happy to provide prompt responses.\\n\\nThank you for your attention, and we look forward to hearing from you!\"}", "{\"title\": \"Response to Reviewer 1gh9 (2/3)\", \"comment\": \"## Q2: Lack of Methodological Novelty.\\n\\n> The two pathways proposed in the paper appear to primarily combine existing methods. 
The Entity Switching Path employs a strategy similar to Attend-and-Excite for controlling content generation in specific locations, while the Event Transferring Path largely follows approaches resembling MasaCtrl. These methods are widely established and have been extensively discussed across numerous publications, which may limit the perceived innovation in the paper\\u2019s methodology.\\n\\n**A2:** Thanks for your concerns. We want to first emphasize that we made three folds of contributions in this paper: 1) The new and meaningful event-customized image generation task. 2) The first training-free method for event customization. 3) Two evaluation benchmarks for event-customized image generation. Specifically, for our training-free method FreeEvent, we provide more discussion below.\\n\\n- **Motivation.** Based on the two main components of the reference image, i.e., entity and event, we proposed to decompose the event customization into two parts: 1) Switching the entities in the reference image to target entities. 2) Transferring the event from the reference image to the target image. Inspired by the observation that the spatial features and attention maps have been utilized to control the layout, structure, and appearance in text-to-image generation, we further designed the two corresponding paths to address the two parts. While these observations have been widely recognized in previous works, we are the first to integrate them to address this new task in a training-free manner. This approach demonstrates a thoughtful analysis of the task and a strategic application of existing technologies. \\n\\n- **Improvements.** We also made several specific improvements to better address the event customization task. 1) For entity switching, besides the cross-attention guidance, we further regulate the cross-attention map of each entity to avoid the appearance leakage between each target entity. 
2) For event transferring, in contrast to previous works [A, B] that perform DDIM inversion on reference images, we directly use forward diffusion. This further reduces the appearance leakage from the reference image and saves the inversion cost and additional model inference time. \\n\\nWhile FreeEvent does incorporate some existing methods, its design is rooted in a thoughtful analysis of the new task and a strategic application of existing insights. Furthermore, we also introduced specific improvements, enabling it to address this new task more effectively and efficiently. FreeEvent has proved its effectiveness and efficiency in a wide range of experiments, beating existing controllable generation, image editing, and customization works. As the first work in this direction, we hope our method can unveil new possibilities for more complex customization, meanwhile serving as a challenging baseline for future works.\\n\\n[A] N Tumanyan, et al. Plug-and-play diffusion features for text-driven image-to-image translation. CVPR, 2023.\\n\\n[B] M Cao, et al. Masactrl: Tuning-free mutual self-attention control for consistent image synthesis and editing. ICCV, 2023.\"}", "{\"title\": \"Response to Reviewer 2kAh (2/3)\", \"comment\": \"## Q4: How to prevent the leakage of subject information of reference image.\\n> For Method design : (1) the spatial features and self-attention maps of reference images are adopted to inject event information, how to ensure such direct injection can prevent the leakage of subject information of reference image.\\n\\n**A4:** Thanks for your concerns. We prevent the leakage of subject information of reference images from two aspects.\\n\\n1. Firstly, we only perform the spatial feature injection in the first decoder of U-Net and perform self-attention map injection in the early time steps. These configurations can help to obtain rich image layout and structure information meanwhile mitigating the leakage of subject information.\\n\\n2. 
Secondly, in contrast to previous works that perform DDIM inversion to obtain $z^{R}_t$, we directly use forward diffusion. This also further reduces the appearance leakage from the reference image. As shown in Figure 4 and Appendix E, the inversion-based methods PnP and MAG-Edit all struggled with appearance leakage, while our method successfully prevented it.\\n\\n## Q5: More flexible customization of regular concepts.\\n> For method design: (2) The author claims that by equipping with subject-customized image generation approaches, it can generate entity-subject customized images by injecting target concept identifier tokens; can this approach be integrated with more advanced customization approaches to enable more flexible customization of regular concepts rather than just some celebrities visualized in Figure 6.\\n\\n> Can the entity-subject customization ability be expanded to more diverse subject customization other than only some celebrities?\\n\\n**A5:** Thanks for your suggestion. We have provided more results in **Figure 6**. Our method can certainly be combined with subject customization to handle more regular concepts, and we took the subject customization model Break-A-Scene [A] as an exploration. Specifically, Break-A-Scene can extract multiple concepts from a single image, also denoted by concept identifier tokens. As the updated results are shown in Figure 6, our method can enable entity-subject customization to diverse regular concepts (e.g., the cup, shell, and panda), and these regular concepts can also be combined with the celebrity concepts to generate creative images.\\n\\n[A] Avrahami O, et al. Break-a-scene: Extracting multiple concepts from a single image. 
SIGGRAPH Asia 2023.\\n\\n\\n## Q6: Generation on specific background image.\\n> Further, can this approach be integrated with other components to enable generation on specific background image?\\n\\n> Can the proposed approach enable more flexible content generation, like specify the background image?\\n\\n**A6:** Thanks for your suggestion. We have provided more results in **Figure 6**. We made the exploration of specifying background images by taking it as a special concept in subject customization. We also use the Break-A-Scene [A] to learn identifier tokens for the background images. As the updated results are shown in Figure 6, our method successfully enabled the generation of specific background images. And such entity-subject customization can also further combine the concept of the background image (e.g., the beach) with other regular concepts (e.g., the panda) to generate more flexible and diverse content.\\n\\n[A] Avrahami O, et al. Break-a-scene: Extracting multiple concepts from a single image. SIGGRAPH Asia 2023.\\n\\n\\n## Q7: The task setting for quantitative evaluation is confusing.\\n> In Experiment, (1) The task setting for quantitative evaluation is confusing. How to reproduce the reference image if all the entities within the image stay the same pose as input condition. Providing one visualized example would be helpful to understand it.\\n\\n> Can the evaluation setting of retrieval-based experiment be more clearly explained with some visualized examples?\\n\\n**A7:** Thanks for your concerns. 
We provided the details of the retrieval-based experiment in **Appendix B**, including the visualized example of the SWiG-Event sample in Figure 7(a) and the evaluation process of image generation and image retrieval in Figure 7(b).\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper presents FreeEvent, a novel approach to customized image generation, targeting complex event-specific scenes rather than just entity appearances or basic interactions. FreeEvent tackles event-customized image generation, where an \\\"event\\\" includes detailed actions, poses, and relationships between entities. FreeEvent introduces two innovative pathways: the Entity Switching Path for entity-specific guidance and the Event Transferring Path for spatial feature and attention transfer.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": [\"Clarity and Readability: The writing quality of the manuscript is commendable, making the intent and message of the paper easy to comprehend.\", \"Resource-Efficient Methodology: The proposed method is training-free, which is notably advantageous in terms of computational resource requirements, making it accessible and feasible even in resource-constrained environments.\"], \"weaknesses\": [\"Overclaim: The manuscript introduces the 'event-customized image generation task' as a novel contribution. However, this task appears to have been previously addressed in the work titled \\\"Learning Disentangled Identifiers for Action-Customized Text-to-Image Generation\\\" presented at CVPR 2024.\", \"Lack of Methodological Novelty: The two pathways proposed in the paper appear to primarily combine existing methods. The Entity Switching Path employs a strategy similar to Attend-and-Excite for controlling content generation in specific locations, while the Event Transferring Path largely follows approaches resembling MasaCtrl. 
These methods are widely established and have been extensively discussed across numerous publications, which may limit the perceived innovation in the paper\\u2019s methodology.\", \"Suboptimal Qualitative Results: The qualitative results presented show room for improvement. Specifically, I noticed that the retention of person identifiers is weak in Figures 1c and 6, which raises questions about the underlying cause; the authors should clarify this aspect further. Additionally, the prompt \\u201cP: skeleton, statue, monkey, book\\u201d appears four times throughout the paper. Reducing the repetition of this sample would likely enhance the diversity and impact of the results shown.\"], \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer Pbo8 (3/3)\", \"comment\": \"## Q3: Limited Information on Similarity Metrics.\\n> The similarity metric used for assessments in Table 1 is not specified, leaving readers uncertain about the criteria for evaluation. Without this information, the results may be hard to interpret, limiting the reproducibility and transparency of the evaluation.\\n\\n> What kind of similarity metric is used to assess the methods in Tab.1?\\n\\n**A3:** Thanks for your concerns. As mentioned in Sec 4.2, we used the CLIP score. Specifically, we extracted the image feature of each image through the CLIP visual encoder and calculated the cosine similarities for image retrieval. We have specified this more clearly in the **new manuscript (Line 353 - 355)**.\\n\\n## Q4: Insufficient Performance Metrics.\\n> The paper could enhance its assessment by including standard image generation metrics, such as FID (Fr\\u00e9chet Inception Distance) and CLIP scores, for a more comprehensive comparison. 
Relying on a limited set of metrics may not provide a well-rounded evaluation, which could affect the perceived robustness of the proposed method.\\n\\n> Could the author include more metrics to assess the proposed FreeEvent and the existing methods, such as FID, CLIP score?\\n\\n**A4:** Thanks for your suggestions. We have provided more metrics to validate the effectiveness of our methods.\\n\\n| | | | | | | |\\n|------------|-----|-----|------|--------|--------|-----|\\n| Model | Top-1 $\\\\uparrow$| Top-5 $\\\\uparrow$ | Top-10 $\\\\uparrow$ | CLIP-I $\\\\uparrow$| CLIP-T $\\\\uparrow$| FID $\\\\downarrow$|\\n| ControlNet | 10.66 | 23.98 | 31.28 | 0.6009 | 0.2198 | 70.45 |\\n| BoxDiff | 5.58 | 14.52 | 19.42 | 0.5838 | 0.2153 | 68.49 |\\n| **FreeEvent** | **34.10** | **62.04** | **71.82** | **0.7044** | **0.2238** | **29.05** |\\n\\n- First, for standard image generation metrics, we reported the FID (Fr\\u00e9chet Inception Distance) score, the CLIP-I score, and the CLIP-T score. We use the CLIP-I score to evaluate the image alignment of generated images with their reference images. And use the CLIP-T score to evaluate the text alignment of the generated images with text prompts. As shown in the above table, our FreeEvent achieves superior performance over baselines across all metrics, which indicates our method can generate images with better qualities and alignment with both the reference images and texts.\\n\\n- Furthermore, we also reported the verb detection performance to validate the interaction semantics of the generated images (the Top-K represents the top-k detection accuracy). Specifically, we utilized the verb detection model GSRTR [B] which was trained on the SWIG dataset to detect the verb class of each generated image, and then calculated the detection accuracy based on the annotations of the reference images (i.e., whether the generated images and their reference images have the same verb class). 
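For concreteness, both the CLIP-based retrieval metric and the verb-detection Top-K numbers described here reduce to a top-k accuracy over pairwise scores. The following is a minimal sketch of the retrieval variant, assuming image features (e.g., from the CLIP visual encoder) have already been extracted and that generated image i is paired with reference image i; all function and variable names are illustrative, not taken from the authors' code.

```python
import math

def cosine(u, v):
    # Cosine similarity between two feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def top_k_accuracy(gen_feats, ref_feats, k):
    """Fraction of generated images whose paired reference image is
    among the k most cosine-similar references."""
    hits = 0
    for i, g in enumerate(gen_feats):
        sims = [cosine(g, r) for r in ref_feats]
        # Reference indices ranked by descending similarity.
        ranked = sorted(range(len(sims)), key=lambda j: -sims[j])
        if i in ranked[:k]:
            hits += 1
    return hits / len(gen_feats)
```

With k set to 1, 5, and 10 this yields the Top-1/5/10 columns; replacing the cosine ranking with a verb classifier's class scores would give the detection-accuracy variant instead.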
As shown in the above table, our FreeEvent achieves superior performance over baselines, which indicates our method can better preserve the interaction semantics of the generated images.\\n\\nWe hope these metrics can provide a thorough evaluation of our method. We have revised our **quantitative evaluation section in Sec 4.2** to add the above results.\\n\\n[B] Junhyeong Cho, et al. Grounded Situation Recognition with Transformers. BMVC, 2021.\"}", "{\"title\": \"Response to Reviewer 1gh9 (3/3)\", \"comment\": \"## Q3: Suboptimal Qualitative Results.\\n\\n> The qualitative results presented show room for improvement. Specifically, I noticed that the retention of person identifiers is weak in Figures 1c and 6, which raises questions about the underlying cause; the authors should clarify this aspect further. Additionally, the prompt \\u201cP: skeleton, statue, monkey, book\\u201d appears four times throughout the paper. Reducing the repetition of this sample would likely enhance the diversity and impact of the results shown.\\n\\n**A3:** Thanks for your concerns. We have updated the results in **Figure 1(c)** and **Figure 6**. For event-subject customization, we combine our framework with subject customization methods to generate target entities with user-specified subjects, i.e., represented by identifier tokens. Generally, two main elements contribute to promising subject customization: 1) Enough example images (e.g., 5 images) for the given subject. 2) An effective training process for learning the corresponding identifiers. Since we only used one example image for each subject and took the preliminary customization work DreamBooth as an easy exploration, the learned identifier tokens are not \\\"strong\\\" enough to represent the characteristics of each subject, thus leading to the suboptimal results shown in previous Figure 1(c) and Figure 6.\\n\\nNaturally, this can be further improved by employing more advanced subject customization methods. 
We took the Break-A-Scene [A] model to make an improvement. Specifically, Break-A-Scene introduces enhanced training processes for learning better identifier tokens, and it can extract multiple concepts from a single image. As the updated results are shown in **Figure 1(c)** and **Figure 6**, we effectively achieved the Event-Subject customization with better subject customization results: 1) For the person subjects, we can now better preserve their characteristics (e.g., facial features, hairstyles, clothing textures, and colors). 2) We also enable more flexible customization of diverse regular concepts (e.g., the cup, shell, and panda) and the concept of the background image (e.g., the beach). Meanwhile, we can also combine these different concepts to generate more diverse and creative images. Notably, we do not modify any part of our event customization framework. In summary, the effectiveness of subject customization depends on the methods used for subject customization itself. Employing more advanced methods naturally yields better results, further demonstrating the strong practicality of our proposed framework. It enables seamless plug-and-play integration with state-of-the-art subject customization techniques, facilitating a more diverse and personalized generation.\\n\\nAdditionally, we have modified **Figure 4** to reduce the repeated samples and show more diverse samples. We also provided a wide range of events and results in **Appendix E**.\"}", "{\"comment\": \"Thank you for providing the additional information and experimental results. The responses address some of my concerns; however, there are still a few questions that remain:\\n\\n1. While both the subject and background can be replaced, the spatial configuration is fixed. It seems crucial to identify a reference image that fully satisfies the specific spatial configuration requirements. 
Alternatively, could the authors provide a concrete example to illustrate the practical applications of generating fixed spatial configurations? While I understand that this approach has the potential to generate diverse and interesting content, I would appreciate further clarification on the practical value of maintaining a fixed spatial configuration.\\n\\n2. The explanation regarding the conflict between pose and action is not convincing. In the introduction, the authors highlight the limitations of the action-customized method, yet the final approach does not seem to consistently preserve interactive semantics, which feels somewhat inconsistent. I would suggest revisiting the definition of the event-customized generation task to resolve the conflict between pose and action.\"}", "{\"title\": \"Responses to additional questions\", \"comment\": \"Thanks for your concerns. We are willing to address all the mentioned questions.\\n\\n## Q1: Practical applications of generating fixed spatial configurations.\\n\\n\\n**A1:** Here we present two specific examples regarding customized comic making and personalized photo production.\\n\\n1. Using reference comic books, such as the story of Romeo and Juliet, we can customize it to feature characters like Spiderman and Batman. Specifically, the spatial configurations of each comic page remain fixed (e.g., the two characters are talking, dancing, or walking), while users can switch the characters to create interesting combinations.\\n\\n2. When designing a poster for a school singing competition, we can take a reference poster, such as the one for the movie \\\"The Avengers,\\\" where all spatial configurations are fixed (e.g., the locations and poses of each hero). 
In this case, we can replace each hero with the participating singers to create an eye-catching poster.\\n\\n\\n\\n## Q2: The conflict between interaction semantics and pose preservation.\\n\\n\\n**A2:** It cannot be denied that the ideal result of event customization is to consistently preserve all interactive semantics while maintaining the same actions and poses. Our approach does face limitations in certain cases, resulting in imperfect results.\\n\\nWe would like to emphasize that these \\\"conflict results\\\" often occur when there are too many entities involved, or when the reference entities and their corresponding target entities exhibit semantic differences. In such scenarios, the preservation of interaction semantics and poses may be compromised in order to generate satisfactory target entities.\\n\\nDespite these challenges, FreeEvent has demonstrated its effectiveness and efficiency across a wide range of experiments, outperforming existing methods in controllable generation, image editing, and customization. This includes quantitative comparisons based on both interaction semantic evaluations and standard image quality assessments. Additionally, as shown in the qualitative results in Appendix E, our approach can effectively preserve actions and poses with various target entities.\\n\\nAs the first work in this direction, it is not perfect; however, we hope our method can unveil new possibilities for achieving better results while serving as a challenging baseline for future work.\"}
The paper fails to compare itself with ImageAnything in terms of motivation, methodology, and structural framework, missing an opportunity to clarify its novelty and improvements over similar approaches.\\n\\n> The training free framework and the focus on entity and attribute is somehow similar to a prior work, Image anything[A]. Please give clear discussion and comparison with this work regarding motivation, methodology, and frameworks. \\n\\n**A2:** Thanks for your suggestions. While Image Anything (ImgAny) also focuses on diffusion-based training-free image generation, there are key differences in motivation, methodology, and framework compared with our FreeEvent.\\n\\n- **Motivation.** ImgAny aims at taking different input modalities (e.g., language, audio, and vision) for multi-modal image generation. Thus, the target of ImgAny is to generate reasonable content corresponding to the input modalities. For example, given the input audio \\\"meow\\\", the input text \\\"green eye\\\" and the input image \\\"bed\\\", ImgAny aims to generate an image that contains \\\"a cat\\\" with \\\"green eyes\\\" in a \\\"bed\\\". Meanwhile, it will not specify the specific pose or layout of the \\\"cat\\\" and \\\"bed\\\". Differently, we take fixed input modalities (the reference image and target prompt). The target for FreeEvent is to capture the event from the reference image to generate new images with entities based on the target prompt. For example, given the reference image \\\"a cat is lying on a bed\\\", FreeEvent aims to capture the specific pose, spatial layout of the cat and bed, and the interaction between them. We can then customize this event with new entities to generate \\\"a tiger is lying on a desk\\\" with the same pose and interactions by giving the target prompt \\\"tiger, desk\\\", or \\\"a dinosaur is lying on a mountain\\\" by giving \\\"dinosaur, mountain\\\". To summarize, ImgAny focuses on modeling the complex combinations of different input modalities. 
Our FreeEvent focuses on modeling the complex input event and combinations of target entities.\\n\\n- **Methodology.** For ImgAny, it aims to extract the fused multi-modal feature of the input modalities as conditions. Specifically, it extracts the fused text feature of entity nouns and attributes words as the multi-modal feature to condition Stable Diffusion (SD). That is, based on the pre-trained text-to-image SD model, replacing the general text embedding with the fused multi-modal feature for denoising. Instead of treating different input modalities with text features, we utilized spatial features and attention maps to achieve event customization during the denoising process. Specifically, we transfer the event by injecting the spatial features and self-attention maps, and guide the generation of the target entity by modifying the cross-attention maps and latent. Besides, we take the general text embedding of target prompt as input conditions for SD. To summarize, ImgAny and FreeEvent have distinct methodologies for both feature extraction and generation processes.\\n\\n- **Framework.** Notably, both ImgAny and FreeEvent proposed two branches to address their target from two aspects, i.e., entity and attribute for ImgAny, event and entity for FreeEvent. However, their frameworks have distinct differences. 1) Although both methods have a unique branch for handling the entity, for ImgAny, the entity nouns are retrieved from the vocabulary to represent the multi-modal input. For FreeEvent, the entity nouns are directly given by the target prompt as the generation target. Besides, ImgAny's entity branch focuses on extracting the text feature of the entity nouns, while FreeEvent's entity path focuses on utilizing the cross-attention maps of each entity noun. 2) The attribute branch of ImgAny focuses on extracting the text feature of the attribute words to represent the multi-modal input. 
As mentioned in Q1, we do not explicitly model the attribute part, and the event path of FreeEvent focuses on injecting spatial features and self-attention maps from the reference image. These two branches are also totally different. 3) ImgAny as a whole operates **before** the denoising process of SD, i.e., replacing the general text embedding with the fused multi-modal feature before each denoising step. In contrast, FreeEvent operates **during** the general denoising process of SD, i.e., performing feature injection and attention guidance alongside the denoising step. To summarize, the frameworks of ImgAny and FreeEvent differ markedly in their detailed design, purpose, and operation.\\n\\nWhile these differences make it inappropriate to use ImgAny as a baseline, there is no doubt that ImgAny is a groundbreaking work in diffusion-based training-free image generation. We have included a discussion and comparison of ImgAny in the **related work section**.\"}
More Quantitative Experiments\\n\\n| Model | Top-1 $\\\\uparrow$ | Top-5 $\\\\uparrow$ | Top-10 $\\\\uparrow$ | CLIP-I $\\\\uparrow$ | CLIP-T $\\\\uparrow$ | FID $\\\\downarrow$ |\\n|------------|-----|-----|------|--------|--------|-----|\\n| ControlNet | 10.66 | 23.98 | 31.28 | 0.6009 | 0.2198 | 70.45 |\\n| BoxDiff | 5.58 | 14.52 | 19.42 | 0.5838 | 0.2153 | 68.49 |\\n| **FreeEvent** | **34.10** | **62.04** | **71.82** | **0.7044** | **0.2238** | **29.05** |\\n\\n\\n- For quantitative evaluation, as shown in the above table, we reported the verb detection performance to validate the interaction semantics of the generated images (the Top-K represents the top-k detection accuracy). Specifically, we utilized the verb detection model GSRTR [B], which was trained on the SWIG dataset, to detect the verb class of each generated image, and then calculated the detection accuracy based on the annotations of the reference images (i.e., whether the generated images and their reference images have the same verb class). Our FreeEvent achieves superior performance over baselines, which indicates that our method can better preserve the interaction semantics of the generated images.\\n\\n- We further reported more standard image generation metrics for a more comprehensive comparison, including the FID (Fr\\u00e9chet Inception Distance) score, the CLIP-I score, and the CLIP-T score. We use the CLIP-I score to evaluate the image alignment of generated images with their reference images, and the CLIP-T score to evaluate the text alignment of the generated images with text prompts. As shown in the above table, our FreeEvent achieves superior performance over baselines across all metrics, which indicates that our method can generate images with better quality and alignment with both the reference images and texts.\\n\\n[B] Junhyeong Cho, et al. Grounded Situation Recognition with Transformers. BMVC, 2021.\\n\\n2. 
More User Study\\n\\n\\n| Model | Ours | ControlNet | BoxDiff | PnP | MAG-Edit | DreamBooth | ReVersion |\\n|-------|------|------------|---------|-----|----------|------------|-----------|\\n| Human Judgement | **48** | 19 | 2 | 31 | 13 | 1 | 0 |\\n\\n- For the user study, we prepared 20 more trials and invited the same 10 experts as before. Together with the 30 samples we had already collected, this resulted in a total of 50 samples. As shown in the above table, FreeEvent achieves better performance on human judgments (HJ) compared with all the baseline models.\\n\\n\\nWe have revised our **quantitative evaluation section in Sec 4.2** and the **user study section in Sec 4.5** to add the above results.\"}
The key to ''event-customized image generation\" lies in capturing the actions, poses, relations, and interactions among the reference entities to generate new entities, so we mainly focus on the transferring of events and the switching of entities (i.e., given by entity nouns). Thus, in this paper, we didn't explicitly model the attributes. However, as the results in **Figure 5(b)** show, since we can generate extra content for background and style by giving corresponding text descriptions, we tried to model the attributes by adding extra adjectives to the target prompt as an easy and natural exploration. Meanwhile, to ensure the accurate generation of the attributes, we applied the cross-attention guidance and regulation on each attribute using the mask of the entity it describes. As shown by the results in **Appendix C** and **Figure 8**, our method successfully addresses the attributes of the corresponding entity (e.g., colors, materials, and ages). Overall, while the attribute part is not the primary focus of our work, our approach shows potential and effectiveness in addressing it, and we would be happy to conduct further research in our future work.\\n\\nFor the quantitative definition of the event complexity level, we have the following discoveries: \\n1) The complexity of actions and interactions in an image increases as the number of entities increases. Taking the SWiG and HICO-DET datasets we use as an example, these datasets have been widely used in verb and relation detection tasks. The accuracy of detection significantly decreases as the number of entities in the image increases. \\n2) The event complexity among different types of entities also differs, e.g., the interaction of ''three humans fighting\" is generally more complex than ''a man holding two apples\". Thus, we can first quantitatively measure the complexity level of the event by the total number of entities. 
For images with the same total number of entities, we further count the distribution of different categories of entities, i.e., the respective numbers of humans, animals, and objects. We can then further quantify their complexity based on the rule of ''human > animal > object\", e.g., the event with ''2 humans and 1 animal\" is more complex than ''1 human and 2 objects\". However, such rules or settings may not be applicable to all images in the real world. \\n\\nTherefore, in this paper, we primarily measure the event complexity using the total number of entities.\\n\\nWe have made the corresponding revision in the **Introduction (Line 107)** to clarify the measurement of event complexity. We have also provided the exploration of attribute generation in **Appendix C**.\"}
The new task and datasets are also worth reporting.\"], \"questions\": \"see weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"n/a\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
88JJjsLtqr
Less is More: Masking Elements in Image Condition Features Avoids Content Leakages in Style Transfer Diffusion Models
[ "Lin Zhu", "Xinbing Wang", "Chenghu Zhou", "Qinying Gu", "Nanyang Ye" ]
Given a style-reference image as the additional image condition, text-to-image diffusion models have demonstrated impressive capabilities in generating images that possess the content of text prompts while adopting the visual style of the reference image. However, current state-of-the-art methods often struggle to disentangle content and style from style-reference images, leading to issues such as content leakages. To address this issue, we propose a masking-based method that efficiently decouples content from style without the need of tuning any model parameters. By simply masking specific elements in the style reference's image features, we uncover a critical yet under-explored principle: guiding with appropriately-selected fewer conditions (e.g., dropping several image feature elements) can efficiently avoid unwanted content flowing into the diffusion models, enhancing the style transfer performances of text-to-image diffusion models. In this paper, we validate this finding both theoretically and experimentally. Extensive experiments across various styles demonstrate the effectiveness of our masking-based method and support our theoretical results.
[ "Text-to-Image Diffusion Models", "Style Transfer", "Content Leakage" ]
Accept (Poster)
https://openreview.net/pdf?id=88JJjsLtqr
https://openreview.net/forum?id=88JJjsLtqr
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xv1ckbTogT", "vd1Ctw9Epg", "vG6OAxgUvt", "uCVGMKKtfO", "sfdouv5iwy", "qbKsI8R84b", "pmYd7tG6qf", "pXp86Rmu4N", "pLm9uyS7FU", "nzxpeWUm6I", "jSNGhhxE3p", "jHluy7h58c", "iqAsA8WWQZ", "erTszlkPDC", "eKmgpjyTQs", "bnYRl2lDnB", "bLRZGZmJMj", "Yh162dpVNX", "VMppuw0X2d", "IZKlHmcUPC", "HOtVciPpE2", "DJBROkF0e4", "CJSj1mbRGd", "BjA26mUg7m", "BYU82DVZwv", "As5LoRqtH0", "6RXKSyyEFK", "4Bxa6hJLNB" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730776116977, 1732611288344, 1732582559002, 1732585169679, 1732360103891, 1732400461146, 1730630093595, 1732408354705, 1732485067131, 1732637191093, 1737523712558, 1732362423826, 1732585782125, 1732542772063, 1732361756166, 1732362231883, 1732359356281, 1734620956584, 1730344270663, 1732362023054, 1732536602753, 1732536828164, 1732361318483, 1732527525369, 1730173765975, 1732556074775, 1732637275345, 1732611083710 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5538/Reviewer_eaaS" ], [ "ICLR.cc/2025/Conference/Submission5538/Reviewer_eaaS" ], [ "ICLR.cc/2025/Conference/Submission5538/Authors" ], [ "ICLR.cc/2025/Conference/Submission5538/Reviewer_SrjP" ], [ "ICLR.cc/2025/Conference/Submission5538/Authors" ], [ "ICLR.cc/2025/Conference/Submission5538/Reviewer_nfUg" ], [ "ICLR.cc/2025/Conference/Submission5538/Reviewer_SrjP" ], [ "ICLR.cc/2025/Conference/Submission5538/Authors" ], [ "ICLR.cc/2025/Conference/Submission5538/Reviewer_nfUg" ], [ 
"ICLR.cc/2025/Conference/Submission5538/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5538/Authors" ], [ "ICLR.cc/2025/Conference/Submission5538/Authors" ], [ "ICLR.cc/2025/Conference/Submission5538/Authors" ], [ "ICLR.cc/2025/Conference/Submission5538/Authors" ], [ "ICLR.cc/2025/Conference/Submission5538/Authors" ], [ "ICLR.cc/2025/Conference/Submission5538/Authors" ], [ "ICLR.cc/2025/Conference/Submission5538/Area_Chair_jRrp" ], [ "ICLR.cc/2025/Conference/Submission5538/Reviewer_gkxk" ], [ "ICLR.cc/2025/Conference/Submission5538/Authors" ], [ "ICLR.cc/2025/Conference/Submission5538/Area_Chair_jRrp" ], [ "ICLR.cc/2025/Conference/Submission5538/Reviewer_gkxk" ], [ "ICLR.cc/2025/Conference/Submission5538/Authors" ], [ "ICLR.cc/2025/Conference/Submission5538/Authors" ], [ "ICLR.cc/2025/Conference/Submission5538/Reviewer_nfUg" ], [ "ICLR.cc/2025/Conference/Submission5538/Reviewer_nfUg" ], [ "ICLR.cc/2025/Conference/Submission5538/Authors" ], [ "ICLR.cc/2025/Conference/Submission5538/Reviewer_eaaS" ] ], "structured_content_str": [ "{\"summary\": \"This paper presents a style transfer method that can preserve the style of the style reference image while avoiding content injection into the final style transfer result. The key idea of the proposed method is based on IP-Adapter while masking out some image tokens from the reference image. The masking strategy first clusters the product features of the style image and the content image, and then filters out the feature tokens with high means in the style features. To show that the masking strategy is effective, the authors also provided several theoretical justifications.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The key strengths of this paper are the insights from analyzing different style transfer methods, including IP-Adapter and InstantStyle. 
With those observations, the proposed method provides a masking solution to demonstrate that removing certain tokens, especially the tokens with high correlations between content and style, results in high-fidelity stylized images.\", \"weaknesses\": \"There are several weaknesses in this paper:\\n\\n1. The masked image feature is questionable. Although the proof demonstrates the divergence in a theoretical way, it is clear that it only demonstrates the divergence by comparing with InstantStyle rather than for the method itself. Image token selection is based on the product between content and style features; such a computation is more likely filtering out the foreground style features. Thus, why not only encode the background or non-object-related style patches? \\n\\n2. The visual comparison is not fair. Many of the results appear cherry-picked. For example, StyleDrop and StyleShot focus more on traditional-painting-like style transfer, while the experiments mostly show photos as reference images, which does not make much sense. Moreover, that Figure 8 compares the results with InstantStyle but not StyleShot is also not fair, since the setting is more like traditional style transfer.\", \"questions\": \"Please double check the experiments. For example, I screenshotted one style reference image and used the official StyleShot demo, and I could generate better results than those shown in the paper. \\n\\nAnother fair comparison would be leveraging the existing benchmark (the images used in StyleDrop and StyleShot) and showing more of the proposed results on it. \\n\\nThere are also some questions on the visual results. For example, Figure 1 shows that the proposed method also could not generate visually plausible results, especially in preserving the style of the style reference image. In Figure 8, it is clear to see that in some cases InstantStyle gives better results, as it preserves the content while generating fewer artifacts. 
Thus more results on the existing benchmark maybe a good justification.\", \"flag_for_ethics_review\": \"['Yes, Discrimination / bias / fairness concerns']\", \"details_of_ethics_concerns\": \"The paper clearly showing some weaknesses but lack of such discussion, especially on human related stylization.\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"A follow up question is the human related generation always have strong distortions. It may have some concerns for ethics. Please discuss the risk especially sharing more examples of text prompts contain different gender, skin color , ages and etc,. Otherwise, it will forbid for better ratings.\"}", "{\"title\": \"Thanks for your valuable time and suggestions\", \"comment\": \"Thank you for your recognition and support of our work. We sincerely appreciate the time and effort you have devoted to considering our response. Your constructive suggestions have greatly enhanced the quality of our manuscript. We will continue to refine and improve its presentation.\"}", "{\"comment\": \"Thank you for the detailed responses, which have addressed my concerns. I would like to raise my scores after reading the responses and the revised version.\"}", "{\"title\": \"Reply to questions\", \"comment\": \"**Reply to ``StyleShot's better results than those shown in the paper'':**\\n\\nThanks for your suggestion!\\nIt is likely that the image generation results of the diffusion model exhibit high randomness. To account for this, we compare the style transfer results by reporting the average results of four generations for each combination of style reference and target text prompt.\\n\\n**We also provide multi-sample generalization results for each combination of style reference and target text prompt at** https://drive.google.com/file/d/1XUCFhPFsrgD49uuQosNrU323zWxAqbbX/view?usp=drive_link. 
Content leakage and loss of text fidelity are marked for the one-to-one image generation results in Figure 20 of the revised manuscript. **To avoid the influence of randomness, we ensure that all model configurations remain consistent,** including the random seed, guidance seed, denoising steps, and other parameters. From the one-to-one comparison, we observe that our method significantly reduces content leakage and alleviates loss of text fidelity, consistently refining StyleShot\\u2019s results across all combinations.\\n\\n**Reply to `` Comparison with existing benchmark'':**\\n\\nIn Section 4.2, we conducted experiments on the existing benchmark (the StyleBench dataset used in StyleShot) to demonstrate the effectiveness of our method. Visual comparisons were provided in Figure 8 and Figure 12 of the paper.\\n\\n**Reply to ``Questions on the visual results'':**\\n\\nThe visual results in Figure 1 are based on the StyleBench dataset from StyleShot. Additional results are provided in Figure 6 and Figure 15 in the paper. Regarding the results in Figure 1, our method effectively avoids content leakage while exhibiting less style degradation compared to the previous method.\\nIn Figure 8(a), InstantStyle\\u2019s results tend to disrupt the style information extracted by StyleShot\\u2019s style encoder due to image-text misalignment. In Figure 8(b), InstantStyle also experiences style disruption, while our method alleviates this issue by using fewer appropriately-selected conditions.\"}", "{\"comment\": \"It seems both attached links in the reponse to weakness 1 are the same, maybe a typo?\"}", "{\"summary\": \"The paper focuses on the content leakage issues in the text-to-image diffusion model for style transfer, which aims to distangle the content and style characteristics of the style-reference images for generating outputs combining text content and visual styles. 
It proposes a simple and training-free method to decouple the content from style in style-reference images. By masking specific content-related elements within the image features, the proposed method prevents unwanted content information from influencing the output. The proposed method was evaluated on the CIFAR-10 dataset and demonstrates good results.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The motivation of this paper is well elaborated, and the limitations of previous methods are clearly described. Therefore, potential readers can easily understand the core problem in style transfer.\\n2. The structure of the paper is well-organized and the presentation is easy to follow. \\n3. It proposes a masking-based technique to decouple the content and style in the reference images. Fig. 3 clearly demonstrates the difference between the proposed method and previous methods.\", \"weaknesses\": \"1. The novelty of the proposed method is limited. 1). Compared with IP-Adapter and InstantStyle, the contribution of sampling masking features in the feature space is not significant. 2). Introducing a masking mechanism is effective for manually synthesizing high-quality images, which has been demonstrated in previous studies [1-3].\\n\\n2. The proposed method aims to decouple the content and style characteristics of the reference images, but this problem is not formally formulated in the paper. Therefore, it is hard to understand why the masking mechanism can achieve this goal.\\n\\n3. The proposed method needs to carefully select specific features for generating desirable styles, but the selection criteria are not clearly described.\\n\\n4. The paper claims in the introduction that it proposes an efficient method to decouple the content and style of the reference images, but it does not show significant evidence to demonstrate its efficiency compared with previous methods.\\n\\n5. 
The proposed method is only evaluated on the CIFAR-10 dataset, and measured with subjective metrics that are not clearly defined. Therefore, the existing experimental results are insufficient to demonstrate the proposed method's advantages.\\n\\n[1] Gao, Shanghua, Pan Zhou, Ming-Ming Cheng, and Shuicheng Yan. \\\"Masked diffusion transformer is a strong image synthesizer.\\\" In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 23164-23173. 2023.\\n[2] Xiao, Changming, Qi Yang, Feng Zhou, and Changshui Zhang. \\\"From text to mask: Localizing entities using the attention of text-to-image diffusion models.\\\" Neurocomputing 610 (2024): 128437.\\n[3] Couairon, Guillaume, Jakob Verbeek, Holger Schwenk, and Matthieu Cord. \\\"Diffedit: Diffusion-based semantic image editing with mask guidance.\\\" ICLR, 2023.\", \"questions\": \"1. Could the authors elaborate on the advantages and significant contributions of the proposed method for style transfer compared to previous approaches?\\n\\n2. What is the formal definition of content leakage? Additionally, how does the proposed method effectively address this issue?\\n\\n3. How does the proposed method decouple the content and style characteristics of the reference images?\\n\\n4. How are features selected during the style transfer process? Please provide detailed information on the selection criteria. Are the same selection criteria, including hyperparameters, applied consistently to all output images?\\n\\n5. What makes the proposed method efficient? Compared to previous approaches, does it require less inference time or fewer GPU resources?\\n\\n6. It would be beneficial to include more experiments on different datasets. Specifically, how does the proposed method perform when processing high-resolution images?\\n\\n7. Including additional objective evaluation metrics, such as FID, LPIPS, and CLIP score, would be valuable. 
Since the fidelity score highly depends on the chosen classifier, how does the performance change when a different classifier is used?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We apologize for the typo. The links have been updated, and the results can now be accessed at: https://drive.google.com/file/d/1jjAgViV9Z7MTrES0eMhaBPlpS2ZtyEVE/view?usp=sharing and https://drive.google.com/file/d/1zQQa2XHlTBspOJ1IvC6VKQnRxEuu2Lvc/view?usp=drive_link.\"}", "{\"comment\": \"re 1: thanks for the additional comparison. Please add the discussion to the main draft in a new revision, since they are more relevant to the leakage problem discussed in the paper.\", \"re_2\": \"please make the correction in a new revision.\", \"re_3\": \"add reference to the figure where needed in the new revision.\", \"re_5\": \"This is an important ablation in the paper, make sure it is added to the paper.\", \"re_6\": \"I saw L913. I was asking about the details of the binary classification, e.g., which model did you use for binary classification? Any threshold used? etc.\", \"re_9\": \"I saw the part you quote in the appendix. You have 10 content objects (from CIFAR-10) and 21 image styles (L893), 50 generations for each prompt. So 21x50=1050 images evaluated for each class per rater, is this true?\"}", "{\"comment\": \"We sincerely appreciate your valuable comments and the time you have dedicated to considering our response. We have revised the visual comparison in Figure 1 and included additional traditional style transfer examples in Appendix A.9 in the manuscript.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Part 2/2\", \"comment\": \"**Reply to weakness 5:**\\n\\nWe ablate the cluster number $K$ in text-driven style transfer based on the StyleBench dataset. 
We report the image alignment and text alignment results based on three different CLIP backbones in the following table. A smaller $K$, such as $K=2$, leads to a slightly higher text alignment score since more content-related elements in the style reference are masked. Especially for the 3D Model, Anime, and Baroque art styles, which contain more human-related images, a smaller $K$ leads to higher text alignment scores and more effectively avoids content leakage.\\n\\n | | ViT-B/32 | ViT-L/14 | ViT-H/14 |\\n | ------------------- | -------- | -------- | -------- |\\n | **Image Alignment** | | | |\\n | K=2 | 0.657 | 0.608 | 0.403 |\\n | K=3 | 0.656 | 0.611 | 0.410 |\\n | K=4 | 0.657 | 0.615 | 0.415 |\\n | K=5 | 0.657 | 0.614 | 0.415 |\\n | **Text Alignment** | | | |\\n | K=2 | 0.265 | 0.212 | 0.258 |\\n | K=3 | 0.264 | 0.211 | 0.253 |\\n | K=4 | 0.265 | 0.210 | 0.252 |\\n | K=5 | 0.264 | 0.210 | 0.252 |\\n\\n | | 3D Model | Anime | Baroque |\\n | ------------------------------ | -------- | ----- | ------- |\\n | **Image Alignment (ViT-H/14)** | | | |\\n | K=2 | 0.474 | 0.372 | 0.384 |\\n | K=3 | 0.478 | 0.381 | 0.393 |\\n | K=4 | 0.485 | 0.390 | 0.404 |\\n | K=5 | 0.487 | 0.380 | 0.411 |\\n | **Text Alignment (ViT-H/14)** | | | |\\n | K=2 | 0.213 | 0.234 | 0.257 |\\n | K=3 | 0.206 | 0.232 | 0.253 |\\n | K=4 | 0.189 | 0.231 | 0.253 |\\n | K=5 | 0.188 | 0.229 | 0.252 |\\n\\n**Reply to weakness 6:**\\n\\nWe apologize for the confusion. We detailed this in Line 913. We perform binary classification on the generated images to differentiate between the reference's content object and the text prompt, computing the classification accuracy, which is referred to as the fidelity score. 
Therefore, the fidelity score primarily reflects the model\\u2019s ability to control text prompts.\\n\\n**Reply to weakness 7:**\\n\\nIn all experiments, we **do not** use any style descriptions in the prompt. The only input difference between our method and the baseline model, IP-Adapter, is the content description for the style reference, for which we simply use a common template: \\\"person, animal, plant, or object in the foreground\\\" in experiments on the StyleBench benchmark. That is, the proposed masking-based method does not require content knowledge of the image reference; instead, we leverage the CLIP text feature of a common template to identify the elements that need to be masked.\\n\\n**Reply to weakness 8:**\\n\\nFollowing the metric used in Gao et al. [1], we report the image and text alignment scores alongside training steps in Figure 5(a). Image alignment refers to the cosine similarity between the CLIP embeddings of the generated images and the style reference images, while text alignment measures the cosine similarity between the CLIP embeddings of the generated images and the target text prompts.\\n\\nInitially, the image alignment is high due to significant content leakage in the generated image. As the unwanted content decreases, the alignment between the generated image and the style reference gradually decreases.\\n\\n [1] Junyao Gao, Yanchen Liu, Yanan Sun, Yinhao Tang, Yanhong Zeng, Kai Chen, and Cairong Zhao. Styleshot: A snapshot on any style. arXiv preprint arXiv:2407.01414, 2024.\\n\\n**Reply to weakness 9:**\\n\\nWe asked 10 users from diverse backgrounds to evaluate the generated results in terms of text fidelity, content leakage, and style similarity, and to provide their overall preference for each composition of style reference and target text prompt, considering these three aspects. 
We sampled 50 generated images for each target text prompt and provided all results to them.\"}", "{\"title\": \"Thanks for your valuable time and suggestions\", \"comment\": \"Thank you for your recognition and support of our work. We sincerely appreciate the time and effort you have devoted to considering our response. Your constructive suggestions have greatly enhanced the quality of our manuscript.\"}", "{\"title\": \"Thanks for your valuable time and suggestions\", \"comment\": \"We sincerely thank you for dedicating your valuable time to review our response. Your insightful suggestions have played a pivotal role in enhancing the overall quality of our paper. We greatly appreciate your valuable comments once again.\"}", "{\"title\": \"Reply to questions\", \"comment\": \"**Reply to Q1:**\\n\\nThank you for your helpful suggestion! We will add a part to highlight the comparison between our method and the previous methods. For details about the comparison, please refer to the reply to weaknesses.\\n\\n**Reply to Q2:** \\n\\nThank you for your helpful suggestion. We will provide a formal definition of the problem in the revised paper. As for how our method addresses this issue, we have detailed our approach in Lines 212-231. The effectiveness of our method is supported by the highest energy of $\\\\mathcal{E}(c_2, x_t)$, achieved through the proposed masked element selection criteria, as shown in Proposition 1. This effectively reduces the likelihood of content in style reference, leading to superior performance in content removal. We also provide the simulation result in the reply to Reviewer gkxk, confirming the results in Proposition 1.\\n\\n**Reply to Q3:** \\n\\nWe detailed the proposed method in Section 3.1. The proposed masking strategy to decouple the content and style characteristics was illustrated in Lines 212-231.\\n\\n**Reply to Q4:** \\n\\nThe proposed masking strategy is detailed in Section 3.1. 
For the proposed masking-based method, we use the same selection criteria based on K-means clustering and set the number of clusters to 2.\\n\\n**Reply to Q5:** \\n\\nCompared to training-based methods, such as InST, the proposed masking-based method does not require a training process and instead manipulates the image condition of the IP-Adapter in a plug-and-play manner. By simply performing clustering on the element-wise product $e_1 \\cdot e_2$ based on IP-Adapter, our method effectively mitigates the content leakage issue. While the clustering process introduces a slight increase in GPU resource usage and inference time compared to the vanilla IP-Adapter model, it offers an efficient training-free solution for content removal.\\n\\n\\n | | Baseline (IP-Adapter) | Ours |\\n | :-------------------------------- | --------------------- | ----- |\\n | GPU usage | 16420M | 16530M |\\n | Inference Time on NVIDIA 4090 Ti | 10s | 11s |\\n\\n\\n **Reply to Q6:** \\n\\nWe perform experiments on our constructed dataset (**not a real CIFAR-10 dataset**), which uses the content objects of CIFAR-10 but with various styles. The images are generated using the MACE code, and each image has a resolution of 512x512. All relevant details are discussed in Lines 352-358 of the main paper. More importantly, we also evaluate our method on the benchmark dataset StyleBench and report the corresponding results in Figure 6 and Figure 12.\\n\\n **Reply to Q7:** \\n\\nThe evaluation metrics **FID** and **LPIPS** are primarily used to assess the quality of generated images by measuring the similarity between real and generated images. However, a high similarity does not necessarily indicate low content leakage or high style similarity. Following your insightful suggestion, we report the image alignment and text alignment scores for our method and its counterpart, InstantStyle, using different CLIP classifiers, based on the StyleBench dataset. 
The results, shown in the table below, reveal that while InstantStyle achieves slightly higher text alignment, it significantly sacrifices image alignment with the style reference across various CLIP classifiers. As illustrated in Figure 6 and Figure 12, InstantStyle avoids content leakage at the cost of style degradation. By leveraging appropriately fewer conditions, our method achieves a better balance between target content and style, producing more effective style transfer results.\\n\\n\\n | | ViT-B/32 | ViT-L/14 | ViT-H/14 |\\n | ------------------- | -------- | -------- | -------- |\\n | **Image Alignment** | | | |\\n | InstantStyle | 0.575 | 0.579 | 0.352 |\\n | Ours | 0.657 | 0.608 | 0.403 |\\n | **Text Alignment** | | | |\\n | InstantStyle | 0.275 | 0.218 | 0.263 |\\n | Ours | 0.265 | 0.212 | 0.258 |\"}", "{\"title\": \"Part 1/2\", \"comment\": \"**Reply to weakness 1:**\\n\\nWe provide visual comparisons between our method, RB-Modulation, and CSGO. The results can be accessed at the following links: https://drive.google.com/file/d/1jjAgViV9Z7MTrES0eMhaBPlpS2ZtyEVE/view?usp=sharing and https://drive.google.com/file/d/1zQQa2XHlTBspOJ1IvC6VKQnRxEuu2Lvc/view?usp=drive_link. We present several key observations:\\n\\n 1. The CSGO method may suffer from style degradation or loss of text fidelity, showing inferior performance compared to our method.\\n 2. The RB-Modulation method proposes a framework that incorporates a style descriptor into a pre-trained diffusion model. With style descriptions, RB-Modulation can generate more satisfactory results than without them, preventing information leakage from the reference style and adhering more closely to the desired prompt.\\n 3. As pointed out in the original paper, ''The inherent limitations of the style descriptor or diffusion model might propagate into our framework''. 
Thus, RB-Modulation may fail to preserve the style of the reference when the style description does not align well with the image reference. For example, RB-Modulation\\u2019s results may experience style degradation on abstract, Orphism, or realism art styles.\\n 4. Building upon the impressive StyleShot and leveraging appropriately fewer conditions, our method successfully avoids content leakage while enhancing style, achieving better style transfer performance than recent competitive models.\\n\\n**Reply to weakness 2:**\\n\\nSorry for the confusion; $m^i$ should be set to 0 to discard the content-related elements.\\n\\n**Reply to weakness 3:**\\n\\nWe apologize for the confusion. Figure 3(b) illustrates the Learning Paradigms introduced in Lines 305-318. In the Learning Paradigms, the content feature $\\psi(e_1)$ or $\\phi(e_2)$ is subtracted from the style reference's image feature $e_1$ to avoid the presence of content $c_2$. We will add an explanation for clarity. As pointed out in Line 361, the linear layers are trained through image reconstruction using mean squared error (MSE) loss to predict errors. The specific algorithm is provided in Algorithm 2 of Appendix A.3. The optimization objectives are illustrated in Lines 877 and 879 for the Text-Adapter (Baseline) and Image-Adapter (Ours), respectively. These optimization objectives minimize the prediction error of noise in reconstructing the style reference while maximizing the difference between the model conditioned on the style reference's content and that conditioned on the target prompt.\\n\\n**Reply to weakness 4:**\\n\\nThe ground-truth condition data distribution ($q(x|c_1, c_2, c_3)$) defines images that exhibit high style similarity with the style reference, avoid content from the style reference, and maintain high fidelity with the target text prompt. 
Thus, \\\"smaller divergence\\\" indicates that the result distribution generated by our proposed method is closer to the ground-truth distribution, showing better style alignment, text fidelity, or less content leakage.\"}", "{\"title\": \"Reply to weaknesses\", \"comment\": \"**Reply to ``Demonstrate the divergence by comparing with InstantStyle rather than the method itself'':**\\n\\nIn Theorem 1, we compare the proposed method, which masks certain elements of the image condition, with the vanilla method. It should be noted that the vanilla method, without the masking operation, uses conditions $c_1$, $c_2$, and $c_3$, forming a family of models that includes the InstantStyle method.\\n\\n**Reply to ``Why not only encode the background or non-object related style patches'':**\\n\\nIt is true that we can encode the background or non-object-related style patches into the image encoder to prevent content leakage. To achieve this, we can use the GroundingDINO [1] and SAM [2] models to locate the non-object region in the image. However, compared to the proposed method, which performs masking in the latent space, masking patches in the image requires more computational resources and inference time.\\n\\n**Reply to `` the experiments showing more like photo as a reference image'':**\\n\\nIn this work, we focus on the issue of content leakage in style transfer, an important challenge that has been explored by several recent studies [3,4]. We have included a comparison between our method and these studies [3,4], along with visual comparisons available at the following links: https://drive.google.com/file/d/1jjAgViV9Z7MTrES0eMhaBPlpS2ZtyEVE/view?usp=sharing and https://drive.google.com/file/d/1zQQa2XHlTBspOJ1IvC6VKQnRxEuu2Lvc/view?usp=drive_link. 
A detailed discussion can be found in Reply 1 to Reviewer nfUg, and the related points have been incorporated into the revised manuscript.\", \"here_are_further_explanations_of_the_comparison_in_this_work\": \"1) To fully demonstrate the effectiveness of the proposed method in mitigating the leakage of various content from the image reference into the generated image, we create style references by combining 21 different styles and objects (from CIFAR-10). We analyze the experiment results in Section 4.1.\\n\\n2) In Section 4.2, we also conducted experiments on the standard benchmark dataset, StyleBench, proposed in StyleShot, which includes 73 different styles, comprising both non-object and object style references. As shown in the 5th and 6th rows of Figure 6, **when using non-object style references, both StyleDrop and StyleShot suffer from style degradation or loss of text fidelity.** **We provide additional results to compare the performance of our method with previous methods using non-object style references** at https://drive.google.com/file/d/1ZBISv9HsiSyw5yNoHPOdHamN8kqapoxG/view?usp=sharing\\n\\n**Reply to `` Figure 8 compare the results with InstantSyle but not StyleShot'':**\\n\\nWe provided visual comparisons between StyleShot and our method for image-driven style transfer in Figure 7. In Figure 8, we compare the proposed masking method with the feature subtraction approach of InstantStyle, **using the same model configuration, including the StyleShot style encoder**, random seed, guidance scale, and other parameters.\\n\\n[1] Liu S, Zeng Z, Ren T, et al. Grounding dino: Marrying dino with grounded pre-training for open-set object detection[J]. arXiv preprint arXiv:2303.05499, 2023.\\n\\n[2] Ravi N, Gabeur V, Hu Y T, et al. Sam 2: Segment anything in images and videos[J]. arXiv preprint arXiv:2408.00714, 2024.\\n\\n[3] Rout L, Chen Y, Ruiz N, et al. RB-Modulation: Training-Free Personalization of Diffusion Models using Stochastic Optimal Control[J]. 
arXiv preprint arXiv:2405.17401, 2024.\\n\\n[4] Xing P, Wang H, Sun Y, et al. Csgo: Content-style composition in text-to-image generation[J]. arXiv preprint arXiv:2408.16766, 2024.\"}", "{\"metareview\": \"This paper tackles the style transfer problem, aiming at text-to-image generation of stylized images by transferring the style from a reference image. The fundamental contribution of this work is the proposed training-free masking-based method that decouples content from style by simply masking specific style-reference features, preventing content leakage from the stylized reference image. The masking is based on clustering the element-wise product of image and text features and discarding elements in the high-mean cluster. The experiments validated the effectiveness of this proposed approach for style transfer.\\n\\nThis work received generally positive comments from the reviewers after rebuttal, and the final scores are 6, 6, 6, 8. The major strength of this work is the training-free masking strategy. Considering the overall positive final recommendations of the reviewers, the paper can be accepted, with the revisions incorporated into the final version.\", \"additional_comments_on_reviewer_discussion\": \"Reviewer eaaS raised questions on the masking technique of image features and fair comparisons with InstantStyle and StyleShot. The authors provided links to more comparison results, and addressed the reviewer's concerns. Reviewer SrjP questioned the limited novelty, the rationale of the masking technique, and the sufficiency of the results. Reviewer gkxk asked why clustering is performed on the element-wise product of $e_1$ and $e_2$, and about the inference speed. Reviewer nfUg gave a score of 8, suggested comparisons with more recent baselines, and asked questions on some details. 
The authors' rebuttal addressed these concerns well, but the authors are encouraged to include these suggestions/revisions in the final version.\"}", "{\"summary\": \"This paper proposes a simple but effective method to avoid content leakage and achieve better performance in style transfer. The main contribution is its innovative masking strategy, and extensive experiments demonstrate its superiority.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper is well-written and easy to follow.\\n2. The proposed masking strategy is novel and effective. \\n3. The authors provide both theoretical and experimental evidence to support their claims.\", \"weaknesses\": \"1. It remains unclear why clustering is performed on the element-wise product of $e_1$ and $e_2$. Is there a relationship between $e_1\\cdot e_2$ and the energy function?\\n2. The inference speed is slower than other methods, likely due to the additional time consumption introduced by the clustering algorithm. What is your inference time in practice? Is there a solution that avoids this additional time cost, or could the clustering algorithm be replaced to improve efficiency?\", \"questions\": \"See Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**Reply to Weakness 1:**\\n\\nWe perform clustering on the element-wise product of $e_1$ and $e_2$. Compared to other clustering methods, such as clustering on the element-wise absolute difference between $e_1$ and $e_2$, our method achieves the highest energy score for content $c_2$ after masking elements in the high-mean cluster, as shown in Proposition 1. 
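For concreteness, the masked-element selection described above can be sketched in a few lines of numpy. This is a minimal sketch with hypothetical names (`mask_high_mean_cluster`), and a plain 1-D 2-means stands in for the K-means implementation; it is an illustration of the idea, not our exact code:

```python
import numpy as np

def mask_high_mean_cluster(e1, e2, iters=50):
    """Cluster the element-wise products e1 * e2 into two groups (1-D 2-means)
    and zero out the elements of e1 that fall in the high-mean cluster."""
    p = e1 * e2
    centers = np.array([p.min(), p.max()], dtype=float)  # init with extremes
    labels = np.zeros(p.shape[0], dtype=int)
    for _ in range(iters):
        # assign each product to the nearest center, then update the centers
        labels = np.abs(p[:, None] - centers[None, :]).argmin(axis=1)
        for j in (0, 1):
            if np.any(labels == j):
                centers[j] = p[labels == j].mean()
    high = int(centers.argmax())                # cluster with the higher mean
    mask = np.where(labels == high, 0.0, 1.0)   # discard high-mean elements
    return e1 * mask

# toy feature vectors: the first two elements align strongly with the content text
e1 = np.array([0.9, 0.8, 0.05, 0.1])
e2 = np.array([1.0, 1.0, 1.0, 1.0])
masked = mask_high_mean_cluster(e1, e2)  # -> [0.0, 0.0, 0.05, 0.1]
```

The strongly content-aligned elements are zeroed out before the image feature conditions the diffusion model, which is what produces the higher energy score for $c_2$.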
The high value of the energy function $\\\\mathcal{E}(c_2, x_t)$ indicates $x_t$ exhibits a low likelihood of content $c_2$, leading to superior performance in content removal.\\n\\n\\nWe conduct simulation experiments based on our constructed dataset (which we introduced in Sec 4.1) to demonstrate Proposition 1. Using the energy score proposed by Liu et al., we calculate the energy scores of the masked image features for two different masking approaches: one based on clustering the product of $e_1^i$ and $e_2^i$ ($e_1^i \\\\cdot e_2^i, i\\\\in\\\\{1, \\\\cdots d\\n\\\\}$) and the other based on clustering the absolute difference of $e_1^i$ and $e_2^i$ ($|e_1^i - e_2^i|, i\\\\in\\\\{1, \\\\cdots d\\\\}$). For both methods, we report the 0th, 25th, 50th, 75th, and 100th percentiles of the energy scores across various masking proportions. As shown in the table below, our method consistently generates higher energy scores when discriminating content $c_2$, confirming the results outlined in Proposition 1.\\n\\n | Masking Proportion | Method | 0 | 25 | 50 | 75 | 100 |\\n | ------------------ | -------------------------- | :--------- | --------- | --------- | --------- | -------- |\\n | 5% | **Element product (Ours)** | **-10.78** | **-6.59** | **-5.36** | **-4.08** | **1.64** |\\n | 5% | Absolute difference | -13.87 | -9.63 | -8.56 | -7.54 | -1.60 |\\n | 10% | **Element product (Ours)** | **-9.15** | **-5.15** | **-4.00** | **-2.77** | **2.02** |\\n | 10% | Absolute difference | -11.57 | -8.80 | -7.88 | -7.05 | -2.18 |\\n | 20% | **Element product (Ours)** | **-7.46** | **-3.46** | **-2.36** | **-1.35** | **2.66** |\\n | 20% | Absolute difference | -10.73 | -7.57 | -6.91 | -6.20 | -3.16 |\\n | 30% | **Element product (Ours)** | **-6.05** | **-2.62** | **-1.58** | **-0.63** | **2.87** |\\n | 30% | Absolute difference | -9.19 | -6.73 | -6.13 | -5.59 | -3.31 |\\n | 40% | **Element product (Ours)** | **-5.71** | **-2.23** | **-1.29** | **-0.43** | **2.70** |\\n | 40% | 
Absolute difference | -8.04 | -5.99 | -5.51 | -5.07 | -3.61 |\\n | 50% | **Element product (Ours)** | **-5.24** | **-1.94** | **-1.13** | **-0.37** | **2.69** |\\n | 50% | Absolute difference | -7.26 | -5.37 | -4.99 | -4.60 | -3.63 |\\n | 60% | **Element product (Ours)** | **-4.93** | **-1.72** | **-0.92** | **-0.22** | **2.72** |\\n | 60% | Absolute difference | -6.23 | -4.85 | -4.53 | -4.19 | -3.29 |\\n | 70% | **Element product (Ours)** | **-3.91** | **-1.25** | **-0.53** | **0.15** | **2.86** |\\n | 70% | Absolute difference | -5.62 | -4.32 | -4.06 | -3.77 | -2.98 |\\n | 80% | **Element product (Ours)** | **-3.11** | **-0.77** | **-0.14** | **0.53** | **2.20** |\\n | 80% | Absolute difference | -4.72 | -3.77 | -3.57 | -3.36 | -2.57 |\\n | 90% | **Element product (Ours)** | **-2.40** | **-0.67** | **-0.18** | **0.37** | **2.12** |\\n | 90% | Absolute difference | -4.00 | -3.32 | -3.15 | -3.01 | -2.55 |\\n\\n \\n [1]Liu W, Wang X, Owens J, et al. Energy-based out-of-distribution detection[J]. Advances in neural information processing systems, 2020, 33: 21464-21475.\\n\\n **Reply to Weakness 2:**\\n\\nCompared to the vanilla model, the clustering process required by our method incurs a slight increase in GPU resource usage and inference time. We report the inference time and GPU usage when the test batch size is set to 1, as follows:\\n\\n | | Baseline (IP-Adapter) | Ours |\\n | :-------------------------------- | --------------------- | ----- |\\n | GPU usage | 16420M | 16530M|\\n | Inference Time on NVIDIA 4090 Ti | 10s | 11s |\\n\\n\\n On the one hand, parallel computing can be employed to eliminate this additional time cost. 
On the other hand, instead of performing clustering to identify the masked elements, we can directly mask the top proportion (e.g., the top 5%) of the element-wise product of $e_1$ and $e_2$.\"}", "{\"title\": \"Please check the authors' responses\", \"comment\": \"Dear reviewers,\\n\\nCould you please check the authors' responses, and post your message for discussion or changed scores?\\n\\nbest,\\n\\nAC\"}", "{\"comment\": \"Thanks for the authors' detailed response and effort. My concerns have been addressed, and I will maintain my current score.\"}", "{\"title\": \"Reply to weaknesses\", \"comment\": \"**Reply to ``the contribution compared to IP-Adapter and InstantStyle is not significant'':**\", \"we_uncover_a_critical_yet_under_explored_principle\": \"guiding with appropriately selected fewer conditions can efficiently prevent unwanted content from flowing into the diffusion model, enhancing style transfer performance. In this paper, we introduce two strategies to appropriately select fewer conditions, i.e., the training-free masking method and the training-based Image-Adapter (as illustrated in Lines 312-317 in the paper). We demonstrate the superiority of both the training-based and training-free methods theoretically and experimentally. **Compared to the previous masking-based methods, we propose a novel masked-element selection method which masks (zeros out) the elements in the cluster with high means. The superiority of the proposed masking strategy in content removal is backed by Proposition 1.**\\n\\n**Reply to `` Introducing a masking mechanism is effective has been demonstrated in previous studies'':**\\n\\nAlthough several studies have explored the effectiveness of masking mechanisms, our method differs from these approaches in several key aspects:\\n\\n1. **No coupled denoising processes**: Our method avoids the need for two denoising processes, thus saving computational resources. 
For instance, the DIFFEDIT method requires two denoising processes\\u2014one conditioned on the query text and the other conditioned on a reference text. By contrasting the predictions of the two diffusion models, DIFFEDIT generates a mask that locates the regions needing editing to match the query text.\\n\\n2. **Masking in the latent space**: Unlike DIFFEDIT, which operates on the pixel level to generate a mask highlighting the regions of the input image that need editing, our method performs masking in the latent space, bypassing pixel-level operations and patch-level manipulations.\\n\\n3. **Focus on content leakage in style transfer**: While the MDT method introduces a latent masking scheme to enhance the DPMs' ability to learn contextual relations among object semantics in an image, it focuses on predicting randomly masked tokens from unmasked ones. In contrast, our method targets content leakage in style transfer. We mask feature elements that are related to unwanted content from the style reference, guided by clustering results on the element-wise product. The \"From Text to Mask\" method leverages the rich multi-modal knowledge embedded in diffusion models to perform segmentation. By comparing different correlation maps in the denoising U-Net, it generates the final segmentation mask.\\n\\n\\n**Reply to `` the problem is not formally formulated'':** \\n\\nWe apologize for the confusion. 
We provide the formulation of the problem for clarity:\\n\\nIn the context of style transfer, given a style reference $c_1$, the content of the style reference $c_2$, and the target text prompt $c_3$, \\nthe text-driven style transfer aims to generate a plausible target image by combining the content of the target text prompt with the style of the style reference, while ensuring that the unwanted content from the style reference does not transfer into the generated result.\\n\\n**Reply to `` the selection criteria are not clearly described'':** \\n\\nWe apologize for the confusion. We introduced the selection criteria in Lines 212\\u2013231 in Section 3.1, as illustrated in Figure 3(a). \\n\\n **Reply to ``The paper does not show significant evidence to demonstrate its efficiency compared with the previous methods'':** \\n\\nWe provide visual comparisons in Figures 6, 7, 10, and 11 to demonstrate the superiority of our method compared to previous approaches. In comparison to the baseline models, IP-Adapter and InstantStyle, we present generation results across various coefficient values in Figure 11 of Appendix A.5. These results highlight that both IP-Adapter and InstantStyle heavily rely on test-time coefficient tuning for style strength, requiring users to engage in a labour-intensive process to achieve a balanced synthesis between target content and style. Particularly in high-coefficient scenarios, both models experience significant content leakage from the style references and a loss of text fidelity. In contrast, our method produces satisfactory results even in high-coefficient settings.\\n\\n **Reply to ``The proposed method is only evaluated on CIFAR-10 dataset'':** \\n\\nWe apologize for the confusion. 
**The proposed method was not only evaluated on our newly constructed dataset based on the classes of CIFAR-10 (not a real CIFAR-10 dataset) but also on the benchmark dataset Stylebench, proposed in StyleShot, both on text-driven and image-driven style transfer.**\"}", "{\"comment\": \"We sincerely thank you for your valuable comments and apologize for any confusion.\\n\\nWe have carefully incorporated all of your suggestions into our revised manuscript and have submitted it for your review. We have revised both the main manuscript and the appendix, with all changes marked in red for your convenience.\\n\\nFurther explanations are provided for replies 6 and 9.\", \"for_re_6\": \"The binary classification is performed using the CLIP model, with the CLIP-H/14 as the image encoder. Specifically, we denote the cosine similarity between the CLIP image feature of the generated image and the CLIP text feature of reference's content object as $\\\\frac{\\\\langle e_2, e_g \\\\rangle}{|e_2| \\\\cdot |e_g|}$. Similarly, we denote the cosine similarity between the CLIP image feature of the generated image and the CLIP text feature of text prompt as $\\\\frac{\\\\langle e_3, e_g \\\\rangle}{|e_3| \\\\cdot |e_g|}$. If $\\\\frac{\\\\langle e_2, e_g \\\\rangle}{|e_2| \\\\cdot |e_g|} < \\\\frac{\\\\langle e_3, e_g \\\\rangle}{|e_3| \\\\cdot |e_g|}$, the generated image is considered correctly classified, meaning it contains the target content rather than the content of the style reference.\", \"for_re_9\": \"Apologies for the confusion. In Section 4.1, the constructed dataset consists of 10 content objects (from CIFAR-10) and 21 image styles (11 for training and 10 for testing) for each content object, with 8 variations per style. This results in a total of $10 \\\\times 11 \\\\times 8 = 880$ style references. 
For each style reference, we perform style transfer for 5 target text prompts, with 4 generations per target text prompt, leading to $880 \\\\times 4 = 3520$ generations per text prompt. We randomly sample 50 images from the 3520 generated images for each target text prompt. In total, this gives us $50 \\\\times 5 = 250$ images from each method to evaluate. The same procedure is applied in the evaluation presented in Section 4.2.\\nWe have also included these details in Appendix A.4.2 for further clarity.\"}", "{\"summary\": \"The paper proposes a feature masking method to control the content leakage for the stylization task. It requires an additional content description of the image and decouples this content by masking it out in the style features. The authors also provided theoretical proofs to demonstrate the motivation of their method. Both qualitative and quantitative results are superior to the alternatives.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The proposed method is intuitive and works well.\\n2. Theoretical proofs are valid and well support the experiments.\\n3. Generally the writing is good and the story is complete, though there is some information missing as I mentioned in the following section.\\n4. Experiment design is comprehensive and the performance looks good.\", \"weaknesses\": \"1. The authors should compare with more recent stronger baselines that are proposed to alleviate the content leakage problem like RB-Modulation, which uses attention feature aggregation and different descriptors to decouple content and style. Since it is also training-free and mentioned to outperform InstantStyle, it would serve as a good baseline for comparison. CSGO is another recent work that uses a separately trained style projection layer to avoid content leakage. Though it\\u2019s pretty new (released a month before deadline), some qualitative results would help demonstrate the strengths of your method.\\n2. 
In L229, should m^i be 1 or 0?\\n3. Figure 3(b) is not referred to but seems to be mentioned in the experiments. I thought this should be part of the method you want to introduce. Can you explain how this is used with your proposed method? And how does the linear layer learn the content feature to be subtracted?\\n4. Theorem 1 indicates that the proposed method achieves a smaller divergence. Does \\u201csmaller divergence\\u201d define better style alignment? I\\u2019m asking because there might be several factors that can lead to smaller divergence, like the same background or elements in the images, content leakage, etc. I\\u2019d like to get your insights on what is the style in an image?\\n5. In L215, it seems cluster number K controls how many tokens are masked. Is there any analysis to show how K affects the performance?\\n6. Can you add more details on how you do binary classification as eval in L369?\\n7. Are you using style descriptions in the prompt?\\n8. Why does image alignment keep dropping in figure 5(a)?\\n9. Can you provide more details on how you conduct the user study? Like instructions to the raters and how you present the images to the raters.\\n\\nThe answers can be kept short and concise since there are many questions. Thanks!\", \"questions\": \"See weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"increase score\", \"comment\": \"The authors addressed all my concerns, increased score. As pointed out by the authors, the proposed method does not rely on style descriptions to achieve good performance, where most existing state-of-the-art methods fail to do so.\"}", "{\"comment\": \"We sincerely thank the reviewer for reminding us of the ethical issues. This work aims to make a positive impact on the field of AI-driven image generation. 
We aim to facilitate the creation of images with diverse styles, but we expect all related processes to comply with local laws and be used responsibly.\\n\\nThe use of AI to generate human-related images, particularly those involving characteristics such as skin color, gender, age, and other demographic factors, raises complex ethical questions. We are aware that the generation of images involving these attributes must be handled with care to avoid reinforcing stereotypes, perpetuating discrimination, or contributing to the misrepresentation of certain groups. We take these concerns very seriously and believe that AI should be used in a way that promotes fairness, inclusion, and respect for all individuals. Here, we give several examples of text prompts containing different genders, skin colors, and ages, as shown in Figure 22 in Appendix A.10.\\nWe observe that in most cases, our method is able to generate images with diversity. However, there are certain cases in which general image generation methods can be misused. \\n\\nIn light of these considerations, we have added an ethics statement in Appendix A.10, including the stipulation that the code and methodology in this paper shall be used responsibly. Users are expected to utilize this material in a way that avoids any potential bias related to sensitive attributes such as gender, race, age, and other demographic factors. We believe that the responsible use of AI-driven image generation tools is essential to fostering ethical and equitable outcomes in the field.\"}", "{\"comment\": \"Thanks for providing the new results on the traditional style transfer examples. The new results look promising and I suggest adding them back to the final draft. They are actually better than the figures shown in the paper. Most of my concerns have been addressed.\"}"
]
}
88AS5MQnmC
RRM: Robust Reward Model Training Mitigates Reward Hacking
[ "Tianqi Liu", "Wei Xiong", "Jie Ren", "Lichang Chen", "Junru Wu", "Rishabh Joshi", "Yang Gao", "Jiaming Shen", "Zhen Qin", "Tianhe Yu", "Daniel Sohn", "Anastasia Makarova", "Jeremiah Zhe Liu", "Yuan Liu", "Bilal Piot", "Abe Ittycheriah", "Aviral Kumar", "Mohammad Saleh" ]
Reward models (RMs) play a pivotal role in aligning large language models (LLMs) with human preferences. However, traditional RM training, which relies on response pairs tied to specific prompts, struggles to disentangle prompt-driven preferences from prompt-independent artifacts, such as response length and format. In this work, we expose a fundamental limitation of current RM training methods, where RMs fail to effectively distinguish between contextual signals and irrelevant artifacts when determining preferences. To address this, we introduce a causal framework that learns preferences independent of these artifacts and propose a novel data augmentation technique designed to eliminate them. Extensive experiments show that our approach successfully filters out undesirable artifacts, yielding a more robust reward model (RRM). Our RRM improves the performance of a pairwise reward model trained on Gemma-2-9b-it, on Reward-Bench, increasing accuracy from 80.61% to 84.15%. Additionally, we train two DPO policies using both the RM and RRM, demonstrating that the RRM significantly enhances DPO-aligned policies, improving MT-Bench scores from 7.27 to 8.31 and length-controlled win-rates in AlpacaEval-2 from 33.46% to 52.49%.
[ "Reward model", "RLHF", "Alignment" ]
Accept (Poster)
https://openreview.net/pdf?id=88AS5MQnmC
https://openreview.net/forum?id=88AS5MQnmC
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xDZuFWnECG", "wjBLBiC90K", "rO13CsdDzS", "m5iPKILddx", "lVawt8Acsb", "l83P8hVjXo", "c06qxLOGEV", "XhLRKcLMzz", "TzuJFa2WEP", "SINmMKcICz", "NM5fuqLxTH", "GDhrBjNnVy", "Em2uUqm49u", "Dn7Pe7Onpc", "D0qWT6resr", "9wuiuovcM2", "82zMhxlwlm" ], "note_type": [ "official_comment", "meta_review", "official_review", "decision", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732213613226, 1735064726690, 1730717269094, 1737523502853, 1732223522430, 1730649279557, 1732763178479, 1730537913692, 1732223710525, 1732213746925, 1732213392945, 1732214134957, 1730194195488, 1732213916260, 1732214082741, 1732213464645, 1732213493590 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2424/Authors" ], [ "ICLR.cc/2025/Conference/Submission2424/Area_Chair_wiex" ], [ "ICLR.cc/2025/Conference/Submission2424/Reviewer_gp5Y" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2424/Reviewer_Vrzq" ], [ "ICLR.cc/2025/Conference/Submission2424/Reviewer_Vrzq" ], [ "ICLR.cc/2025/Conference/Submission2424/Reviewer_1CBu" ], [ "ICLR.cc/2025/Conference/Submission2424/Reviewer_jPJZ" ], [ "ICLR.cc/2025/Conference/Submission2424/Authors" ], [ "ICLR.cc/2025/Conference/Submission2424/Authors" ], [ "ICLR.cc/2025/Conference/Submission2424/Authors" ], [ "ICLR.cc/2025/Conference/Submission2424/Authors" ], [ "ICLR.cc/2025/Conference/Submission2424/Reviewer_1CBu" ], [ "ICLR.cc/2025/Conference/Submission2424/Authors" ], [ "ICLR.cc/2025/Conference/Submission2424/Authors" ], [ "ICLR.cc/2025/Conference/Submission2424/Authors" ], [ "ICLR.cc/2025/Conference/Submission2424/Authors" ] ], "structured_content_str": [ "{\"comment\": \"We thank the reviewers for their constructive feedback and valuable insights. 
Below, we address each of the main concerns raised.\\n\\n> Re: The main weakness of this paper is that the experiments are not that comprehensive. Although the results are strong, the experiments are only conducted on one dataset, with one model, and with one random seed. Therefore we cannot tell if the results are statistically significant or generalizable.\\n\\nWe acknowledge that the experiments are not comprehensive enough, although we have evaluated thoroughly on RewardBench, MTBench, and AlpacaEval2 with various alignment algorithms (DPO/BoN) and baselines (ODIN/RM). To further enhance the confidence of our results, we add experiments with a Gemma-2-2b-it reward model in the Appendix (Additional Results with Gemma-2-2b-it) of the updated draft. To better understand whether the artifacts can be effectively mitigated, we also add experiments with two additional artifacts, bold faces and emojis, in the Appendix (Additional Analysis with Mixed Artifacts). These additional experiments strengthen the significance of our framework. \\n\\n> Re: The baseline is pretty weak with regards to reward bench. RLHFFlow with Llama 3 8B gets 87.1 on reward bench, while the author\\u2019s baseline gets 80.\\n\\nWe didn\\u2019t run Llama 3 8B due to policy reasons (https://huggingface.co/meta-llama/Llama-3.1-8B/blob/main/LICENSE). We use the same dataset as RLHFFlow, and for both the baseline and our approach we use the same model (Gemma-2-9b-it) to ensure a fair comparison. To further verify the results, we add a Gemma-2-2b-it reward model in the Appendix (Additional Results with Gemma-2-2b-it).\\n\\n> Re: The writing describing the methodology not that clear. I think more plainly describing the data augmentation would be better.\\n\\nWe thank the reviewer for the feedback. Our augmentation strategy is illustrated in Figure 1. 
We describe our approach in plain language in line 086 as \\u201cOur pipeline is illustrated in Figure 1, where we augment the reward model training data by using responses from other examples to effectively balance the artifacts in chosen and rejected responses\\u201d. In line 224, we also mention that \\u201cIn practice, we can shuffle the dataset twice to achieve $\\u03c3_1$ and $\\u03c3_2$.\\u201d. We can further improve the clarity of our approach in the camera-ready version.\\n\\n> Re: The data filtering step used in their method hurts their experiments interpretability, as it may be the case that the sole reason RRM gets good performance is due to the data filtering step. They should include a baseline where the original dataset is filtered, and then a RM is trained based on that.\\n\\nSorry for the confusion. In the paper we mentioned that \\u201cTo reduce the augmented data size, we first conduct inference on random 50% of the augmented data using the trained RM, \\u2026\\u201d. We only filter the data on the augmented triplets and do not filter the original data. So in both baseline and our approach, we use the full copy of the original data.\\n\\n> Re: How did the authors tune the hyperparameters?\\n\\nIn line 342-344 (and footnote 14) and line 350-352, we cover how to tune the hyperparameters. We use grid search to pick the best hyperparameter based on evaluation metrics (evaluation accuracy for RM and AlpacaEval2 for RLHF).\\n\\n> Re: What are the \\u201cthree possible prompts\\u201d described in line 226?\\n\\nThe three possible prompts correspond to $(x^{(i)}, x^{(\\\\sigma_1(i))}, x^{(\\\\sigma_2(i))})$, which corresponds to the original prompt, a prompt from shuffled dataset, and a prompt from a twice-shuffled dataset.\"}", "{\"metareview\": \"This paper proposes a robust reward model (RRM) training method to mitigate reward hacking by leveraging a causal inference framework. 
A causal graph for human preference modeling is introduced to help the model distinguish between contextual preference signals and context-free artifacts. Guided by the causal inference framework, the training data is augmented by reorganizing the (prompt, positive, negative) triplet. Experiments demonstrate that the proposed approach effectively filters out undesirable artifacts during reward model training, resulting in a more robust reward model. Specifically, when training the reward model on Gemma-2-9b-it, RRM achieves an absolute 3.54% accuracy improvement over RM on Reward-Bench. Furthermore, policies induced by RRM outperform those trained with RM and ODIN on the MT-Bench and AlpacaEval-2 benchmarks. Analysis also reveals that artifacts such as length bias and deliberately designed artifacts can be effectively eliminated.\", \"the_strengths_of_this_work_lie_in\": [\"Its focus on addressing a critical issue in reward model training.\", \"The novel application of a causal inference framework to tackle this issue.\", \"Given these two strengths, I recommend acceptance at the current stage, albeit with moderate confidence, due to the following reasons:\", \"Almost all reviewers raised concerns about the generalization of the proposed method to more capable LLMs, as experiments on LLMs other than Gemma or larger-scale models are missing. While the authors added results for Gemma-2-2b-it in the Appendix, they are strongly encouraged to include additional experiments on larger-scale LLMs before the camera-ready submission.\", \"Both Reviewer gp5Y and Reviewer jPJZ expressed concerns about the reasoning performance drop observed with RRM. 
Although the authors explained this is due to the test set, it is strongly recommended that they include results from another test set to further evaluate RRM's reasoning performance before the camera-ready version.\", \"Providing more comprehensive empirical results rather than limiting the work to a proof-of-concept would significantly broaden the scope and appeal to a wider audience.\"], \"additional_comments_on_reviewer_discussion\": \"During the discussion period, Reviewer Vrzq and Reviewer 1CBu actively engaged with the authors. Reviewer Vrzq expressed satisfaction with the responses provided, while Reviewer 1CBu's remaining concern primarily focused on the generalizability of the proposed method to LLMs beyond Gemma and to larger-scale models.\\n\\nThe strengths and weaknesses of this work have been summarized above, resulting in my current recommendation for acceptance, albeit with moderate confidence.\"}", "{\"summary\": \"Traditional reward model (RM) training for large language models (LLMs) struggles to separate prompt-driven preferences from irrelevant artifacts like response length and format. This work introduces a causal framework and data augmentation technique to remove these artifacts, resulting in a robust reward model (RRM) that improves performance across benchmarks, enhancing accuracy and alignment scores significantly.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"This paper addresses an important issue in preference datasets: how to disentangle prompt-driven preferences from prompt-independent artifacts.\", \"weaknesses\": \"While the problem studied in this paper is significant, the proposed solution is relatively straightforward, focusing only on expanding the dataset. To remove prompt-independent artifacts, the dataset size needs to be several times larger, increasing the training cost. 
Additionally, as shown in Figure 5, the effect of this costly augmentation is not very significant\\u2014after inserting 30% artifacts, the RRM only decreases from the original RM\\u2019s 25% to 20%.\\n\\nRegarding Table 1, the reasoning performance declines, and the authors explain this as due to math and coding tasks being less affected by non-contextual artifacts. This raises the question of whether the proposed method might have a negative impact on attributes unaffected by non-contextual artifacts.\", \"questions\": \"The method proposed in this paper to 'disentangle prompt-driven preferences from prompt-independent artifacts' seems to only eliminate biases unrelated to the input while preserving input-related preferences. This can effectively reduce bias under the 'helpful' alignment objective. However, if our alignment goal is 'safety', the response might be prompt-independent, typically just refusing to reply. Yet humans may still have preferences for different ways of refusal. In this case, how can we disentangle prompt-independent preferences from prompt-independent artifacts?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Thank you for the rebuttal\", \"comment\": \"Thank you for the rebuttal, it improved my understanding of the paper. All of my concerns are addressed.\\n\\nI still suggest the authors improve upon the writing (particularly section 3.2). I think that the \\\"Possible Combinations\\\" paragraph (line 221) could use some simplification. 
After reading the paper a few times it is clear what is meant, but when someone is first reading the paper it seems like an overcomplicated way to explain a simple idea.\"}", "{\"summary\": \"This paper aims to learn reward models that are unbiased by \\u201cartifacts\\u201d that come from human preference data: namely length biases and formatting. To do so, they propose to augment the RLHF preference dataset with preference pairs where one sample is in response to the prompt, and the other example is not in response to the prompt. They evaluate their learned reward model on reward bench and use it to improve downstream policy performance.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The authors study a very relevant and timely problem.\", \"The paper shows significant improvement in both reward modeling accuracy and in downstream policy performance.\", \"The authors do a really interesting study where they artificially add artifacts to the preference dataset, and find their method is more robust to it.\"], \"weaknesses\": [\"The main weakness of this paper is that the experiments are not that comprehensive. Although the results are strong, the experiments are only conducted on one dataset, with one model, and with one random seed. Therefore we cannot tell if the results are statistically significant or generalizable.\", \"The baseline is pretty weak with regards to reward bench. RLHFFlow with Llama 3 8B gets 87.1 on reward bench, while the author\\u2019s baseline gets 80.\", \"The writing describing the methodology is not that clear. I think more plainly describing the data augmentation would be better.\", \"The data filtering step used in their method hurts their experiments' interpretability, as it may be the case that the sole reason RRM gets good performance is due to the data filtering step. 
They should include a baseline where the original dataset is filtered, and then an RM is trained based on that.\"], \"questions\": [\"How did the authors tune the hyperparameters?\", \"What are the \\u201cthree possible prompts\\u201d described in line 226?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"NA\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for the response from the authors. Some of my concerns are addressed in the response, but I still doubt the generalization of the proposed method to settings that the experiments in this paper do not cover. I have decided to raise my rating to 5.\"}", "{\"summary\": \"This paper presents a method to enhance the robustness of the reward model for LLM alignment. More specifically, reward models are trained based on the question and the human labelers' preference over the answer pairs. A commonly observed un-robust behavior of the reward model is that there are spurious correlations between the label and the artifacts in the answers, which are not relevant to the question. The existence of such spurious correlations causes the reward model to favor certain types of artifacts, e.g. the length of the answer.\\n\\nIn previous works, some of the artifacts are specifically dealt with, like introducing a penalty for the length of the answer. However, these solutions lack a general principle and cannot handle other artifacts like the word preference and style of the answers. 
In this work, a systematic approach is proposed based on a causal framework, which is effective in debiasing the reward model from various artifacts.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": [\"The causal framework has been effective and widely used in other problems, but this is the first time I have seen it applied to LLM reward modeling, and it suits the problem under consideration very well.\", \"The implementation is very straightforward: just augment the data, no empirical tricks.\"], \"weaknesses\": [\"It seems suboptimal to me to train the reward model that is only dependent on the S, because the A part could contribute nontrivially to the reward function; in the case where both answers are equal in terms of the contextual quality, the labeler could prefer the answer which is organized as bullet points for readability.\", \"The explanation on why \\\"reasoning\\\" is performing worse under RRM is not quite convincing. Even if they are less affected by non-contextual artifacts, the augmented data should not have caused the degradation of performance either (as the augmented data are simpler pairs). I would suggest two other hypotheses to investigate:\", \"Look into how \\\"-Neutral\\\" works on \\\"Reasoning\\\", as requiring 50/50 preference over some random pairs could be too strong.\", \"Try to add back the \\\"A only\\\" reward model; as mentioned above, the artifact could contribute nontrivially to the reward, and removing it entirely could be detrimental especially for math and code, where a high-quality answer should be good in formatting, which is noncontextual.\"], \"questions\": [\"The same augmentation and disentangling technique can be applied to learn an artifact-only model. Some of the artifacts would improve the quality of the answer, if there is a consensus preference on such artifacts among the labelers. Is there any proposal to combine the contextual & non-contextual models? 
Would adding these two reward models work, or do we need to take the min of these two models? Any suggestions from the authors?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewers for their constructive feedback and valuable insights. Below, we address each of the main concerns raised.\\n\\n> Re: It seems suboptimal to me to train the reward model that is only dependent on the S, because the A part could contribute nontrivially to the reward function, in the case where both answers are equal in terms of the contextual quality, the labeler could prefer the answer which is organized as bullet points for readability.\\n\\nWe agree that responses with better organization, such as those formatted with bullet points, should be preferred in general. But we would like to point out that, conditioning on the prompt, there can be richer information for determining the preference. There can be cases where the prompt specifies \\u201cplease do not include bullet points in your response\\u201d. Even for factuality and safety attributes, the prompt can ask the model to create a virtual horror story which may violate the factuality/safety constraints.\\nRegarding \\u201cin the case where both answers are equal in terms of the contextual quality, the labeler could prefer the answer which is organized as bullet points for readability.\\u201d, we have a few thoughts to share:\\n\\n1. We would like to answer the question of \\u201cwhat the preference will be if both responses share the same artifacts\\u201d, following length-controlled AlpacaEval [1]. Thus, if we modify one answer to remove the \\u201cbullet points\\u201d, what will the preference be? 
Maybe after removing the bullet points, the contextual quality can change, because the prompt can indicate whether bullet points are needed. If the prompt is for creative writing, removing bullet points can even improve the preference.\\n\\n2. Even if there is such a pattern that should be uniformly preferred, our RRM can still learn this pattern because we only augment data and do not delete any existing data. The purpose of adding augmented data is to force the model to learn better and avoid naive shortcut prediction. The second point is that, theoretically, our proposed method can completely remove the effect of A when we have unlimited samples. But in practice, we don't have infinite samples, and due to the computation budget, we only add a small portion of the fully augmented data distribution. The added augmented data helps improve the robustness.\\n\\n3. We think the reviewer raises a valid point on prompt-agnostic artifacts that can uniformly contribute to the preference, such as safety. To address this, in the conclusion section, we mention that \\u201cFuture work will explore filtering augmented pairs and matching artifacts when constructing response pairs, further refining the training process.\\u201d. Thus, in the future, we should apply filters on the prompt and responses to guarantee that the losing responses are not too bad. But we believe that our work opens a door to debiasing RMs with a data augmentation approach under a causal learning framework. Follow-up works can focus on matching the effects of chosen/rejected responses, and filtering data by prompt/responses.\\n\\n[1] Dubois, Yann, et al. \\\"Length-controlled alpacaeval: A simple way to debias automatic evaluators.\\\" arXiv preprint arXiv:2404.04475 (2024).\"}", "{\"comment\": \"We thank the reviewers for their constructive feedback and valuable insights. 
Below, we address each of the main concerns raised.\\n\\n> Re: While the problem studied in this paper is significant, the proposed solution is relatively straightforward, focusing only on expanding the dataset. To remove prompt-independent artifacts, the dataset size needs to be several times larger, increasing the training cost. Additionally, as shown in Figure 5, the effect of this costly augmentation is not very significant\\u2014after inserting 30% artifacts, the RRM only decreases from the original RM\\u2019s 25% to 20%.\\n\\n> Re: While the problem studied in this paper is significant, the proposed solution is relatively straightforward, focusing only on expanding the dataset. To remove prompt-independent artifacts, the dataset size needs to be several times larger, increasing the training cost.\\n\\nWe acknowledge that our proposed solution may seem straightforward, as it involves dataset expansion. However, this simplicity belies the sophistication of the underlying causal learning framework, a novel theoretical approach not previously applied to this challenge. By disentangling prompt-independent artifacts from input-driven preferences, our method achieves both simplicity and effectiveness. This enables a broader range of applications across diverse use cases.\\nRegarding training cost, we recognize the concern but argue that the cost increase remains manageable within the broader context of reinforcement learning with human feedback (RLHF). The reward model (RM), while crucial to alignment, incurs comparatively less computational burden than supervised fine-tuning or policy optimization. Thus, improving the RM, the key component for model alignment, justifies the additional investment in training. 
In our study, we trained both the RM and the robust RM (RRM) to convergence, fully utilizing the value and information embedded in our expanded dataset to ensure optimal performance.\\n\\n> Re: Additionally, as shown in Figure 5, the effect of this costly augmentation is not very significant\\u2014after inserting 30% artifacts, the RRM only decreases from the original RM\\u2019s 25% to 20%.\\n\\nWe understand that the observed improvement may initially appear modest. However, it\\u2019s essential to consider the context of the evaluation. With 30% artifacts introduced into responses, a random baseline would yield around 30% artifact presence in optimal responses. Thus, achieving 25% with RM and 20% with RRM represents a meaningful improvement compared to the baseline. The reduction of 5 percentage points below this baseline signifies effective artifact mitigation, illustrating the method\\u2019s impact in refining alignment robustness.\"}", "{\"comment\": \"> Re: The data augmentation method is trivial. The new combination is derived by randomly selecting answers from other prompts. Since this negative sample is not related to the prompt with high probability, it is too easy for the model to distinguish the positive and negative answers. The author is suggested to incorporate some hard negatives to make the training more robust.\\n\\nThe data augmentation indeed appears to be trivial, but we argue that the approach is backed by a solid and sophisticated causal framework. To our knowledge, no previous work has been proposed with this simple approach. Instead, we argue that the \\u201ctriviality\\u201d of our approach is indeed the advantage instead of a disadvantage as it is shown to be quite effective on extensive numerical experiments and analysis.\\nWe fully agree with the reviewer that \\u201cit is too easy for the model to distinguish the positive and negative answers.\\u201d, that\\u2019s what the experiment section (line 337-342) has also covered. 
We construct the \\u201chard negatives\\u201d by applying a filter that selects the pairs on which the original RM performs badly. We appreciate the reviewer mentioning the \\u201chard negatives\\u201d. In the Conclusion section (line 539), we also mentioned that \\u201cFuture work will explore filtering augmented pairs and matching artifacts when constructing response pairs, further refining the training process.\\u201d Thus we believe that a better way to mine the hard negatives can be quite valuable. But we believe that this work opens the door to debiasing RMs with a data augmentation approach grounded in a causal learning framework. Hard negative mining can be a natural follow-up to this paper. To further address this, we added \\u201cDiscussion on Data Filtering Strategies\\u201d in the Appendix of the updated draft.\\n\\n> Re: Explain why it is needed to add neutrals and set both non-contextual responses as tie in Section 3.2. What is the necessity of this step?\\n\\nAs described in Section 3.2 (lines 221-227), we consider all 45 possible augmented triplets. The labeling rule is described in lines 234-237. We don\\u2019t have a prior that neutrals will work, but we believe the labeling rule is fair since both responses are off-topic and thus the preference is independent of the contextual signal S. As a result, the winning probability should be 0.5 according to the causal rule we proposed. As shown in Table 3, Neutrals can contribute to the improvement on AlpacaEval-2 and the MT-Bench first turn. In reality, users can try with and without neutrals and select the better one with minimal cost, since the neutral data can be easily filtered by checking the preference label.\\n\\n> Re: In Table 2, why the performance of RRM is not consistent in two chat datasets such as Chat and Chat Hard? In Chat dataset, RRM outperforms RM while in Chat Hard dataset, RRM is even worse.\\n\\nChat is a relatively easy task compared to Chat Hard. For instance, on MT-bench, the authors of RM-BENCH use 10s vs. 
1s to construct the comparison pairs for the Chat category, while using 7-8 vs. 5-6 to construct the Chat Hard category. \\nConsequently, the margin between the two responses in the Chat category is large, and whether or not we mitigate the style bias will not significantly influence the reward model performance. Therefore, both the RM and RRM perform very well on the Chat task, where the difference may result from training randomness (e.g., the random initialization). The performance is nearly saturated. \\nIn contrast, for Chat Hard, as described in Section 4.1 in [1], \\u201cHard Accuracy on RM-BENCH is significantly lower than Normal Accuracy, with most reward models failing to exceed random-level performance (50%). This reveals that many existing reward models are more akin to style preference models, favoring well-structured responses over those with stronger substantive content. Our findings highlight the urgent need to mitigate style bias and improve the robustness of reward models.\\u201d That\\u2019s why our approach can perform significantly better. This is strong evidence that we successfully mitigated style bias with a more robust reward model. RRM has 15% higher accuracy than the baseline RM on the Chat Hard dataset. \\n\\n[1] Liu, Yantao, et al. \\\"RM-Bench: Benchmarking Reward Models of Language Models with Subtlety and Style.\\\" arXiv preprint arXiv:2410.16184 (2024).\"}", "{\"summary\": \"This paper proposes a robust reward model (RRM) training method to mitigate reward hacking by utilizing a causal inference framework. A causal graph for human preference modeling is introduced to enable the model to distinguish between contextual preference signals and context-free artifacts. The training data is augmented by re-organizing the (prompt, positive, negative) triplets guided by the causal inference framework. Experiments show that the proposed approach can filter out undesirable artifacts in reward model training and yield a more robust reward model. 
Specifically, the reward model is trained on Gemma-2-9b-it, and RRM improves RM by an absolute 3.54% accuracy gain on Reward-Bench. Experiments also demonstrate that the policy induced by RRM outperforms RM and ODIN on the MT-Bench and AlpacaEval-2 benchmarks. Analyses show that length artifacts and deliberately designed artifacts can be eliminated.\", \"soundness\": \"1\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Reward hacking is an important problem in LLM alignment. This paper introduces a causal inference framework into reward modeling to mitigate reward hacking, which offers an interesting perspective. This idea makes a lot of sense, since the essence of reward hacking mitigation is to find the irrelevant artifacts that have no causal relationship with the preference label, and causal inference algorithms offer just such a tool. The idea is clearly presented and illustrated. The experiment results verify the effectiveness of the proposed method.\", \"weaknesses\": \"The experiments are insufficient. First, the experiments in this paper only cover two possible artifacts, namely length and a deliberately designed phrase, which is not compatible with author's claim that this method should be capable of handling \\\"all potential exploitation patterns\\\". More patterns such as format and emojis should be investigated in the experiments. Second, the reward model is trained only using Gemma-2-9b-it. More experiments should be conducted on different model types such as llama, and larger model sizes such as 57B and 72B. It is in doubt that when the model size increases, the reward hacking problem can be naturally mitigated and the effectiveness brought by this method may diminish. Third, in terms of policy evaluation, this method is only verified in DPO training whereas various other methods should also be considered such as PPO, whose performance relies heavily on reward model quality. 
Last, the author is suggested to add more baselines in experiment, such as reward model ensemble which can also alleviate reward hacking.\\n\\nThe data augmentation method is trivial. The new combination is derived by randomly selecting answers from other prompts. Since this negative sample is not related to the prompt with high probability, it is too easy for the model to distinguish the positive and negative answers. The author is suggested to incorporate some hard negatives to make the training more robust.\", \"questions\": \"1. Explain why it is needed to add neutrals and set both non-contextual responses as tie in Section 3.2. What is the necessity of this step?\\n2. In Table 2, why the performance of RRM is not consistent in two chat datasets such as Chat and Chat Hard? In Chat dataset, RRM outperforms RM while in Chat Hard dataset, RRM is even worse.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"> Re: The explanation on why \\\"reasoning\\\" is performing worse under RRM is not quite convincing. Even if they are less affected by non-contextual artifacts, the augmented data should not have caused the degrade of performance either (as the augmented data are simpler pairs). 
I would suggest two other hypothesis to investigate:\\n> * Look into how \\\"-Neutral\\\" works on \\\"Reasoning\\\" as requiring 50/50 preference over some random pairs could be too strong.\\n> * Try to add back the \\\"A only\\\" reward model, as mentioned above the artifact could contribute nontrivially to the reward, removing it entirely could be detrimental especially for math and code, where a high quality answer should be good in formatting which is noncontextual.\\n\\nLet's address the above one by one.\\n\\n> Re: The explanation on why \\\"reasoning\\\" is performing worse under RRM is not quite convincing\\n\\nAfter a careful examination of the RewardBench dataset, we realize that the reasoning test samples are generated in an asymmetrical way. Specifically, all the chosen responses are generated by humans, while all the rejected responses are generated by GPT-4. As a result, models can learn spurious format features to cheat on the test. For instance, in the math-prm subset, the average number of characters in the chosen response is 524.6, while it is 1213.1 for the rejected responses. As our method tries to mitigate the related biases, the decline in the reasoning accuracy is also expected. Therefore, the high reasoning accuracy of the RM cannot reflect the real performance advantage. This is a weakness of the evaluation benchmark, and it would be interesting to build a more reliable test set in the future. \\n\\n> Re: Look into how \\\"-Neutral\\\" works on \\\"Reasoning\\\" as requiring 50/50 preference over some random pairs could be too strong.\\n\\nThis is a great suggestion. We evaluated Reasoning with the \\u201c-Neutral\\u201d RRM, and the score decreased from 90.62 to 86.27. We hypothesized that this is because the reasoning prompt set is relatively small and the augmented non-contextual data can be too easy (pairing one reasoning and one non-reasoning response), so the augmented data are more likely to be filtered out by the original RM. 
With Neutral, at least both responses are reasoning-related or the prompt is reasoning-related, which may balance the proportion of reasoning data in the data mixture. But we agree that assigning 50/50 requires careful filtering of the relevance of the response to the prompt. Sometimes it can be too strong.\\n\\n> Re: Try to add back the \\\"A only\\\" reward model, as mentioned above the artifact could contribute nontrivially to the reward, removing it entirely could be detrimental especially for math and code, where a high quality answer should be good in formatting which is noncontextual.\\n\\nAs explained above, the validation set on reasoning has a strong style bias: the chosen answers are written by human experts, while the rejected answers are generated by an LLM, which has a strong style. This indicates the \\u201cA only\\u201d model can be useful. However, our work mainly focuses on chat prompts for improving instruction-following and quality. Besides, we still include the original data in our training, which suggests that the noncontextual information can still be learned. We argue the \\u201cA only\\u201d model may be a better fit as a pointwise reward, while the augmentation works for pairwise rewards. But we need careful filtering after the augmentation. We added a section \\u201cDiscussion on Data Filtering Strategies\\u201d in the Appendix further discussing this.\\n\\n> Re: The same augmentation and disentangle technique can be applied to learn a artifact only model. Some of the artifacts would improve the quality of the answer, if there is a consensus preference on such artifacts in the labelers. Is there any proposal to combine the contextual & non-contextual models? Would adding these two reward models work, or do we need to take the min of these two models? Any suggestions from the authors?\\n\\nThis is a great question. We suggest that the reward model can include multiple attributes/dimensions. 
The RRM works especially well for instruction following and helpfulness. For safety or other prompt-independent dimensions, we can use another head to predict them, as done in ODIN [1].\\n\\n[1] Chen, Lichang, et al. \\\"Odin: Disentangled reward mitigates hacking in rlhf.\\\" arXiv preprint arXiv:2402.07319 (2024).\"}", "{\"comment\": \"We thank the reviewers for their constructive feedback and valuable insights. Below, we address each of the main concerns raised.\\n\\n> Re: First, the experiments in this paper only cover two possible artifacts, namely length and a deliberately designed phrase, which is not compatible with author's claim that this method should be capable of handling \\\"all potential exploitation patterns\\\". More patterns such as format and emojis should be investigated in the experiments. \\n\\nWe acknowledge that the artifacts covered in this paper are limited. To further investigate the effectiveness of our approach, we conduct a follow-up analysis as follows:\\n1. With p=0.1, add bold face (wrap the whole response in bold face) to the chosen responses.\\n2. After step 1, with p=0.1, further add an emoji to the end of the chosen responses.\\n3. Train RM and RRM and do the same analysis as in Figure 5.\\nWe add a section \\u201cAdditional Analysis with Mixed Artifacts\\u201d in the Appendix including the above analysis.\\n\\n> Re: Second, the reward model is trained only using Gemma-2-9b-it. More experiments should be conducted on different model types such as llama, and larger model sizes such as 57B and 72B. It is in doubt that when the model size increases, the reward hacking problem can be naturally mitigated and the effectiveness brought by this method may diminish.\\n\\nWe didn\\u2019t run Llama 3 8B due to policy reasons (https://huggingface.co/meta-llama/Llama-3.1-8B/blob/main/LICENSE). 
To verify the effectiveness of our approach on a different model size, we conduct experiments on Gemma-2-2b-it (added to \\u201cAdditional Results with Gemma-2-2b-it\\u201d in the Appendix). For larger models such as 57B and 72B, we lack the computational resources to conduct experiments at that scale. Regarding large models, the GPT-4 judge (itself a large model) is found to have a length bias when rating side-by-side (sxs) responses [1]. This indeed suggests that the bias pattern does not diminish as model size increases.\\n\\n[1] Zheng, Lianmin, et al. \\\"Judging llm-as-a-judge with mt-bench and chatbot arena.\\\" Advances in Neural Information Processing Systems 36 (2023): 46595-46623.\\n\\n> Re: Third, in terms of policy evaluation, this method is only verified in DPO training whereas various other methods should also be considered such as PPO, whose performance relies heavily on reward model quality. Last, the author is suggested to add more baselines in experiment, such as reward model ensemble which can also alleviate reward hacking.\\n\\nBesides DPO, we also evaluate the Best-of-N (BoN) policy. [1] shows that the BoN policy performs similarly to, and is more KL-efficient than, PPO. BoN relies heavily on reward model quality, and thus we believe that our evaluation of the aligned policy is convincing. \\nRegarding the reward model ensemble, we argue that it is orthogonal to our work. We focus on debiasing the artifacts learned in the reward model, which is more about mitigating bias, whereas the reward model ensemble is more about reducing variance. It cannot eliminate the artifact bias (such as length, emoji, or format) existing in the reward model. Besides, our approach can be naturally combined with the reward model ensemble approach.\\n\\n[1] Gao, Leo, John Schulman, and Jacob Hilton. \\\"Scaling laws for reward model overoptimization.\\\" International Conference on Machine Learning. 
PMLR, 2023.\"}", "{\"comment\": \"> Re: Regarding Table 1, the reasoning performance declines, and the authors explain this as due to math and coding tasks being less affected by non-contextual artifacts. This raises the question of whether the proposed method might have a negative impact on attributes unaffected by non-contextual artifacts.\\n\\n> Re: the reasoning performance declines\\n\\nAfter a careful examination of the RewardBench dataset, we realize that the reasoning test samples are generated in an asymmetrical way. Specifically, all the chosen responses are generated by humans, while all the rejected responses are generated by GPT-4. As a result, models can learn spurious format features to cheat on the test. For instance, in the math-prm subset, the average number of characters in the chosen response is 524.6, while it is 1213.1 for the rejected responses. As our method tries to mitigate the related biases, the decline in the reasoning accuracy is also expected. Therefore, the high reasoning accuracy of the RM cannot reflect the real performance advantage. This is a weakness of the evaluation benchmark, and it would be interesting to build a more reliable test set in the future. \\n\\n> Re: the proposed method might have a negative impact on attributes unaffected by non-contextual artifacts.\\n\\nWe totally agree with the reviewer that certain non-contextual artifacts may be desired. In our setting, most of the RM datasets are chat, so we care more about instruction following and relevance. Regarding attributes such as code execution, math reasoning correctness, and safety, we agree that these should not be added to the augmentation. There is a trade-off between helpfulness (instruction following) and those attributes. If the rejected responses are not extremely bad, they can still be helpful in addressing the prompts. Careful filtering of extremely bad responses is needed. 
In the conclusion section, we mention that \\u201cFuture work will explore filtering augmented pairs and matching artifacts when constructing response pairs, further refining the training process.\\u201d Thus, in the future, we should apply filters to the prompts and responses to guarantee that the losing responses are not too bad. Nevertheless, we believe that our work opens the door to debiasing RMs with a data augmentation approach grounded in a causal learning framework. Follow-up work can focus on matching the effects of chosen/rejected responses, and filtering data by prompt/responses. To further address this part, we added a section in the Appendix (Discussion on data filtering strategies) in the updated version.\"}", "{\"comment\": \"> Re: The method proposed in this paper to 'disentangle prompt-driven preferences from prompt-independent artifacts' seems to only eliminate biases unrelated to the input while preserving input-related preferences. This can effectively reduce bias under the 'helpful' alignment objective. However, if our alignment goal is 'safety', the response might be prompt-independent, typically just refusing to reply. Yet humans may still have preferences for different ways of refusal. In this case, how can we disentangle prompt-independent preferences from prompt-independent artifacts?\\n\\nWe thank the reviewer for highlighting this important consideration. We fully acknowledge this limitation. In the context of this work, our primary focus was on chat-oriented prompts, where responses generated by large language models were not extremely unsafe, and thus this approach was effective in our evaluation. In real applications, we agree that safety alignment requires careful handling of prompt-independent preferences. In future iterations, we will refine our augmentation process to include additional safety layers, ensuring that rejected responses do not introduce undue risk. 
To further address this part, we added a section in the Appendix (Discussion on data filtering strategies) in the updated version.\"}" ] }
87DtYFaH2d
Tell Me What You Don't Know: Enhancing Refusal Capabilities of Role-Playing Agents via Representation Space Analysis and Editing
[ "Wenhao Liu", "Siyu An", "Junru Lu", "Muling Wu", "Tianlong Li", "Xiaohua Wang", "Xiaoqing Zheng", "di yin", "Xing Sun", "Xuanjing Huang" ]
Role-Playing Agents (RPAs) have shown remarkable performance in various applications, yet they often struggle to recognize and appropriately respond to hard queries that conflict with their role-play knowledge. To investigate RPAs' performance when faced with different types of conflicting requests, we develop an evaluation benchmark that includes contextual knowledge conflicting requests, parametric knowledge conflicting requests, and non-conflicting requests to assess RPAs' ability to identify conflicts and refuse to answer appropriately without over-refusing. Through extensive evaluation, we find that most RPAs behave significant performance gaps toward different conflict requests. To elucidate the reasons, we conduct an in-depth representation-level analysis of RPAs under various conflict scenarios. Our findings reveal the existence of rejection regions and direct response regions within the model's forwarding representation, and thus influence the RPA's final response behavior. Therefore, we introduce a lightweight representation editing approach that conveniently shifts conflicting requests to the rejection region, thereby enhancing the model's refusal accuracy. The experimental results validate the effectiveness of our editing method, improving RPAs' refusal ability of conflicting requests while maintaining their general role-playing capabilities.
[ "Role-play Agents", "Refusal Capabilities", "Representation Editing", "Representation Analyze" ]
Reject
https://openreview.net/pdf?id=87DtYFaH2d
https://openreview.net/forum?id=87DtYFaH2d
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wcUZgB6wok", "w4gzScGqSA", "v6j6FQWXIa", "sIwS2THrYm", "pmxpO5ArJz", "nbSY5SerCX", "nDX6q9Ce3b", "jOscLySNNw", "httLJUtqmf", "hE31Fce4Q8", "fumJsvQeAq", "eDiYzxRHiy", "dGvkvhpAEF", "c96cxprmz5", "Z3b9Vs3QYZ", "XVyY6SLebw", "WgwJV56NyG", "WLHZGmvhkH", "UL8ADzZP9Y", "T6UoI53pS6", "SPTEhfLGt7", "RRLPYQu2d8", "PTF68wkCoh", "PIotUKzy8i", "P6MW4G2a0Q", "NQEmQWLUHZ", "Mxm02HSYox", "LP7VK69vqe", "IrKc2SKytV", "H57ykDAIbk", "F8bekCV1pO", "D1YpmhSdBU", "CjOdLSIQKd", "Brqjda5Anb", "AgEhlvwVIs", "9P6GNh7Fyv", "6spwZ0oWCS", "6NAI3Aa7sh", "6AohlZWUwN", "5vvfEyXE4h", "5LOPz16v2M", "4H1zzAYgJ8", "3tNK9WcX37", "27vQQL9nsP", "1E7D4dbtBX" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730801269519, 1731990087750, 1731989469533, 1730661270752, 1732002847481, 1731989715636, 1730876135311, 1731990162859, 1731989989885, 1732308644200, 1731987070334, 1731987552524, 1730713197325, 1731989907429, 1731989860610, 1731988862376, 1731987231221, 1734385914254, 1730668903282, 1732309386839, 1732297072600, 1731992262304, 1731987897906, 1731990237218, 1732255734651, 1731988443060, 1731988001387, 1731989012990, 1732695887390, 
1731989147278, 1731987328405, 1732308895122, 1732664022722, 1731987382447, 1731988200957, 1731990283703, 1731987698677, 1731988359868, 1731988550590, 1732207482106, 1737523642732, 1731989225747, 1732639032173, 1731988659210, 1731989406420 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4477/Reviewer_oci5" ], [ "ICLR.cc/2025/Conference/Submission4477/Authors" ], [ "ICLR.cc/2025/Conference/Submission4477/Authors" ], [ "ICLR.cc/2025/Conference/Submission4477/Reviewer_GGuM" ], [ "ICLR.cc/2025/Conference/Submission4477/Authors" ], [ "ICLR.cc/2025/Conference/Submission4477/Authors" ], [ "ICLR.cc/2025/Conference/Submission4477/Reviewer_tFS5" ], [ "ICLR.cc/2025/Conference/Submission4477/Authors" ], [ "ICLR.cc/2025/Conference/Submission4477/Authors" ], [ "ICLR.cc/2025/Conference/Submission4477/Reviewer_j72b" ], [ "ICLR.cc/2025/Conference/Submission4477/Authors" ], [ "ICLR.cc/2025/Conference/Submission4477/Authors" ], [ "ICLR.cc/2025/Conference/Submission4477/Reviewer_Dn9U" ], [ "ICLR.cc/2025/Conference/Submission4477/Authors" ], [ "ICLR.cc/2025/Conference/Submission4477/Authors" ], [ "ICLR.cc/2025/Conference/Submission4477/Authors" ], [ "ICLR.cc/2025/Conference/Submission4477/Authors" ], [ "ICLR.cc/2025/Conference/Submission4477/Area_Chair_ETJi" ], [ "ICLR.cc/2025/Conference/Submission4477/Reviewer_j72b" ], [ "ICLR.cc/2025/Conference/Submission4477/Reviewer_j72b" ], [ "ICLR.cc/2025/Conference/Submission4477/Reviewer_j72b" ], [ "ICLR.cc/2025/Conference/Submission4477/Authors" ], [ "ICLR.cc/2025/Conference/Submission4477/Authors" ], [ "ICLR.cc/2025/Conference/Submission4477/Authors" ], [ "ICLR.cc/2025/Conference/Submission4477/Authors" ], [ "ICLR.cc/2025/Conference/Submission4477/Authors" ], [ "ICLR.cc/2025/Conference/Submission4477/Authors" ], [ "ICLR.cc/2025/Conference/Submission4477/Authors" ], [ "ICLR.cc/2025/Conference/Submission4477/Authors" ], [ "ICLR.cc/2025/Conference/Submission4477/Authors" ], [ "ICLR.cc/2025/Conference/Submission4477/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission4477/Reviewer_j72b" ], [ "ICLR.cc/2025/Conference/Submission4477/Reviewer_Dn9U" ], [ "ICLR.cc/2025/Conference/Submission4477/Authors" ], [ "ICLR.cc/2025/Conference/Submission4477/Authors" ], [ "ICLR.cc/2025/Conference/Submission4477/Authors" ], [ "ICLR.cc/2025/Conference/Submission4477/Authors" ], [ "ICLR.cc/2025/Conference/Submission4477/Authors" ], [ "ICLR.cc/2025/Conference/Submission4477/Authors" ], [ "ICLR.cc/2025/Conference/Submission4477/Authors" ], [ "ICLR.cc/2025/Conference/Submission4477/Reviewer_GGuM" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4477/Authors" ], [ "ICLR.cc/2025/Conference/Submission4477/Reviewer_oci5" ], [ "ICLR.cc/2025/Conference/Submission4477/Authors" ], [ "ICLR.cc/2025/Conference/Submission4477/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The paper investigates the challenges faced by Role-Playing Agents (RPAs) in handling conflicting queries that contradict their role-play knowledge. The authors perform an in-depth analysis of RPAs' performance across different types of requests, including contextual and parametric knowledge conflicts. They identify the presence of \\\"rejection regions\\\" and \\\"direct response regions\\\" in representation space. Based on these findings, they propose a lightweight representation editing method that enhances the refusal accuracy of RPAs while maintaining their overall role-playing capabilities.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper studies an interesting and important problem. Enhancing RPAs\\u2019 ability to refuse questions they do not know could have important implications for various applications, like virtual assistants and game design.\\n\\n2. I like the representation analysis part. I believe it is a novel finding to identify \\\"rejection regions\\\" and \\\"direct response regions\\\". The analysis provides adequate motivation for the proposed representation editing method.\\n\\n3. 
The authors provide extensive experiments to demonstrate the effectiveness of the proposed solutions.\", \"weaknesses\": \"1. The paper could benefit from including user-centric studies to evaluate the real-world impact of enhanced refusal capabilities.\\n\\n2.While the empirical findings are strong, the theoretical underpinning of the rejection and response regions may require further exploration to enhance understanding.\", \"questions\": \"Please refer to the Weaknesses above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Lack of discussion on the refusal mechanisms of existing LLMs.\", \"comment\": \"Our research analyzed the refusal mechanisms of existing LLMs and found that their refusal capabilities are closely linked to internal representations. Based on this, we proposed a representation editing method that enhances the model's refusal capabilities by addressing its internal mechanisms.\\n\\n**1. Source of Refusal Mechanisms:**\\nAccording to our analysis, the generation of refusal responses by a model is related to its internal mechanisms, specifically the presence of representations associated with refusal. When a query triggers the refusal mechanism, the representations related to refusal may become activated, leading the model to produce a refusal response. Through probe analysis and t-SNE visualization, we observed that the representations in refusal mode (queries that generate refusal responses) can be distinctly separated from those in non-refusal mode (queries that do not generate refusal responses), supporting this view to some extent.\\n\\n**2. Relationship Between Our Method and Existing Refusal Mechanisms:**\\nBased on the understanding of the model's refusal mechanisms, we proposed the representation editing method, which aims to enhance refusal capabilities by directly addressing the model's internal mechanisms. 
Our representation editing method leverages and strengthens the model's existing refusal mechanisms without requiring additional training or changes to model parameters. This approach complements existing refusal mechanisms by providing a lightweight and effective method to enhance refusal capabilities from the perspective of internal representations.\"}", "{\"title\": \"Response to Q12: General Implications\", \"comment\": \"**1. Importance of the Research Problem (Refusal Capability):**\\n\\nThe core of our research is to enhance the refusal capability of LLMs in role-playing agents (RPAs). This capability is crucial for developing trustworthy and reliable AI systems. Models with refusal capabilities can identify and appropriately refuse to respond to requests that are beyond their knowledge scope, conflict with their role settings, or contain inappropriate content. This not only improves the safety and reliability of the models but also enhances user experience by preventing the dissemination of incorrect information.\\n\\n**2. Our Contributions and Their General Significance:**\\n\\nWhile our experiments primarily focus on role-playing scenarios, our findings and methods have broad applicability and offer important insights for the wider fields of AI and natural language processing.\\n\\n(1) **Evaluation of Current Models' Refusal Capabilities:**\\n\\nWe assessed the performance of existing mainstream large language models in handling different types of conflict queries, revealing differences in how models handle contextual knowledge conflicts versus parametric knowledge conflicts. 
This evaluation not only provides a baseline for understanding the current state but also highlights potential risks in model safety and reliability.\\n\\n*General Significance*: This evaluation method and scenario design can be extended to other tasks and fields, helping developers identify performance deficiencies in models under different contexts and make targeted improvements.\\n\\n(2) **Exploration of Why Models Perform Differently on Various Queries:**\\n\\nBy analyzing the internal representations of models, we discovered differences in the internal mechanisms when handling different types of conflict queries, particularly the lack of internal differentiation capability in parametric knowledge conflicts. This finding reveals the connection between a model's internal representations and its behavior.\\n\\n*General Significance*: This deep understanding of internal mechanisms not only helps explain model behavior but also provides new perspectives for other researchers to analyze and improve model performance across different tasks.\\n\\n(3) **Proposal of a Representation Editing Method Based on Exploration Results:**\\n\\nWe proposed a Representation Editing method that enhances a model's refusal capability without altering its parameters. This method adjusts the model's internal representations to make it more inclined to refuse conflict queries.\\n\\n*General Significance*: This method is a lightweight and versatile technique applicable to various models and tasks. It offers a new approach to improving model safety and reliability without affecting its original capabilities.\\n\\n**3. Generalization and Application of Results:**\\n\\n(1) **Applicable to Various Application Scenarios:**\\n\\nRefusal capability is necessary for any AI system that requires interaction with users. 
For example:\n\n- **Virtual Assistants and Dialogue Systems**: Improve the model's ability to handle inappropriate or out-of-scope questions, avoiding misleading users.\n- **Content Moderation and Safety Filtering**: Enable models to identify and refuse to generate harmful or policy-violating content.\n- **Professional Field Applications**: In fields like healthcare, law, and education, ensure that models do not provide incorrect information when encountering knowledge gaps.\n\n(2) **Implications for Model Training and Improvement:**\n\nOur research emphasizes the importance of focusing on internal representations, encouraging consideration of how models can better understand and manage their knowledge boundaries during training and design. This aids in developing more controllable and trustworthy AI systems.\n\n(3) **Promotion of AI Safety and Ethics:**\n\nEnhancing a model's refusal capability helps prevent the spread of incorrect information and protects users from potential misinformation. This aligns with AI safety and ethical principles, contributing positively to industry development.\n\n**Conclusion:**\n\nIn summary, while our research uses role-playing agents as an example, its methods and findings have broad applicability and general significance. We believe that by evaluating and enhancing models' refusal capabilities, understanding performance differences across queries, and proposing general improvement methods, our work provides valuable references for building safer, more reliable, and trustworthy AI systems.\"}", "{\"summary\": \"The paper discusses role-playing agents, which is of great interest. However, while the paper discusses the motivation for \\\"enhancing of refusals\\\", considering the refusal capabilities of state-of-the-art LLMs such as GPT-4o, one wonders why refusal is such a problem, i.e., it seems possible to achieve it using RL or DPO, so what is the problem? 
While there is nonetheless merit in another method, the relation to existing refusals of LLMs (irrespective of whether it is due to knowledge boundaries or ethical reasons) should be better discussed.\nThe paper also argues that while there exist a few approaches to handle the issue, a systematic evaluation is lacking. However, the paper fails to show convincingly why its approach is systematic and even scientific. That is, the refusal patterns come out of the blue; there is no detailed description of how they were derived so that they might be reproduced and potential gaps could be shown. While I understand that for CS conferences this is rarely done, it is still a shortcoming, as even in CS one would expect more elaboration and motivation. Thus, as a reader it is hard to assess the categories due to lack of depth. \nThat is, the paper might benefit from more focus, as it also aims to introduce a benchmark. However, here also the description is less than 2 pages, leaving many questions open. It is appreciated that data and code are open-sourced, but given the space constraints of conference papers, it seems close to impossible to aim for doing more than a benchmark (or method) within a paper. \nOn the positive side, the investigation of why there is a gap in handling different types of conflicting queries is interesting and may be worth expanding. Also the editing method based on Li et al. is interesting and more could be done in this direction.\", \"details\": [\"Abstract: Grammar issue: we find that most RPAs behave significant performance gaps toward different conflict requests\"], \"intro\": [\"Ideally, the example \\\"Who murdered ...\\\" in the intro would be real, not hypothetical. Otherwise it looks as if a good example is hard to find...\", \"As a methodological shortcoming, the paper uses GPT-4o in dataset construction and also evaluates GPT-4o on it, claiming that it outperforms other models. 
While this might be technically correct, a NIPS paper from 2023/24 discussed at great length self-evaluation biases, showing that models tend to evaluate themselves more favorably. This has not been brought up in the paper.\", \"Conclusions: PRAs -> RPAs\"], \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"see above\", \"weaknesses\": \"see above\", \"questions\": \"None, really.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Global Response\", \"comment\": \"We express our gratitude to all the reviewers for their valuable insights and constructive feedback! We are pleased to hear that you appreciated our contributions to enhancing the refusal capabilities of RPAs through representation space analysis and editing. We would like to highlight some of the strengths of our work as noted by the reviewers:\n\n1. **Interesting and Important Problem** (Reviewer `tFS5`, `oci5`, `Dn9U`, `j72b`, `GGuM`) \n\n2. **In-depth Analysis** (Reviewer `oci5`, `Dn9U`) \n\n3. **Comprehensive Experiments** (Reviewer `oci5`, `Dn9U`).\n\n4. **Well-Written and Easy to Understand** (Reviewer `tFS5`).\n\nA major concern shared by several reviewers was the need for more detailed explanations regarding the selection of baselines. To address this, we have clarified our rationale for choosing Fine-tuning and LoRA as baseline methods. These methods are widely used and recognized for enhancing model capabilities, making them suitable benchmarks for evaluating the effectiveness of our representation editing approach. By comparing our method against these established techniques, we aim to provide a fair and comprehensive assessment of its performance.\n\nAnother concern raised was the potential bias introduced by using GPT-4 for data generation and evaluation. 
We have clarified that our choice was based on GPT-4's high alignment with human evaluations, as supported by existing research. Furthermore, we incorporated human evaluation steps to ensure data quality and diversity, minimizing any potential biases.\\n\\nFinally, we express our sincere gratitude to the reviewers for recognizing the contributions of our proposed method to enhancing the refusal capabilities of RPAs. Our work offers a new perspective on improving model safety and reliability through representation editing, providing a lightweight and efficient alternative to more resource-intensive methods. We hope this study inspires further research in developing robust and trustworthy AI systems.\"}", "{\"title\": \"Response to W3&Q2&Q6&Q7&Q10: Detail of t-SNE\", \"comment\": \"**Q7: Method for Separating Regions**\\n\\nWe use t-SNE to visualize and analyze the model's representation space to identify the distribution of different types of queries (such as conflict and non-conflict queries) within the representation space.\\n\\n**Q6: Reasons for Choosing t-SNE**\\n\\n1. **Need for Dimensionality Reduction:**\\n In our study, the model's representations are typically high-dimensional, making direct analysis and visualization challenging. To better understand and analyze these high-dimensional data, we need to reduce them to a more manageable space.\\n\\n2. **Reasons for Choosing t-SNE:**\\n t-SNE is a nonlinear dimensionality reduction technique particularly suited for visualizing high-dimensional data. It effectively preserves local structures, ensuring that similar data points remain close in the low-dimensional space. This is crucial for identifying patterns and structures (such as rejection and direct response regions) in our model's representations.\\n\\nt-SNE has been widely used as a tool for representation analysis. 
In many studies, t-SNE has proven to be an effective tool for analyzing and visualizing high-dimensional data [1][2][3].\\n\\n**Q10: Practical Significance of t-SNE**\\n\\nIn Figures 3 and 10, we show the distribution of different types of queries in the representation space. These regions represent the internal representation states of the model when processing different types of queries. By identifying these regions, we can better understand the model's decision-making process and further optimize its refusal capabilities. As introduced in Section 6.1, our representation editing method can adjust the model's representations to ensure more conflict queries fall into the rejection region, thereby enhancing the model's refusal capability.\\n\\n**W3&Q2: Detailed Explanation of the t-SNE Visualization Process**\\n\\nAs mentioned in Section 5.2, we use t-SNE to reduce the dimensionality of and visualize the hidden states of the model's last layer to analyze the distribution of different types of queries in the model's internal representation space. The main steps of t-SNE can be divided into:\\n\\n1. Data Collection\\n2. t-SNE Dimensionality Reduction\\n3. Visualization\", \"detailed_steps_are_as_follows\": \"1. **Data Collection:**\\n - **Model Input:** We format the queries according to the prompt template shown in Figure 5 as input to the model.\\n - **Extracting Representations:** For each query type (non-conflict queries, contextual knowledge conflict queries, parametric knowledge conflict queries, etc.), we extract the hidden states of the last token from the model as the embedding for that query. To eliminate the influence of the token's inherent meaning, we standardize the last token to an end-of-text symbol, such as `<|eot_id|>` for `Llama-3.1-8B-Instruct`.\\n - **Sample Size:** For each query type, we select 50 samples from the test set for t-SNE visualization.\\n\\n2. 
**t-SNE Parameter Settings:**\n - **Algorithm Implementation:** We use the `TSNE` class from Python's scikit-learn library. For parameter selection, we use its default settings.\n ```python\n from sklearn.manifold import TSNE\n tsne = TSNE(n_components=2, random_state=42)\n data_2d = tsne.fit_transform(data)\n ```\n\n3. **Ensuring Reproducibility:** To further enhance reproducibility, we plan to provide a complete code example for t-SNE visualization in our code repository, including the full process of data extraction, dimensionality reduction, and plotting.\n\n\n[1] Zou, Andy, et al. \\\"Representation Engineering: A Top-Down Approach to AI Transparency.\\\" arXiv preprint arXiv:2310.01405 (2023).\n\n[2] Ji, Ziwei, et al. \\\"LLM Internal States Reveal Hallucination Risk Faced with a Query.\\\" arXiv preprint arXiv:2407.03282 (2024).\n\n[3] Li, Tianlong, Xiaoqing Zheng, and Xuanjing Huang. \\\"Open the Pandora's Box of LLMs: Jailbreaking LLMs through Representation Engineering.\\\" arXiv preprint arXiv:2401.06824 (2024).\"}", "{\"summary\": \"The paper targets understanding the limitations of role-playing agents (RPAs) in recognizing and responding to hard queries that conflict with their knowledge. To this end, the authors develop an evaluation benchmark including conflicting and non-conflicting requests to assess RPAs' ability to identify conflicts and refuse to answer in hard cases without over-refusing, thereby deriving some findings about the RPAs' performance against different conflicts as well as the underlying reasons through a representation-level analysis. 
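The visualization pipeline sketched in the response above (per-type last-token embeddings, default-setting t-SNE) can be illustrated end to end with a small, self-contained example. The synthetic `embeddings` here are stand-ins for real model hidden states, and the reduced dimensionality is purely for speed; only the `TSNE` call mirrors the response:

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(42)

# Stand-ins for last-token hidden states (e.g. at `<|eot_id|>`); real hidden
# sizes are much larger (e.g. 4096), reduced here so the sketch runs quickly.
query_types = ["non-conflict", "contextual-conflict", "parametric-conflict"]
n_per_type, d_hidden = 50, 64
embeddings = np.vstack([rng.normal(loc=i, size=(n_per_type, d_hidden))
                        for i in range(len(query_types))])
labels = [t for t in query_types for _ in range(n_per_type)]

# Reduce to 2D with scikit-learn's default t-SNE settings, as in the response.
tsne = TSNE(n_components=2, random_state=42)
data_2d = tsne.fit_transform(embeddings)  # one 2D point per query
```

Each row of `data_2d` can then be scatter-plotted and colored by its entry in `labels` to produce figures of the kind described.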
They also propose an editing approach to let RPAs refuse requests concerning conflicts.\", \"the_significance_of_the_work_is_without_a_doubt\": \"RPAs are not supposed to answer questions which they should not answer, instead of always trying their best to give an answer.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The refusal capabilities appear to be an important issue indeed. It is interesting to see work in this direction to improve the RPAs' performance.\n\nThe paper is well written -- the study follows a step-by-step procedure, from evaluation design to comparison, and finally, to methods to improve the RPAs. It is well organized and easy to follow, with lots of examples to ease understanding.\n\nGiven the examples provided in the paper, it is convincing how the work introduced could help RPAs, delivering a tangible understanding of the impact within the datasets/scenarios used.\", \"weaknesses\": \"The research methodology is kind of straightforward; what the authors intend to do is clear, and they achieved it via a sound process. For the same reason, it is not obvious what the challenges are for this study.\n\nThe representation editing method intervenes in the representations generated by the model to enhance the refusal ability for conflicting cases. It is compared with several fine-tuning methods designed for LLMs. But essentially, it may not be of the same nature as the compared methods. It is worth exploring how the proposed method works with methods other than LoRA.\n\nThere might be some more introduction to give readers a better understanding of what RPAs are and what they do in scenarios like video games, etc. It seems from Figure 1 that RPAs mainly converse with users. 
If the focus is on queries and responses, this should be made clear to narrow the study of RPAs' capabilities to query-response.\n\nIt might be necessary to also consider the computational overhead in the proposed design.\", \"questions\": \"Are there any other studies around the refusal capabilities of RPAs? If so, they should definitely be included in the paper and discussed in comparison with the work presented in the paper.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Why our benchmark and evaluation methods are systematic and scientific\", \"comment\": \"In designing our benchmark and evaluation methods, we systematically constructed conflict query categories based on the sources of a model's knowledge and adopted a multi-dimensional evaluation strategy.\n\n1. **Designing Conflict Query Categories from the Perspective of Knowledge Sources:**\", \"large_language_models_primarily_derive_their_knowledge_from_two_sources\": \"Contextual Knowledge and Parametric Knowledge. Based on this, we designed four conflict query categories, each targeting one of these two knowledge sources. This design approach covers the main types of knowledge conflicts that models might encounter, helping to deeply analyze the model's refusal capabilities across different knowledge sources and identify specific areas for improvement. Moreover, this classification method can be applied to other models and tasks, offering broad applicability.\n\n2. **Comprehensive Evaluation of Model Capabilities from Three Main Perspectives:**\n To thoroughly assess the model's performance, we evaluated it from three main dimensions: general conversational ability, role-playing ability, and refusal capability. 
These were further subdivided into: (1) General Conversational Ability: quality of response, consistency of response, factuality of response; (2) Role-Playing Ability: consistency with role background, style, personality, and abilities; (3) Refusal Capability: awareness of refusal and execution of refusal. This resulted in a detailed evaluation across nine specific dimensions. We noted that previous research often focused on only one or a few of these evaluation dimensions, such as assessing the personality of role-playing models or evaluating role-playing capabilities. We believe that only by combining these dimensions can we comprehensively evaluate and enhance a model's performance in complex interaction scenarios. Therefore, our research provides a more systematic and scientific evaluation framework.\"}", "{\"title\": \"Response to The importance of refusal capability is unclear.\", \"comment\": \"**1. Importance of Refusal Capability:**\\nWhile the latest LLMs like GPT-4 have demonstrated strong refusal capabilities, we find that there are still significant shortcomings in specific scenarios, particularly in role-playing. The uniqueness of RPAs lies in their need to handle various user requests appropriately while maintaining role consistency, including requests that may conflict with the role's settings or knowledge scope.\\n\\n**2. Why Refusal is Still a Problem:**\\n- *Contextualized Refusal Needs:* In RPAs, refusal is not just a simple safety mechanism; it needs to be closely tied to the role's settings. The model must understand when to refuse and how to do so in a manner consistent with the role's style.\\n- *Limitations of Existing Models:* Although models like GPT-4 have strong refusal capabilities in general scenarios, they still fall short in RPA scenarios, especially when queries conflict with the role's parametric knowledge.\\n\\n**3. 
Why Not Use DPO and RL**\\nWhile RL and DPO have potential in enhancing specific model capabilities like refusal, we did not adopt these methods in this study for the following reasons:\\n\\n(1) **High Data Production Costs:**\\nRL and DPO methods typically require a large amount of paired preference data, where the model needs to generate multiple candidate outputs for the same input, and these outputs are ranked by preference through human or automated means. Ensuring the model learns the correct preferences requires a substantial amount of high-quality annotated data. The process of ensuring data quality is complex, time-consuming, and labor-intensive.\\n\\n(2) **High Computational Resource Requirements:**\\nRL and DPO methods require significant computational resources during training. The model may need to iterate repeatedly, and if using RL, it needs to evaluate and update strategies to maximize the reward function. For large language models, the training cost and time expenditure are substantial.\\n\\nTherefore, we opted for a representation editing method to address the dependency on preference data and training resources. The advantages of using the representation editing method include:\\n(1) **Efficiency and Lightweight Nature:** Our representation editing method does not require large amounts of preference data or retraining the model. By intervening in the model's internal representations, we can enhance the model's refusal capability without altering its parameters.\\n(2) **Low Data Requirements:** Compared to the large-scale preference data needed for RL and DPO, our method requires only a small amount of annotated data to achieve effective performance improvements, reducing data production costs.\\n\\nOverall, while existing LLMs possess some refusal capabilities, refusal remains a challenging issue in specific RPA scenarios. 
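The inference-time intervention described above can be sketched in a few lines. This is a generic illustration of representation editing, not the authors' released implementation; the `refusal_direction` vector and the intervention strength `alpha` are hypothetical placeholders for quantities that would be estimated from data:

```python
import numpy as np

def edit_hidden_state(hidden, refusal_direction, alpha=1.0):
    """Shift hidden states toward a 'refusal' region of representation space.

    hidden: (seq_len, d_model) activations at some chosen layer.
    refusal_direction: (d_model,) unit vector, e.g. the difference between
        mean hidden states of refused vs. answered queries (hypothetical).
    alpha: intervention strength; 0 leaves the model's behavior unchanged.
    """
    return hidden + alpha * refusal_direction

# Toy demonstration: the model's parameters are untouched; only the
# activations passed on to later layers are shifted.
d_model = 8
hidden = np.zeros((3, d_model))
direction = np.ones(d_model) / np.sqrt(d_model)
edited = edit_hidden_state(hidden, direction, alpha=2.0)
```

In practice an edit like this is typically applied via a forward hook on a chosen transformer layer at inference time, which is what makes the approach training-free.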
Our research aims to uncover these challenges and provide effective solutions to enhance the model's refusal capabilities in RPAs, ensuring that the model can interact with users safely and reliably while maintaining role consistency.\"}", "{\"comment\": \"This does not appear to be the meaning of your analysis?\"}", "{\"title\": \"Response to W1: Challenges in Our Work\", \"comment\": \"Although our research methodology looks straightforward, there are many significant challenges in enhancing the model's refusal capabilities.\n\n1. **Enhancing the model's refusal ability is a challenging task.** Even current state-of-the-art models (such as GPT-4o) cannot fully and correctly recognize and refuse queries that conflict with their role knowledge. In our [Failure Cases](https://anonymous.4open.science/r/Failure-Cases-of-Tell-Me-What-You-Don-t-Know-A3B4/Failure%20Cases.csv), we provide specific examples that demonstrate the challenges in this area of research.\n\n2. Existing related work mainly **focuses on specific types of conflicts**, such as temporal hallucinations, with little research on other types of conflict scenarios. Moreover, previous **evaluations mainly concentrate on certain aspects of role-playing** (such as personality consistency), lacking a systematic evaluation of the model's refusal capabilities.\n\n3. Regarding the performance differences for different types of queries, we **conducted an in-depth interpretability analysis**, revealing the internal mechanisms of the model under various conflict scenarios. Based on the analysis, we **proposed a representation editing method** that can enhance the model's refusal capabilities without additional training. 
This work not only provides new perspectives for understanding and improving the model but also introduces methodological innovations.\\n\\nIn summary, our research not only overcomes the challenges of enhancing the model's refusal capabilities but also makes corresponding contributions in terms of scenarios, evaluation dimensions, methodologies, and analysis. We expanded the research on conflict scenarios, provided a systematic evaluation of refusal capabilities, introduced new methods to improve model performance, and conducted an in-depth analysis of the model's internal working mechanisms.\"}", "{\"title\": \"Response to Q1: Any Other Studies Around The Refusal Capabilities Of RPAs\", \"comment\": \"As we mentioned in the Related Work section, research on the refusal capabilities of models mostly focuses on general question-answering scenarios. The goal is to enable models to recognize their own knowledge blind spots when faced with queries that conflict with their parametric knowledge and appropriately refuse to answer, to avoid providing incorrect or misleading information.\\n\\nIn the field of Role-Playing Agents (RPAs), the studies most relevant to our work include [5] and [6]:\\n- **[5]** In their research, they focus on the performance of role-playing chatbots regarding temporal knowledge consistency. They explore the issue of hallucinations when models handle temporal information and evaluate temporal consistency in role-playing.\\n- **[6]** This work primarily addresses the problem of temporal hallucinations in RPAs and proposes mitigation strategies to make models more accurate when dealing with queries involving temporal information.\\n\\nAlthough the above studies involve the refusal capabilities of RPAs to some extent, they mainly focus on handling specific temporal hallucinations. Our work aims to conduct a systematic and comprehensive study of the refusal capabilities of RPAs. Specifically:\\n1. 
**Diverse Conflict Scenarios**: We have designed multiple conflict scenarios, including conflicts with the role's contextual knowledge (such as role setting conflicts and role description conflicts) and conflicts with the role's parametric knowledge (such as factual knowledge conflicts and absent knowledge conflicts). These scenarios encompass various aspects of the role's knowledge, not limited to temporal information.\\n2. **In-depth Analysis of Refusal Capabilities**: We not only evaluate the models' refusal capabilities in different conflict scenarios but also analyze the differences in the models' internal representations when handling these conflicts using techniques like linear probes and t-SNE visualization. We explore why models perform differently on different types of conflicting queries.\\n3. **Proposed Improvement Method**: Based on our analysis, we propose a representation editing method aimed at enhancing the models' refusal capabilities without compromising their overall role-playing performance. This method differs from traditional fine-tuning or training strategies and is lightweight and efficient.\\n\\nTherefore, the difference between our work and existing research lies in our more comprehensive and in-depth exploration of the refusal capabilities of RPAs, covering more types of conflicts, and proposing a new improvement method that fills a gap in current research.\\n\\n[5] Ahn, Jaewoo, et al. \\\"TimeChara: Evaluating Point-in-Time Character Hallucination of Role-Playing Large Language Models.\\\" *arXiv preprint* arXiv:2405.18027 (2024).\\n\\n[6] Sadeq, Nafis, et al. \\\"Mitigating Hallucination in Fictional Character Role-Play.\\\" *arXiv preprint* arXiv:2406.17260 (2024).\"}", "{\"summary\": \"This paper proposes a benchmark to evaluate LLM's ability for role playing from the aspect of whether these LLMs can reject conflicting queries. 
It then uses linear probing and t-SNE to analyze why different models behave differently.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. propose a well-motivated benchmark\n2. the data construction pipeline is plausible\n3. conduct interpretability experiments to analyze results\n4. develop a model editing method based on the representation discoveries\nIn general, this paper raises interesting research questions and also conducts in-depth analysis.\", \"weaknesses\": \"1. The editing method does not analyze how much the method affects other non-relevant questions, such as questions independent of role-playing. So the general accuracy of the thresholding method needs a comprehensive analysis.\", \"questions\": \"1. Figure 2 shows that the model is bad at parametric conflicting queries. But why is the accuracy high at early layers and then decreases as the layer number increases? And also, why does the accuracy for non-conflicting queries start low at early layers?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Ethical Concerns\", \"comment\": \"**1. Emphasis on Ethical Concerns:**\n\nWe fully understand and agree with your concerns about the potential misuse of technology. **Any new technology and method can be used for both positive and negative purposes.** As researchers, we have a responsibility to consider the potential impacts of the technologies we develop and strive to ensure they are used to promote societal well-being.\n\n**2. Intent and Goals of the Technology:**\n\n**The technology we propose aims to enhance the safety and reliability of large language models.** Specifically, we want models to be able to identify and appropriately refuse to respond to requests that are beyond their knowledge scope, conflict with their role settings, or contain inappropriate content. 
This helps prevent the generation of incorrect, misleading, or harmful information, protecting users from potential negative impacts.\\n\\n**3. On the Limitation of Free Speech:**\\n\\nWe also recognize that freedom of speech is an important societal value. However, we believe that appropriately limiting certain harmful content is necessary to protect users and society. For example, the dissemination of content related to violence, pornography, and racial discrimination can have negative impacts on society and even threaten public safety and moral standards. Therefore, in these cases, it is reasonable and necessary to restrict the spread of such content.\\n\\nIn conclusion, we believe that addressing the issue of technology misuse requires the joint efforts of researchers, policymakers, industry, and society at large. We are willing to actively participate in related discussions and collaborations to promote the ethical and responsible application of AI technology.\"}", "{\"title\": \"Response to W6&Q4:Detail of Probe\", \"comment\": [\"1. **Data Preparation:**\", \"**Hidden Representation Extraction:** For each query, we first use the prompt shown in Figure 5 as input to the model. During the model's forward pass, we extract the hidden states from a specified layer (e.g., the penultimate layer) to use as feature vectors.\", \"**Dataset Construction:** We collect the corresponding hidden representations for different types of queries (e.g., non-conflict queries, contextual conflict queries, parametric knowledge conflict queries). For each type of query, we use 200 samples for training and 50 samples for testing.\", \"**Label Assignment:** For binary classification, we assign a label of 1 to non-conflict query samples and a label of 0 to conflict query samples.\", \"2. **Model Definition:**\", \"**Linear Probe Structure:** We use a simple fully connected neural network with one hidden layer (512 nodes) and an output layer with a Sigmoid activation function. 
This setup is used to probe whether the model perceives a query as conflicting with its knowledge.\", \"3. **Training Process:**\", \"**Loss Function:** We use the Mean Squared Error Loss (MSELoss) to optimize the model parameters.\", \"**Optimizer and Hyperparameters:** We use the Adam optimizer with a learning rate of 5e-5, a batch size of 512, and train for 10 epochs.\", \"**Training Strategy:** The model is trained on the training set, and at the end of each epoch, its performance is evaluated on the validation set. The model parameters with the highest validation accuracy are saved.\", \"4. **Result Evaluation:**\", \"**Evaluation Metrics:** We calculate the prediction accuracy for each query type on the test set to assess the linear probe's performance in distinguishing between different types of queries.\", \"**Experiment Reproducibility:** To ensure the reliability of the results, we set a fixed random seed and conduct experiments on data from multiple roles, calculating the average performance.\"]}", "{\"title\": \"Response to Q1: What is the definition of the ability to refuse to answer?\", \"comment\": \"In our study, \\\"refusal capability\\\" refers to **a model's ability to provide correct answers to questions within its knowledge scope while appropriately refusing to answer questions that fall outside of this scope.**\\n\\nWe define the model's knowledge scope from two perspectives:\\n1. **Contextual Knowledge**: This refers to the knowledge explicitly provided in the context during interactions. The model should use this immediately available knowledge to answer relevant questions.\\n2. **Parametric Knowledge**: This encompasses the knowledge learned and internalized in the model's parameters through the pre-training process. It represents the long-term knowledge reserves acquired from training on extensive corpora.\", \"the_refusal_capability_of_a_model_is_specifically_manifested_in_three_major_aspects\": [\"1. 
**Conflict Recognition Ability**: The ability to identify queries that conflict with role contextual knowledge and role parametric knowledge.\", \"2. **Refusal Response Ability**:\", \"**Providing Clear Refusal Responses**: The model should clearly express its inability to answer the question.\", \"**Explaining the Reason for Refusal**: Appropriately explaining why it cannot answer helps the user understand.\", \"**Maintaining Consistency with Role Characteristics**: When refusing, the model should maintain the language style and personality traits of the role.\", \"**Offering Alternative Information or Clarifications When Appropriate**: If possible, the model can provide relevant suggestions or request further clarification from the user.\", \"3. **Refusal Accuracy**:\", \"**Avoiding Over-Refusal**: The model should not incorrectly refuse normal, non-conflict queries.\", \"**Avoiding Missed Refusals**: For conflict queries, the model should accurately identify and refuse them, rather than incorrectly providing an answer.\", \"Through these three core dimensions, we comprehensively define the \\\"refusal capability\\\" of a model. This not only requires the model to correctly identify when a refusal is necessary but also to refuse in a manner consistent with the role's characteristics, ensuring coherence in interaction and a positive user experience.\"]}", "{\"title\": \"Response to W2: Choice of Baseline\", \"comment\": \"Firstly, we chose to compare the representation editing method with fine-tuning methods based on their **shared goal**: enhancing the model's refusal capabilities while preserving its original performance as much as possible. Many previous studies (such as references [1][2][3]) have employed fine-tuning or LoRA to improve the model's refusal abilities. 
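The linear-probe procedure described in the earlier response (hidden-state features, one 512-unit hidden layer, Adam at 5e-5, batch size 512, 10 epochs) can be sketched as follows. The hidden representations here are synthetic stand-ins, and scikit-learn's `MLPClassifier` optimizes log-loss rather than the MSE loss named in that response, so this is only an approximation of the described setup:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
d_hidden = 64  # stand-in for the model's true hidden size

# Synthetic hidden representations: label 1 = non-conflict, 0 = conflict;
# 200 training and 50 test samples per type, as in the response.
def make_split(n_per_class):
    x = np.vstack([rng.normal(loc=0.0, size=(n_per_class, d_hidden)),   # conflict
                   rng.normal(loc=3.0, size=(n_per_class, d_hidden))])  # non-conflict
    y = np.array([0] * n_per_class + [1] * n_per_class)
    return x, y

x_train, y_train = make_split(200)
x_test, y_test = make_split(50)

# Probe with one 512-unit hidden layer, trained with Adam at 5e-5 for 10 epochs.
probe = MLPClassifier(hidden_layer_sizes=(512,), solver="adam",
                      learning_rate_init=5e-5, batch_size=512, max_iter=10)
probe.fit(x_train, y_train)
accuracy = probe.score(x_test, y_test)
```

With real data, `accuracy` would be computed per query type and averaged over roles, as the response describes.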
Therefore, comparing our method with these commonly used and effective approaches is practical and valuable.\n\nOur method is similar to LoRA in that both introduce a bias term into the model's representations to influence the model's output responses. However, the key differences are:\n\n1. **No Training Required**: Our representation editing method does not require any additional training or fine-tuning of the model. Instead, it directly intervenes in the model's internal representations during inference. This makes our method more lightweight and efficient, suitable for scenarios with limited computational resources.\n\n2. **Different Implementation Approaches**: The LoRA method adjusts model parameters by adding trainable low-rank adapters to the model, necessitating additional training steps. In contrast, our method directly adds a bias to the model's representations without changing the model's parameters or structure.\n\nExperimental results have demonstrated that our representation editing method can effectively enhance the model's refusal capabilities, achieving similar or even better results compared to LoRA without affecting the model's original performance.\n\n[1] Brahman, Faeze, et al. \\\"The Art of Saying No: Contextual Noncompliance in Language Models.\\\" arXiv preprint arXiv:2407.12043 (2024).\n\n[2] Chen, Lida, et al. \\\"Teaching Large Language Models to Express Knowledge Boundary from Their Own Signals.\\\" arXiv preprint arXiv:2406.10881 (2024).\n\n[3] Cheng, Qinyuan, et al. \\\"Can AI Assistants Know What They Don't Know?\\\" arXiv preprint arXiv:2401.13275 (2024).\"}", "{\"metareview\": \"This paper received a split decision from the reviewers: three were marginally in favor of acceptance, one was marginally for rejection, and one was certain about rejection. In my reading of the paper, I also agree that there are many valuable insights presented, but that the evaluation is insufficiently strong. 
The authors propose a representation editing approach that enables a model to refuse to answer (without retraining the model) and they show that this generates improvement under two types of evaluation. The results show marginal improvement for the experiments that they did (see Table 4), indicating that they have good insights, but the impact is not unambiguous. Although rejection is never the desired outcome, at present this seems like a weaker paper that could be made stronger with revision.\", \"the_authors_themselves_sum_up_the_critiques_that_were_given_during_the_review_process\": \"A major concern shared by several reviewers was the need for more detailed explanations regarding the selection of baselines. To address this, we have clarified our rationale for choosing Fine-tuning and LoRA as baseline methods. These methods are widely used and recognized for enhancing model capabilities, making them suitable benchmarks for evaluating the effectiveness of our representation editing approach. By comparing our method against these established techniques, we aim to provide a fair and comprehensive assessment of its performance.\n\nThe clarification was considered insufficient and more evaluation is requested.\n\nAnother concern raised was the potential bias introduced by using GPT-4 for data generation and evaluation. We have clarified that our choice was based on GPT-4's high alignment with human evaluations, as supported by existing research. Furthermore, we incorporated human evaluation steps to ensure data quality and diversity, minimizing any potential biases.\n\nThe LLM bias problem is real and more prevalent in more capable models. It would be great if you included some form of human evaluation and described it in the paper.
You use \\\"human evaluation steps\\\" in your argument against the criticism of LLM bias but the only evidence that I see of the human evaluation aspect is your comment that you used human spot checking in your data construction process: \\n\\nAdditionally, in our dataset construction process, we did not solely rely on model outputs; we also incorporated a human evaluation step. Specifically, after data generation, we randomly selected a portion of the samples for manual verification and assessment to ensure the quality and diversity of the data. Through this approach, we aim to minimize the impact of any biases that might arise from model self-evaluation.\\n\\nPlease include a more detailed description of your human evaluation process if you believe that it counters the LLM bias issue.\\n\\nOverall, it possible in a future submission please hit on key points of evaluation, including dataset instruction, clearly in the main part of the paper.\", \"additional_comments_on_reviewer_discussion\": \"During the author rebuttal period, only two reviewers commented on the rebuttals and these were the two who gave it a negative rating. Neither of them were swayed by the author's rebuttals and in fact the slightly negative reviewer (5) commented that they agreed with the comment of the highly negative reviewer (3) but just not to the same extent.\\n\\nOverall, my feeling is that the two slightly positive saw the value of the insights of the paper. I did too. I think it has potential, it is just insufficiently rigorous and the writing could be improved. I think the slightly positive reviewers were not against it but not particularly enthusiastic about it either as evidenced by their failure to return to the discussion after their initial review despite reminders.\"}", "{\"summary\": \"The paper considers the problem of \\u201crefusal capabilities\\u201d of LLM agents. The authors in particular identify \\u201crejection regions\\u201d using a t-SNME approach. 
They claim that they are able to distinguish different types of \\u201crejections\\u201d based on this type of analysis.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"In general, the questions proposed by the authors are interesting and worth investigating for their potential implications.\"], \"weaknesses\": [\"The authors claim that they are able to demonstrate the existence of rejection regions and direct response regions. However, this does not seem to be the case in general. In fact, it seems to me that the actual definitions of these two concepts are not \\u201crigorous\\u201d per se.\", \"The reviewer understands that it is difficult to run experiments that are \\u201cgeneralizable,\\u201d but it seems to me that the actual analysis and discussion proposed in the paper tend to consider the experimental results presented as universal, whereas they are probably very specific to the context taken into consideration. The scenarios defined by the authors are somehow very specific. It is difficult to understand how these results will generalize in practice.\", \"The actual identification of these regions is performed through t-SNE, but very limited details are provided to the reader. It appears that the reproducibility of these experiments is rather limited in general.\", \"The authors use GPT-4 for the data generation and synthesis. The actual impact of this choice is difficult to evaluate. In fact, it might be the case that these questions/answers are already \\u201cbiased\\u201d in a sense: the reviewer wonders if the results presented in the paper might just be an artifact linked to the use of GPT-4.\", \"In general, the reviewer understands that it might be very difficult to generate datasets in this area, but the results presented by the authors might be really dependent on the actual generation procedure used by the authors.
This aspect should be at least discussed in the paper.\", \"The procedure for linear probing (see also Appendix C.2) is not described in sufficient detail. It is very difficult to judge the results without additional details.\"], \"questions\": [\"What is the definition of \\u201crefusal capability\\u201d that you are actually considering in this paper? It seems to me that this concept should be clarified. In fact, the reviewer has some intuition of the problem (from the examples/description), but this concept is not formally introduced in the paper, in my opinion.\", \"Can you please describe the t-SNE process in detail, including the choice of parameters?\", \"What is the impact of using GPT-4 for data generation?\", \"Can you please describe the procedure used for linear probing in more detail (see Appendix C.2)?\", \"In Section 5.1, the authors say: \\u201cIn contrast, the lower accuracy of the probes for parametric knowledge conflicts indicates that models struggle to internally differentiate these conflicts from non-conflict queries.\\u201d What do you mean by \\u201cinternally differentiate\\u201d here?\", \"The reviewer struggles to understand why t-SNE is a good choice in this case. Can you please justify your selection of the algorithm?\", \"In Section 5.2, the description of the method used for separating the different regions is difficult to understand. Can you please provide more details?\", \"The reviewer was not able to understand the \\u201crepresentation activation method\\u201d presented in Section 6.1. Can you please clarify it? There are also other concepts that are loosely defined, such as \\u201crejection direction.\\u201d A formal definition of this term would be very useful.\", \"In Section 6.2, why do you call fine-tuning and LoRA baselines in this context?\", \"Figures 3 and 10: The areas are composed of different types of conflicts.
What does this mean in practice?\", \"Figure 10: The authors say: \\u201cThis consistency reinforces the robustness of our findings across different model architectures.\\u201d Why do you say that \\u201cconsistency reinforces robustness of our findings across different model architectures\\u201d? To which consistency are you referring here? This is rather unclear. Referring to Figure 10, the reviewer wonders if the results we observe in this figure are more related to the topics of the sentences themselves (and/or the presence of some keywords).\", \"What are the general implications of this work besides the specific examples considered by the authors? How can we generalize the results presented in this work?\"], \"flag_for_ethics_review\": \"['Yes, Potentially harmful insights, methodologies and applications']\", \"details_of_ethics_concerns\": \"The reviewer would like to raise some concerns about the potential dual use of these techniques.\n\nFor example, in non-democratic and authoritarian regimes, they can be used to limit the freedom of speech by avoiding potential difficult questions about historical events that a regime would like to \\\"remove from history\\\", questions about political and socio-economic issues, etc.\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Acknowledgment of the Rebuttal\", \"comment\": \"I read all the authors' comments. The rebuttals are very extensive in terms of length, but they do not actually clarify some key concerns I expressed in my review.\n\nI am rather negative about this paper; I have several reservations about its contents.\"}", "{\"comment\": \"This was my concern: the fact that the authors include several experiments does not guarantee that this is the case in general.
This seems quite problematic to me since it is an essential point of the proposed approach.\"}", "{\"title\": \"Response to grammatical and spelling errors\", \"comment\": \"Thank you for pointing out the grammatical and spelling errors. We will make the necessary corrections in the revised version.\"}", "{\"title\": \"Response to W2: Theoretical Underpinning Of The Rejection and Response Regions\", \"comment\": \"Delving deeper into the theoretical underpinnings of the **direct rejection regions** and **response regions** indeed helps enhance our understanding of model behavior.\n\nRegarding your suggestion to further explore the theoretical foundations of the rejection regions and response regions, we highly value it and have provided preliminary theoretical definitions of these regions in the revised manuscript.\n\nSpecifically, given the representation vector $\\\\mathbf{h}^l$ at the $l$-th layer of the model and the rejection direction vector $\\\\mathbf{d}'^l$, we determine the position of the input query in the representation space by calculating the similarity between them, thereby deciding the response strategy the model should adopt.
The cosine similarity is defined as:\\n\\n$$\\n\\\\\\\\text{sim}(\\\\\\\\mathbf{h}^l, \\\\\\\\mathbf{d}'^l) = \\\\\\\\frac{\\\\\\\\mathbf{h}^l \\\\\\\\cdot \\\\\\\\mathbf{d}'^l}{\\\\\\\\|\\\\\\\\mathbf{h}^l\\\\\\\\| \\\\\\\\|\\\\\\\\mathbf{d}'^l\\\\\\\\|}\\n$$\\n\\nBased on the similarity, we can define the rejection region and the response region:\\n\\n- **Direct Rejection Region**: When $\\\\\\\\text{sim}(\\\\\\\\mathbf{h}^l, \\\\\\\\mathbf{d}'^l) \\\\\\\\geq \\\\\\\\theta$, where $\\\\\\\\theta$ is a preset threshold, indicating that the input is highly related to the rejection direction, and the model should tend to refuse to answer.\\n\\n- **Response Region**: When $\\\\\\\\text{sim}(\\\\\\\\mathbf{h}^l, \\\\\\\\mathbf{d}'^l) < \\\\\\\\theta$, indicating that the input does not belong to a conflicting query, and the model should proceed to generate a direct response.\", \"mathematically_expressed_as\": \"$$\\n\\\\\\\\text{Decision} =\\n\\\\\\\\begin{cases}\\n\\\\\\\\text{Refusal}, & \\\\\\\\text{if } \\\\\\\\text{sim}(\\\\\\\\mathbf{h}^l, \\\\\\\\mathbf{d}'^l) \\\\\\\\geq \\\\\\\\theta \\\\\\\\\\\\\\\\\\n\\\\\\\\text{Direct Response}, & \\\\\\\\text{if } \\\\\\\\text{sim}(\\\\\\\\mathbf{h}^l, \\\\\\\\mathbf{d}'^l) < \\\\\\\\theta\\n\\\\\\\\end{cases}\\n$$\\n\\nThrough the above definitions, we preliminarily delineate the positional relationships of the rejection region and the response region in the representation space. This theoretical framework provides an initial basis for understanding the model's decision-making process, helping to explain why the representation editing method can effectively guide conflicting queries toward the rejection region.\\n\\nWe also recognize that, due to the high-dimensional and nonlinear characteristics of large language models, conducting an in-depth theoretical analysis poses certain challenges. 
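As a minimal illustration of the decision rule above, the threshold test can be sketched in a few lines of numpy; the vectors stand in for a hidden state $\mathbf{h}^l$ and rejection direction $\mathbf{d}'^l$, and the threshold value is an illustrative assumption, not one reported in the paper:

```python
import numpy as np

# Minimal sketch of the rejection/response decision rule.
# h plays the role of a hidden state h^l, d the rejection direction d'^l;
# theta is an assumed threshold, not a value from the paper.

def cosine_sim(h: np.ndarray, d: np.ndarray) -> float:
    """Cosine similarity sim(h^l, d'^l)."""
    return float(h @ d / (np.linalg.norm(h) * np.linalg.norm(d)))

def decide(h: np.ndarray, d: np.ndarray, theta: float = 0.5) -> str:
    """Refuse when the representation falls in the rejection region."""
    return "refusal" if cosine_sim(h, d) >= theta else "direct response"
```

In this sketch the whole decision reduces to one dot product against a precomputed direction, which is why the rule can be applied at inference time without any training.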
However, our method has demonstrated robust and significant performance improvements across multiple models and scenarios, validating its practicality and effectiveness.\"}", "{\"title\": \"Response to Concerns about bias in GPT evaluation\", \"comment\": \"We want to clarify that the choice of GPT-4 as an evaluation tool is based on its demonstrated high consistency with human evaluations in existing research. As noted in [1] and [2], GPT-4 shows the highest alignment with human evaluation results when assessing text. This indicates that, despite potential self-evaluation biases, GPT-4 remains the most human-aligned automated evaluation tool currently available.\\n\\nAdditionally, in our dataset construction process, we did not solely rely on model outputs; we also incorporated a human evaluation step. Specifically, after data generation, we randomly selected a portion of the samples for manual verification and assessment to ensure the quality and diversity of the data. Through this approach, we aim to minimize the impact of any biases that might arise from model self-evaluation.\\n\\n[1] Hackl, Veronika, et al. \\\"Is GPT-4 a reliable rater? Evaluating consistency in GPT-4's text ratings.\\\" Frontiers in Education. Vol. 8. Frontiers Media SA, 2023.\\n\\n[2] Liu, Yang, et al. \\\"G-eval: Nlg evaluation using gpt-4 with better human alignment.\\\" arXiv preprint arXiv:2303.16634 (2023).\"}", "{\"title\": \"Response to concern about generalizability\", \"comment\": \"Thank you for taking the time to review our response and for acknowledging our efforts to address the issues raised.\\nWe would like to further clarify your concern about generalizability.\\n1. **Applicability to Different Models:**\\n Our experiments were conducted on multiple LLMs, including the Llama-3, Mistral, and Qwen series. 
Despite differences in training data, parameter sizes, and design, we observed consistent trends in their refusal capabilities and internal representation patterns across all models. This consistency indicates that our findings are not limited to specific models but are applicable to a range of LLMs.\\n2. **Universality of Refusal Scenarios:**\\n We designed our evaluation benchmark to cover various conflict scenarios, including contextual knowledge conflicts, parametric knowledge conflicts, and non-conflict queries. These scenarios represent the types of knowledge conflicts that RPAs may encounter in various applications such as virtual assistants, educational tools, and game NPCs. Our approach to designing conflicts based on the knowledge sources of large models can also be generalized to other scenarios.\\n3. **General Representation Editing Method:**\\n Although our research focuses on RPAs, the representation space analysis and the proposed representation editing method are not limited to role-playing. As demonstrated in [1], editing a model's representations can enhance its honesty, safety, fairness, and more, fully demonstrating that the representation editing method is also generalizable.\\n4. **Laying a Foundation for Future Research:**\\n The goal of our work is to provide a foundation for understanding the internal mechanisms that influence a model's refusal behavior. By identifying the existence of rejection regions and direct response regions within the model's representation space, we offer insights that can be utilized by future research to develop more robust and trustworthy AI models.\\n\\nThank you again for your response. We hope that these additional explanations can alleviate your concerns about the generalizability of our work :)\\n\\n[1] Zou, Andy, et al. 
\\\"Representation engineering: A top-down approach to ai transparency.\\\" arXiv preprint arXiv:2310.01405 (2023).\"}", "{\"title\": \"Response to W2: Refusal scenarios lack generalizability.\", \"comment\": \"Regarding your concern that our analysis and discussion may present the experimental results as universally applicable, while they might be specific to certain contexts, we would like to provide a detailed response.\\n\\n**1. Importance of Refusal Capabilities:**\\nWe believe that training models to have refusal capabilities is crucial for developing reliable AI assistants. The ability of a model to clearly recognize its knowledge boundaries and appropriately refuse to answer questions beyond its scope or capability not only enhances the model's reliability but also prevents the provision of incorrect or misleading answers. This refusal capability is significant across a wide range of applications, not limited to simple question-answering systems. In broader natural language processing tasks, models are often assigned specific roles to complete tasks. For example:\\n- **Mathematical Problem Solving**: A model acts as a mathematician to solve complex mathematical problems. When faced with problems beyond its understanding, the model can appropriately refuse, avoiding incorrect solutions.\\n- **Medical Diagnosis**: A model acts as a doctor to provide advice to patients. If the model can refuse to answer in uncertain situations and suggest seeking professional help, it can prevent risks associated with incorrect diagnoses.\\n- **Role-Playing Chat and Game NPCs**: In dialogue systems and games, models play specific roles to interact with users. Characters with refusal capabilities are more realistic and can enhance user experience. 
For instance, an NPC in a game will not answer questions beyond its set scope, increasing the game's immersion.\\n\\nTherefore, researching and enhancing a model's refusal capabilities has broad applicability for building reliable and intelligent AI systems. This is not only relevant to the specific scenarios discussed in our paper but also important for various applications where models need to understand their knowledge boundaries.\\n\\n**2. Universality of Scenario Design:**\\nOur scenario design is based on the knowledge sources of large language models, namely contextual knowledge (immediate information provided through input) and parametric knowledge (intrinsic knowledge obtained through pre-training). We believe that conflicts between these two knowledge sources are common issues that models encounter in practical applications. Designing experimental scenarios around these knowledge conflicts helps comprehensively evaluate and enhance the model's refusal capabilities.\\n\\n\\n**3. Generalizability of Experimental Results:**\\nWe acknowledge that the experiments were conducted in specific scenarios and role settings. However, the scenarios we chose are intended to represent typical issues that models might encounter in practical applications, such as knowledge conflicts in role-playing and the model's awareness of its knowledge boundaries. These are common challenges across various application scenarios. We understand your concern that the experimental results might be limited by specific scenarios. To verify the broader applicability of our findings, we conducted experiments across multiple models (such as Llama-3, Qwen2, etc.) and different types of conflict scenarios, and the results showed consistent trends. This indicates that our approach has a certain degree of generalizability across different models.\\n\\nWe understand your concerns about the generalizability of the experimental results. 
However, we believe that while our experimental scenarios involve specific roles and contexts, the underlying principles and methods have universal applicability. The refusal capability of models is key to building reliable AI systems, and the knowledge conflicts and refusal mechanisms we explore are significant in various applications.\"}", "{\"title\": \"Response to W1: General Performance\", \"comment\": \"As we mentioned in Section 6.3.1 of our paper, to thoroughly evaluate whether our proposed editing method enhances the refusal capabilities of RPAs while not affecting their ability to handle non-role-playing-related questions, we employed MT-Bench as a comprehensive evaluation benchmark. MT-Bench is a multidimensional evaluation tool designed to assess the performance of large language models across various tasks and scenarios.\\n\\nWe conducted detailed evaluations across eight dimensions of MT-Bench, and the results are presented in Tables 1 and 2:\\n\\n**Table 1**: Performance of Llama-3-8B-Instruct under different methods\\n\\n| Method | Writing | Roleplay | Reasoning | Math | Coding | Extraction | STEM | Humanities | Average |\\n|--------------------------|---------|----------|-----------|-------|--------|------------|-------|------------|---------|\\n| **FT** | 8.06 | 7.05 | 4.40 | **6.15** | 5.60 | 7.55 | **9.16** | 9.40 | 7.16 |\\n| **LoRA** | 8.15 | 7.70 | **6.55** | 4.70 | **5.75** | 7.20 | **9.16** | **9.80** | **7.37** |\\n| **Representation Editing** | **8.98** | **8.30** | 4.35 | 5.35 | 5.35 | **7.78** | 9.05 | **9.80** | 7.36 |\\n\\n**Table 2**: Performance of Llama-3.1-8B-Instruct under different methods\\n\\n| Method | Writing | Roleplay | Reasoning | Math | Coding | Extraction | STEM | Humanities | Average |\\n|--------------------------|---------|----------|-----------|-------|--------|------------|-------|------------|---------|\\n| **FT** | 7.47 | 7.55 | 3.80 | 6.40 | **6.50** | 6.53 | 7.65 | 9.20 | 6.88 |\\n| **LoRA** | 9.10 | 8.00 | 
5.30 | **6.55** | 6.30 | 7.00 | 8.65 | **10.0** | 7.61 |\\n| **Representation Editing** | **9.15** | **8.15** | **5.85** | 6.25 | 5.95 | **7.30** | **9.70** | 9.93 | **7.78** |\\n\\nBased on the results in Tables 1 and 2, we can clearly see that the Representation Editing method enhances the models' performance in the Roleplay dimension compared to both LoRA and FT methods. Simultaneously, the impact of the Representation Editing method on other dimensions is minimal, and we even observe slight performance improvements in certain areas. This demonstrates that our method **successfully improves the models' role-playing abilities while maintaining their general capabilities**. These results fully attest to the effectiveness of the Representation Editing method.\"}", "{\"title\": \"Response to Q5: What is internally differentiate mean?\", \"comment\": \"Previous research has shown that when LLMs process inputs, their internal representations reflect the model's grasp of the knowledge within the query. For example, [1] and [2] have pointed out that a model's internal representations can reveal its ability to distinguish between known and unknown information.\\n\\n**\\\"Internal distinction\\\" refers to the model's ability to recognize and differentiate different types of queries within its internal representations.** Our experimental results indicate that the model lacks this internal distinction capability when dealing with parametric knowledge conflict queries, leading to an inability to appropriately refuse to answer such queries. In contrast, for contextual knowledge conflict queries, the model can effectively distinguish internally and thus adopt appropriate refusal strategies.\\n\\n[1] Azaria, Amos, and Tom Mitchell. \\\"The internal state of an LLM knows when it's lying.\\\" arXiv preprint arXiv:2304.13734 (2023).\\n\\n[2] Ji, Ziwei, et al. 
\"Llm internal states reveal hallucination risk faced with a query.\" arXiv preprint arXiv:2407.03282 (2024).\"}", "{\"comment\": \"Thank you for your continued attention and feedback. We understand your concerns about the length of our responses. The extensive responses were intended to address all questions you raised comprehensively. Now, we would like to focus on addressing your key concerns:\n\n1. Regarding the generalizability of experimental results:\nWe understand your concern - good performance across multiple experiments does not fully guarantee the generalizability of the method and the scenarios. We would like to offer the following clarifications:\n\n- Comprehensiveness of experimental design: We conducted tests across different model architectures (Llama-3, Mistral, Qwen, etc.), and the results showed consistent trends.\n- Cost-benefit trade-off: While extending to more scenarios could further validate the method's generalizability, this would incur significant costs in data generation and verification. We believe the current experimental scope reasonably demonstrates both the effectiveness of our method and the soundness of our scenario design.\n- Application potential: As Reviewer `oci5` noted, our method has potential for broad applications.\n\n2. Regarding the analysis of internal recognition mechanisms:\", \"to_further_clarify_the_meaning_of_our_analysis\": \"The differences in model accuracy when facing different types of queries directly reflect its internal ability to differentiate and recognize these queries.
Specifically:\\n\\n- Contextual conflict queries: The model performs well, indicating its internal mechanism can effectively identify such conflicts.\\n- Parametric knowledge conflict queries: Lower accuracy suggests the model's internal mechanism struggles to differentiate these queries from non-conflict queries.\\n\\nThis differentiation explains why the model exhibits varying refusal capabilities across different types of conflict queries.\\n\\nWhile we acknowledge there is room for improvement in our current work, we believe that under the existing experimental design and analytical framework, this research provides valuable insights and viable improvement methods.\"}", "{\"title\": \"Response to Q8: Formal Definition\", \"comment\": \"We recognize the need to provide a more detailed explanation of the representation editing method in Section 6.1, including formal definitions of key concepts. To help you better understand, we will detail the three steps of the method and provide formal definitions for each step.\\n\\n**Detailed Process of the Representation Editing Method:**\\n\\n---\\n\\n**Step 1: Collecting Activation**\\n\\nFor each role, we construct a set of **conflict queries** and **non-conflict queries**, represented as:\\n\\n- Conflict query set: $\\\\\\\\{ q_{\\\\\\\\text{conflict}}^i \\\\\\\\}_{i=1}^N$\\n- Non-conflict query set: $\\\\\\\\{ q_{\\\\\\\\text{non-conflict}}^i \\\\\\\\}_{i=1}^N$\\n\\nFor each query $q$, we obtain the model's hidden state representation at **each layer**, denoted as:\\n\\n- Conflict query representation at layer $l$: $\\\\\\\\mathbf{h}_{\\\\\\\\text{conflict}}^{i,l}$\\n- Non-conflict query representation at layer $l$: $\\\\\\\\mathbf{h}_{\\\\\\\\text{non-conflict}}^{i,l}$\\n\\nwhere $l = 1, 2, \\\\\\\\dots, L$, and $L$ is the number of layers in the model.\\n\\n---\\n\\n**Step 2: Identifying the Rejection Direction**\\n\\nIn this step, we calculate the representation differences between conflict and non-conflict queries 
at each layer to capture the features associated with the model's refusal behavior.\\n\\nFor each layer $l$, compute the representation difference vector for the $i$-th query pair:\\n\\n$$\\n\\\\\\\\Delta \\\\\\\\mathbf{h}^{i,l} = \\\\\\\\mathbf{h}_{\\\\\\\\text{conflict}}^{i,l} - \\\\\\\\mathbf{h}_{\\\\\\\\text{non-conflict}}^{i,l}\\n$$\\n\\nThen, calculate the average of all difference vectors to obtain the **rejection direction** $\\\\\\\\mathbf{d}^l$ at layer $l$:\\n\\n$$\\n\\\\\\\\mathbf{d}^l = \\\\\\\\frac{1}{N} \\\\\\\\sum_{i=1}^N \\\\\\\\Delta \\\\\\\\mathbf{h}^{i,l}\\n$$\\n\\nTo filter out noise and retain features highly related to refusal behavior, we compute the variance for each dimension of the difference vectors. Let $\\\\\\\\sigma_{l,j}^2$ be the variance of the $j$-th dimension at layer $l$. We zero out dimensions with variance above a threshold $\\\\\\\\tau$, resulting in the adjusted rejection direction $\\\\\\\\mathbf{d}'^l$:\\n\\n$$\\n\\\\\\\\mathbf{d}_{j}'^l = \\\\\\\\begin{cases}\\n\\\\\\\\mathbf{d}_{j}^l, & \\\\\\\\text{if } \\\\\\\\sigma_{l,j}^2 \\\\\\\\leq \\\\\\\\tau \\\\\\\\\\\\\\\\\\n0, & \\\\\\\\text{if } \\\\\\\\sigma_{l,j}^2 > \\\\\\\\tau\\n\\\\\\\\end{cases}\\n$$\\n\\n---\\n\\n**Step 3: Steering Activation**\\n\\nWith the rejection direction for each layer, we intervene in the model's internal representations when processing new queries.\\n\\nFor a new query $q$, obtain its hidden state representation at layer $l$, $\\\\\\\\mathbf{h}^l$.\\n\\nCalculate the similarity between $\\\\\\\\mathbf{h}^l$ and the rejection direction $\\\\\\\\mathbf{d}'^l$, for example, using cosine similarity:\\n\\n$$\\n\\\\\\\\text{sim}(\\\\\\\\mathbf{h}^l, \\\\\\\\mathbf{d}'^l) = \\\\\\\\frac{\\\\\\\\mathbf{h}^l \\\\\\\\cdot \\\\\\\\mathbf{d}'^l}{\\\\\\\\|\\\\\\\\mathbf{h}^l\\\\\\\\| \\\\\\\\|\\\\\\\\mathbf{d}'^l\\\\\\\\|}\\n$$\\n\\nIf the similarity exceeds a set threshold $\\\\\\\\theta$, the query at layer $l$ may require intervention. 
We add the rejection direction to the original representation proportionally by $\\\\\\\\lambda$:\\n\\n$$\\n\\\\\\\\mathbf{h}^{l} \\\\\\\\leftarrow \\\\\\\\mathbf{h}^{l} + \\\\\\\\lambda \\\\\\\\mathbf{d}'^l\\n$$\\n\\nBy adjusting the representations at each layer, we gradually guide the model to be more inclined to refuse to answer conflict queries.\\n\\n---\\n\\n**Formal Definitions of Concepts:**\\n\\n- **Rejection Direction $\\\\\\\\mathbf{d}^l$**: At layer $l$, it is the average representation difference when the model processes conflict versus non-conflict queries, capturing features of the model's refusal behavior.\\n\\n- **Adjusted Rejection Direction $\\\\\\\\mathbf{d}'^l$**: Obtained by filtering $\\\\\\\\mathbf{d}^l$ based on variance, retaining features highly related to refusal behavior.\\n\\nThrough this method, we can enhance the model's ability to refuse conflict queries by leveraging its representations at each layer, without altering the model's parameters.\"}", "{\"title\": \"Response to W3: Details of RPA Application Scenarios and Why They Are Limited to Query-Response Scenarios\", \"comment\": \"1. **On the Importance of RPAs:**\\n\\n RPAs play a significant role in the field of artificial intelligence, especially in interactive applications such as video games, virtual assistants, and education. In video games, RPAs are used as Non-Player Characters (NPCs) to create realistic and personalized characters, making the game environment more vivid and immersive. As NPC dialogue systems, they can generate dynamic and context-appropriate responses based on player input [4]. This makes interactions with NPCs more engaging and lifelike, reduces repetitive dialogue, and provides a more exploratory experience within the game. Additionally, RPAs are applied in areas like virtual customer service and online education, acting in specific roles to provide personalized services to users. 
Therefore, a deep understanding and enhancement of RPAs' capabilities are crucial for improving the quality and efficiency of human-computer interaction.\\n\\n2. **On the Importance of Refusal Abilities:**\\n\\n In interactions with users, RPAs may encounter questions that are beyond their knowledge scope, violate their role settings, or involve sensitive information. If RPAs lack appropriate refusal abilities, it can lead to the following problems:\\n\\n - **Providing incorrect or misleading information:** This may cause confusion for users or even bring about safety and ethical risks.\\n - **Breaking role consistency:** If RPAs answer questions that are inconsistent with their character, it may reduce user immersion and trust. For example, in a game, if an RPA portraying a medieval knight is asked about modern technology and cannot appropriately refuse or respond, it might break the game's authenticity and affect the player's experience.\\n\\n Therefore, enhancing RPAs' refusal abilities is vital for building reliable, professional, and consistent human-computer interaction systems.\\n\\n3. **Why the Research Focuses on Query-Response:**\", \"our_research_concentrates_on_the_query_response_interactions_of_rpas_mainly_because\": [\"**Core interaction method:** In most applications, the primary form of interaction between RPAs and users is through dialogue\\u2014exchanging information and completing tasks via queries and responses.\", \"**Focusing on key issues:** By studying query-response interactions, we can more directly analyze RPAs' ability to recognize and handle different types of requests (including conflicting requests), gaining deeper insights into the models' behavioral mechanisms.\", \"4. 
**Generalization of the Research:**\", \"Although our experiments are conducted within query-response scenarios, we believe that the designed scenarios, evaluation methods, and strategies for enhancing refusal abilities have broad applicability:\", \"**Universality of scenario design:** Our conflict scenarios are based on the two main knowledge sources of RPAs\\u2014contextual knowledge and parametric knowledge. These types of conflicts are common across various RPA applications and are not limited to specific domains.\", \"**Generality of evaluation methods:** The evaluation metrics and methods we propose can be used to assess RPAs' refusal abilities and role-playing capabilities in different applications, offering wide-ranging reference value.\", \"**Extensibility of enhancement methods:** Our improvement strategy based on the representation editing approach has already proven effective in query-response scenarios across different models. We believe it can also be extended to more complex interaction scenarios, helping to improve RPAs' performance in various applications.\", \"[4] Gallotta, R., et al. \\\"Large Language Models and Games: A Survey and Roadmap. arXiv 2024.\\\" arXiv preprint arXiv:2402.18659.\"]}", "{\"comment\": \"These definitions are rather useful.\"}", "{\"title\": \"Thank you for your detailed reply\", \"comment\": \"Thank you for the detailed reply. I will keep my weak accept score\"}", "{\"title\": \"Response to W4: Computation Overhead\", \"comment\": \"The representation editing method does not incur significant additional computational overhead. We analyze the computational overhead of our method mainly from two aspects: training overhead and inference overhead.\\n\\n**1. Training Overhead:**\\n\\nAs we have shown in Table 4 of our paper, our method does not involve any trainable parameters.
Specifically, we only need to precompute and store the rejection vectors, which can then be simply added to the model's internal representations during practical applications. Therefore, compared to Fine-Tuning (FT) and LoRA, the computational overhead during the training phase of the representation editing method is nearly zero.\\n\\n**2. Inference Overhead:**\\n\\nDuring inference, our method only requires a simple vector addition operation between the precomputed rejection vectors and the current internal representations of the model. This operation has a computational complexity similar to the adapter modules in LoRA. Since this operation is extremely lightweight, its impact on inference time and computational resources is almost negligible. Therefore, our method does not introduce significant additional overhead during the inference phase either.\"}", "{\"title\": \"Response to Q1: Changes in Probe Accuracy\", \"comment\": \"**1. Input Processing in Probe Experiments:**\\n\\nIn our probe experiments, the input to the probe is the representation of the last token of the query. To avoid semantic inconsistencies that may arise from different tokens, we standardized the last token of all queries to be the same character (the end-of-text token, for example, `<|eot_id|>` in the Llama model). This approach ensures that the inputs received by the probe are semantically consistent, reducing interference caused by semantic differences among tokens.\\n\\n**2. Reason for High Accuracy in Early Layers:**\\n\\nThe shallow layers of Transformer models (those close to the input layer) primarily handle and encode the basic semantic and syntactic information of input tokens. In this scenario, since the last token of all queries is standardized to the end-of-text token, the representation vectors in the shallow layers are very similar semantically. This high similarity makes it challenging for the probe to distinguish between different types of queries in the early layers. 
The probe model tends to make the same prediction for most samples (e.g., predicting 0 or 1 for all). As we mentioned in Appendix C.2, for each type we use the same amount of data, but since there are four types of conflicting queries and only one type of non-conflicting queries, conflicting queries are the majority. Therefore, the probe is inclined to predict conflicting queries, leading to higher accuracy for conflicting queries and lower accuracy for non-conflicting queries in early layers.\\n\\n**3. Reason for Changes in Accuracy as Layer Number Increases:**\\n\\nAs the layers deepen, the model gradually integrates higher-level semantic and contextual information. The representations in the deeper layers contain more complex features related to the query background and context. These features cause different types of queries to become more dispersed and diverse in the representation space. Consequently, the features contained in the representations increase accordingly, leading to changes in the probe's accuracy.\\n\\nOur observations are consistent with existing research findings. For example, in their studies, [1] and [2] point out that the shallow layers of the model are mainly responsible for encoding basic grammatical and semantic information, while the deeper layers integrate more complex contextual and semantic relationships. This explains why the probe exhibits higher/lower accuracy for conflicting/non-conflicting queries in the shallow layers and why the accuracy varies in the deeper layers.\\n\\n[1] Zou, Andy, et al. \\\"Representation Engineering: A Top-Down Approach to AI Transparency.\\\" *arXiv preprint* arXiv:2310.01405 (2023).\\n\\n[2] Liu, Wenhao, et al. \\\"Aligning Large Language Models with Human Preferences through Representation Engineering.\\\" *arXiv preprint* arXiv:2312.15997 (2023).\"}", "{\"title\": \"Response to The cases are not realistic enough\", \"comment\": \"We have provided a set of real failure cases.
These cases consist of 10 refusal failures randomly sampled from the \\\"Gandalf\\\" role's absent conflict scenarios on GPT-4o. The data is stored in an [Excel file](https://anonymous.4open.science/r/Failure-Cases-of-Tell-Me-What-You-Don-t-Know-A3B4/Failure%20Cases.csv), which details each specific query and the corresponding response from GPT-4o. To ensure anonymity during the review process, we have stored these failure cases in an anonymous GitHub repository for reviewers to access.\"}", "{\"title\": \"Response to W1: benefit from including user-centric studies\", \"comment\": [\"Thank you for your suggestion. As `Reviewer tFS5` mentioned, *\\\"It seems from Figure 1 that RPAs mainly converse with users.\\\"* Our research is indeed user-centric, which is reflected in the following aspects:\", \"**Design of User Interaction Scenarios:** When constructing the RoleRef evaluation benchmark, we designed various query scenarios that involve user interactions, including role setting conflicts and knowledge conflicts. The aim is to simulate real-world issues that users might encounter when using RPAs, ensuring our study addresses practical user concerns.\", \"**Consideration of Avoiding Over-Refusal:** While evaluating RPAs' refusal capabilities, we also emphasize preventing the model from over-refusing. We ensure that users can still receive effective responses within a reasonable scope, thereby enhancing the overall user experience and satisfaction.\", \"**Evaluation of Role-Playing Effectiveness:** We placed special focus on role-playing consistency and the model's ability to recognize knowledge boundaries. These capabilities directly impact user immersion and trust during interactions with RPAs.\", \"**Practical Application Examples:** In our introduction, we discussed the applications of RPAs in video games, virtual assistants, and educational tools.
These are real-world, user-facing scenarios that highlight the user-oriented nature of our research.\", \"By centering our study around user interactions and experiences, we believe our work effectively evaluates the real-world impact of enhanced refusal capabilities in RPAs.\"]}", "{\"title\": \"Response to W1: Definition of Rejection Regions and Direct Response Regions\", \"comment\": \"Regarding your concern about the definitions of \\\"rejection regions\\\" and \\\"direct response regions\\\" not being rigorous, we would like to offer clarification.\\n\\nFirstly, in our paper, we did not claim to **demonstrate** the existence of these two regions; rather, we aimed to **reveal** the distinct response patterns of the model to different types of queries through extensive experimental observations.\", \"our_preliminary_definitions_of_these_concepts_are_based_on_empirical_observations\": \"- **Rejection Regions**: When the similarity between the input query's representation vector $\\\\\\\\mathbf{h}^l$ and the rejection direction vector $\\\\\\\\mathbf{d}'^l$ exceeds a certain threshold $\\\\\\\\theta$, i.e., $\\\\\\\\text{sim}(\\\\\\\\mathbf{h}^l, \\\\\\\\mathbf{d}'^l) \\\\\\\\geq \\\\\\\\theta$, the model is more inclined to trigger the refusal mechanism and decline to answer the query.\\n\\n- **Direct Response Regions**: When the similarity is below the threshold $\\\\\\\\theta$, i.e., $\\\\\\\\text{sim}(\\\\\\\\mathbf{h}^l, \\\\\\\\mathbf{d}'^l) < \\\\\\\\theta$, the model tends to generate a direct response to the query.\\n\\nThe similarity $\\\\\\\\text{sim}(\\\\\\\\mathbf{h}^l, \\\\\\\\mathbf{d}'^l)$ can be calculated using cosine similarity:\\n\\n$$\\n\\\\\\\\text{sim}(\\\\\\\\mathbf{h}^l, \\\\\\\\mathbf{d}'^l) = \\\\\\\\frac{\\\\\\\\mathbf{h}^l \\\\\\\\cdot \\\\\\\\mathbf{d}'^l}{\\\\\\\\|\\\\\\\\mathbf{h}^l\\\\\\\\| \\\\\\\\|\\\\\\\\mathbf{d}'^l\\\\\\\\|}\\n$$\\n\\nWe acknowledge that due to the high dimensionality and complex nonlinear characteristics of 
large language models' representation spaces, providing strict mathematical definitions and proofs is challenging. Therefore, our definitions are more intuitive descriptions based on experimental phenomena. In our research, we observed similar patterns across various large language models (such as Llama-3, Mistral, Qwen) and different conflict scenarios. Despite differences in architectures and training data, these models consistently exhibited differentiated treatment between conflicting and non-conflicting queries in the representation space. This consistency further supports our assertion regarding the existence of rejection regions and direct response regions.\"}", "{\"title\": \"Response to W4&Q3: Concerns about Synthetic Data\", \"comment\": \"**1. Rationale for Using GPT-4 for Data Generation:**\\n\\nIn current natural language processing research, utilizing large pre-trained models like GPT-4 for data synthesis has become a common and effective approach [1]. Numerous studies have demonstrated [2][3] that data generated by GPT can enhance model performance and provide high-quality data support, especially in the absence of large-scale annotated datasets. For instance, researchers have successfully used GPT-generated data to train and evaluate models, achieving significant results.\\n\\n**2. Design and Quality Assurance of the Data Generation Process:**\\n\\nWhen using GPT-4 for data generation, we implemented a rigorous process design and quality control measures, as detailed in Section 3.2 of our paper, to ensure the validity and reliability of the data:\\n\\n- **Diversity and Coverage**: We carefully designed the prompts for data generation to ensure that the generated questions and answers cover a wide range of scenarios and types, avoiding data uniformity and bias.\\n\\n- **Data Quality Filtering**: After data generation, we incorporated a data quality filtering step, using both automated and manual methods to remove low-quality or non-compliant data. 
This includes eliminating duplicates, ensuring the reasonableness of questions, and the accuracy of answers.\\n\\n- **Manual Verification**: To further ensure data quality, we conducted random sampling and manual verification of the generated data, assessing its fluency, relevance, and accuracy.\\n\\n**3. On Potential Bias and Impact:**\\n\\nOur main conclusions are based on multiple models and diverse datasets, and we observed consistent results across different experimental settings. This indicates that our findings are not merely artifacts of GPT-4 data generation but have a certain degree of generalizability.\\n\\n[1] Qin, Yulei, et al. \\\"Unleashing the power of data tsunami: A comprehensive survey on data assessment and selection for instruction tuning of language models.\\\" arXiv preprint arXiv:2408.02085 (2024).\\n\\n[2] Ding, Ning, et al. \\\"Enhancing chat language models by scaling high-quality instructional conversations.\\\" arXiv preprint arXiv:2305.14233 (2023).\\n\\n[3] Taori, Rohan, et al. \\\"Alpaca: A strong, replicable instruction-following model.\\\" Stanford Center for Research on Foundation Models. https://crfm.stanford.edu/2023/03/13/alpaca.html (2023).\"}", "{\"title\": \"Acknowledgement of author's response\", \"comment\": \"The response is good, but after reading through the negative reviewer's comments criticizing primarily the generalizability, I kind of agree with him though not to that extent, i.e. I keep my score\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to Q9: Baseline selection\", \"comment\": \"We consider Fine-tuning and LoRA as baseline methods for the following reasons:\\n\\n1. **Shared Research Objective**: Our proposed Representation Editing method aims to enhance the model's ability in refusal scenarios while maintaining its original performance as much as possible. Fine-tuning and LoRA are also commonly used methods aimed at improving a model's refusal capabilities.
Using them as baselines allows for a fair comparison under the same objectives, providing an objective assessment of the effectiveness of our method.\\n\\n2. **Representativeness and Prevalence**: Fine-tuning and LoRA are widely adopted in research to enhance the performance of large language models. Many previous studies have used these methods to improve a model's refusal capabilities. Choosing them as baselines provides representativeness and reference value, helping to demonstrate the advantages of our method over existing commonly used methods.\\n\\nTherefore, in Section 6.2, we selected Fine-tuning and LoRA as baseline methods to objectively and comprehensively evaluate the effectiveness and advantages of our proposed Representation Editing method. By comparing with these commonly used methods, we aim to demonstrate that our method can effectively maintain or even enhance the model's original performance while improving its refusal capabilities.\"}", "{\"comment\": \"Thank you for your reply. It addresses some of my concerns. I think the research problem is interesting and may have broad applications. I will keep my Weak Accept rating.\"}", "{\"title\": \"Response to W5: Question About The Details of Data Generation.\", \"comment\": [\"In Section 3.2 \\\"Data Construction\\\" of our paper, we have provided a thorough description of how we built our dataset. Specifically:\", \"**Multi-Source Data Collection**: We extended the existing TIMECHARA dataset to construct our RoleRef dataset by integrating real dialogue data, publicly available data resources, and synthetic data generated using large language models like GPT-4. This approach ensured that our dataset was diverse and comprehensive.\", \"**Designing Diverse Refusal Scenarios**: To comprehensively evaluate the refusal capabilities of models, we meticulously designed various conflict scenarios. 
These include role setting conflicts, role description conflicts, factual knowledge conflicts, and absent knowledge conflicts. Each scenario is representative and diverse, targeting different types of knowledge conflicts that models might encounter in practice.\", \"**Data Generation Strategies**: We combined predefined templates with automated tools to generate a large volume of high-quality queries. For each generated query, we provided corresponding reference answers and explanations to ensure the accuracy and reliability of the data.\", \"**Manual Verification**: To further ensure data quality, we performed random sampling and manual verification of the generated data. This process involved assessing the fluency, relevance, and accuracy of the queries and responses.\", \"**2. Discussion on the Dependency of Results:**\", \"We understand your concern that our experimental results might depend heavily on the specific data generation process. To address this, we have implemented the following measures to ensure the robustness and generalizability of our findings:\", \"**Multi-Model Validation**: We conducted experiments across multiple models with different architectures and scales, including the Llama-3 and Qwen2 series. These models vary in training data, parameter sizes, and design. Despite these differences, our experiments demonstrated consistent trends across all models. This consistency suggests that our findings are not specific to any single model, indicating a certain level of universality and robustness.\", \"**Multi-Scenario Testing**: Our experiments encompass a wide range of conflict and non-conflict query types. 
By evaluating model performance across different scenarios, we verified the stability of our results and minimized the potential impact of the data generation process on our conclusions.\", \"These efforts collectively ensure that our results are not artifacts of a particular data generation method but are indicative of broader trends in model behavior. By rigorously designing our data generation process and thoroughly validating our findings across multiple models and scenarios, we have strengthened the generalizability and practicality of our research.\"]}", "{\"title\": \"Response to Q11: Explain the meaning of consistency.\", \"comment\": \"1. **Meaning of \\\"Consistency\\\":**\\n\\n**\\\"Consistency\\\" refers to the similar characteristics observed in the distribution patterns of internal representations across different model architectures (e.g., Llama3-8B-Instruct, Mistral-7B-Instruct, Qwen2-7B-Instruct).** Specifically, this consistency is reflected in: (1) Different roles having distinct representation spaces; (2) Roles from the same series (e.g., the same novel or story) being closer in the representation space; (3) Separation of representations for contextual knowledge conflict queries from non-conflict queries; (4) Overlap of representations for parametric knowledge conflict queries with non-conflict queries.\\n\\n2. **Why \\\"Consistency\\\" Enhances Robustness of Results:**\\n\\nWe assert that \\\"this consistency enhances the robustness of our findings across different model architectures\\\" because:\\n- **Similar Performance Across Models**: Different model architectures exhibit similar internal representation distributions when processing the same types of queries. 
This indicates that our findings are not specific to a single model but have general applicability.\\n- **Reliability of Results**: If different models show consistency in internal representation patterns, our analysis and conclusions about model behavior become more convincing and reliable.\\n- **Commonality in Model Behavior**: Consistency reflects that different models may follow similar mechanisms or strategies when processing information, helping us understand the universal characteristics of models.\\n\\n3. **Whether Results Are Related to Sentence Topics or Keywords:**\\n\\nWe understand your concern that these results might primarily be due to the presence of specific topics or keywords in the queries. To address this, we took the following measures during data generation to ensure query diversity and avoid bias towards specific topics or keywords:\\n\\n- **Increasing Query Diversity**: When constructing queries, we used various question forms, including \\\"when,\\\" \\\"how,\\\" \\\"where,\\\" etc., to avoid relying solely on specific sentence structures or keywords. This made the queries richer in syntactic structure and vocabulary, reducing the influence of specific keywords on the model.\\n- **Ensuring Topic Diversity Based on Original Novel Content**: Our queries were generated based on the original novel texts, covering different events, character relationships, and plots in the novels. This ensured topic diversity and prevented the queries from being overly concentrated on specific topics, requiring the model to handle a variety of semantic content.\\n\\nTherefore, when we refer to **\\\"consistency,\\\" we mean the similarity in internal representation distribution patterns across different models.** This consistency enhances the robustness of our findings because it indicates that our conclusions are applicable across different model architectures, demonstrating generality.\"}" ] }
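The representation-editing method discussed throughout this record (the rejection direction $d^l$, its variance-filtered version $d'^l$, and the layer-wise update $h^l \leftarrow h^l + \lambda d'^l$) can be sketched in a few lines of NumPy. This is an illustrative sketch only, not the authors' code: the function names, the toy data, and the exact variance-filtering rule (zeroing out dimensions whose difference is unstable across samples) are assumptions about the procedure described above.

```python
import numpy as np

def rejection_direction(conflict_reps, nonconflict_reps, var_threshold=1.0):
    # d^l: mean representation difference between conflict and
    # non-conflict queries at one layer.
    d = conflict_reps.mean(axis=0) - nonconflict_reps.mean(axis=0)
    # d'^l: keep only dimensions that vary little across conflict samples,
    # i.e., features stably tied to refusal behavior (this particular
    # filtering rule is an assumption about the described method).
    per_dim_var = conflict_reps.var(axis=0)
    return np.where(per_dim_var < var_threshold, d, 0.0)

def edit_representation(h, d_adj, lam=0.5):
    # The layer-wise edit: h^l <- h^l + lambda * d'^l.
    return h + lam * d_adj

# Toy example: 4 conflict and 4 non-conflict query representations (dim 8).
rng = np.random.default_rng(0)
conflict = rng.normal(loc=1.0, scale=0.1, size=(4, 8))
nonconflict = rng.normal(loc=0.0, scale=0.1, size=(4, 8))

d_adj = rejection_direction(conflict, nonconflict)
h_edited = edit_representation(np.zeros(8), d_adj, lam=0.5)
```

In practice the same precomputed `d_adj` would be added to the hidden states at every layer during decoding, which is why the method has essentially no training cost and only a vector addition at inference time.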
87B3zDRMjv
RankNovo: A Universal Reranking Approach for Robust De Novo Peptide Sequencing
[ "Zijie Qiu", "Jiaqi Wei", "Xiang Zhang", "Sheng Xu", "Kai Zou", "Zhi Jin", "ZhiQiang Gao", "Nanqing Dong", "Siqi Sun" ]
De novo peptide sequencing is a critical task in proteomics research. However, the performance of current deep learning-based methods is limited by the inherent complexity of mass spectrometry data and the heterogeneous distribution of noise signals, leading to data-specific biases. We present RankNovo, the first deep reranking framework that enhances de novo peptide sequencing by leveraging the complementary strengths of multiple sequencing models. RankNovo employs a list-wise reranking approach, modeling candidate peptides as multiple sequence alignments and utilizing axial attention to extract informative features across candidates. Additionally, we introduce two new metrics, PMD (Peptide Mass Deviation) and RMD (Residual Mass Deviation), which offer delicate supervision by quantifying mass differences between peptides at both the sequence and residue levels. Extensive experiments demonstrate that RankNovo not only surpasses its individual base models, which are used to generate training candidates for reranking pre-training, but also sets a new state of the art on de novo sequencing benchmarks. Moreover, RankNovo exhibits strong zero-shot generalization to unseen models—those whose generations were not exposed during training—highlighting its robustness and potential as a universal reranking framework for peptide sequencing. Our work presents a novel reranking strategy that fundamentally challenges existing single-model paradigms and advances the frontier of accurate de novo peptide sequencing. Our source code is provided at an anonymous link.
[ "Peptide Sequencing", "De novo", "Reranking" ]
Reject
https://openreview.net/pdf?id=87B3zDRMjv
https://openreview.net/forum?id=87B3zDRMjv
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wyLJCuC69J", "wexKJNdlUJ", "rB71TcaCWG", "nePsQH1k4Q", "mYy2QqFOo7", "cYJdneu1kd", "ZiEV4OgdrI", "R1NmKcWEBq", "NMvy9BkGfX", "NDtj3TEmU5", "N6sVAvM7ff", "IkX8gckvR0", "IK7l0sT8Py", "GRyR0y7F2s", "B8xUfikda4", "5B5dnmF00T", "4mcOsabhRS" ], "note_type": [ "decision", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1737523651599, 1730537413526, 1732095493729, 1730616984387, 1732095625235, 1732094975635, 1732095109802, 1732096061081, 1732096631035, 1730592945592, 1732627063916, 1734893524520, 1730622745592, 1732096296568, 1732095336039, 1732517453688, 1732096570792 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4619/Reviewer_GLyc" ], [ "ICLR.cc/2025/Conference/Submission4619/Authors" ], [ "ICLR.cc/2025/Conference/Submission4619/Reviewer_irm2" ], [ "ICLR.cc/2025/Conference/Submission4619/Authors" ], [ "ICLR.cc/2025/Conference/Submission4619/Authors" ], [ "ICLR.cc/2025/Conference/Submission4619/Authors" ], [ "ICLR.cc/2025/Conference/Submission4619/Authors" ], [ "ICLR.cc/2025/Conference/Submission4619/Authors" ], [ "ICLR.cc/2025/Conference/Submission4619/Reviewer_2GzM" ], [ "ICLR.cc/2025/Conference/Submission4619/Reviewer_irm2" ], [ "ICLR.cc/2025/Conference/Submission4619/Area_Chair_cJmu" ], [ "ICLR.cc/2025/Conference/Submission4619/Reviewer_pMKc" ], [ "ICLR.cc/2025/Conference/Submission4619/Authors" ], [ "ICLR.cc/2025/Conference/Submission4619/Authors" ], [ "ICLR.cc/2025/Conference/Submission4619/Authors" ], [ "ICLR.cc/2025/Conference/Submission4619/Authors" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"The paper presents the first deep reranking 
framework, RankNovo, based on a list-wise reranking approach, which enhances de novo peptide sequencing by leveraging the complementary strengths of multiple sequencing models. Experimental results show that RankNovo achieves state-of-the-art performance on de novo sequencing benchmarks, outperforming each of its component base models. Moreover, RankNovo exhibits strong zero-shot generalization to unseen models, which highlights its robustness and potential.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper introduces a deep learning-based reranking framework for peptide de novo sequencing, and focuses on addressing the preferential bias challenges inherent in peptide sequencing.\\n2. The paper introduces two novel metrics, PMD and RMD, for accurate measurement of mass differences between peptides, which enable the model to more accurately distinguish complex similar sequences.\\n3. In the experiments, RankNovo surpasses each of its individual ensemble components on the 9-species-V1 and the 9-Species-v2 dataset, demonstrating superior performance at both the amino acid and peptide levels.\\n4. RankNovo generalizes effectively to unseen models in a zero-shot setting, which underscores its potential and value for future applications in de novo peptide sequencing.\", \"weaknesses\": \"1. The proposed RankNovo relies on the setting of multiple base models, which increases the computational cost.\\n2. In formula 10, L_coarse and L_fine are not clearly defined. It seems more appropriate to use L_PMD and L_RMD?\\n3. In formula (1) and formula (10), the use of \u03bb is confusing.\\n4. The setting of \u03bb in Equation 10 is unclear. Is the \u03bb used in different tasks inconsistent? The author needs to clarify the setting of \u03bb and the impact of \u03bb on performance.\\n5. In order to study the influence of the number of base models and each model, you select five subsets.
Could you explain the basis for sequentially removing the strongest model? Why not remove the poorest ones?\\n6. In Table 8, compared with the 5-model setting, the 6-model setting introduces the strongest model, ByNovo, yet the average performance decreases. What is the possible reason?\\n7. In Conclusion, there is relatively little discussion of future optimization and application prospects, remaining challenges, and research limitations. It is suggested that these be supplemented to make the paper more comprehensive and forward-looking.\", \"minor_issues\": \"1. typo error: Please confirm if the term \\u2018expriment\\u2019 is used correctly in the titles of Section 4 and 4.1 of the article.\\n2. There are three models in \\\"The latter four models, ByNovo, R-ContraNovo, and R-ByNovo\\\".\", \"questions\": \"Please see the questions and suggestions raised in the section Weaknesses above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"To Reviewer irm2 Part \\u2161\", \"comment\": \"**Q4**: The definition of robustness in the paper seems to refer to the model's robustness to noise in mass spectrometry data, but RankNovo might not be sufficient to address this issue. On the other hand, it seems to refer to the issue of having inconsistent base models available during training and testing phases.\", \"a4\": \"We appreciate your insightful observation and acknowledge the confusion arising from our use of the term \\\"robustness.\\\" Our intended meaning refers to RankNovo's capability to select superior candidate peptides from multiple base models' predictions, even when certain base models were not incorporated during training.
To avoid conflating this with a model's resilience to noise in mass spectrometry data, we have corrected the misuse of \\\"robustness\\\" and replaced it with more precise wording in the revised manuscript.\\n\\nRegarding your observation about RankNovo's potential limitations in addressing mass spectrum data noise heterogeneity, we posit that such heterogeneity is not a problem to be solved, but rather an inherent and essential feature of de novo sequencing. One of RankNovo's contributions is highlighting this fundamental feature to researchers in the field, and our reranking framework attempts to leverage this feature constructively. While our approach may have limitations, we believe it provides valuable insights for future research directions in this domain.\\n\\n**Q5**: Whether some naive methods can also achieve reasonable results, such as (1) the peptide with the highest score from a model output, (2) the most frequently predicted peptide, or (3) the peptide closest to the precursor mass.\", \"a5\": \"We appreciate this insightful question. We have evaluated these three methods and included a performance comparison with RankNovo in Appendix A.5.5 of our revised manuscript. Our findings show that methods (1) and (2) demonstrate modest improvements in peptide recall compared to the best ByNovo baseline on Nine-species-V1, with increases of 0.008 and 0.010 respectively. However, these improvements are substantially lower than RankNovo's 0.037 gain. Method (3) resulted in a significant decrease in peptide recall from 0.623 to 0.525. This decline can be attributed to the fact that most candidate peptides already closely match the precursor mass. Method (3)'s exclusive focus on mass difference, while disregarding other semantic information, proves counterproductive. Methods (1) and (2) show some improvement by leveraging collective intelligence, as expected.
However, these approaches do not utilize semantic information, spectral data, and candidate peptide features as comprehensively as RankNovo's data mining and machine learning methodology. Therefore, to achieve optimal performance, deep learning reranking approaches as exemplified by RankNovo remain essential.\\n\\n[1] Contranovo: A contrastive learning approach to enhance de novo peptide sequencing\\n\\n[2] Sequence-to-sequence translation from mass spectra to peptides with a transformer model\\n\\n[3] \\u03c0-PrimeNovo: An Accurate and Efficient Non-Autoregressive Deep Learning Model for De Novo Peptide Sequencing\"}", "{\"summary\": \"This paper presents the first deep learning-based reranking framework that enhances de novo peptide sequencing by leveraging the complementary strengths of multiple sequencing models. RankNovo scores the output sequences of multiple de novo models to train the model (training phase) or to obtain the optimal sequence through scoring (inference phase). The paper also introduces novel metrics such as PMD and RMD.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper demonstrates a high degree of novelty by proposing a method that utilizes predicted sequences from multiple de novo models, challenging the traditional single-model paradigm.\\n2. It achieves performance that surpasses all baselines and base models on two datasets.\\n3. The introduction of novel metrics such as PMD and RMD is likely the first of its kind in de novo sequencing, providing valuable reference points.\", \"weaknesses\": \"1. The performance improvement seems modest. RankNovo's peptide recall only surpasses the baseline by 0.037 (V1) and 0.023 (V2). Since RankNovo is trained and inferred based on these base models, it theoretically can only outperform them. The evaluation of RankNovo's performance should be based on its improvement relative to SOTA baselines. 
Otherwise, for such minor improvements, why not simply use the predictions from other models instead of running inference with multiple models through RankNovo?\\n\\n2. There seems to be a lack of analysis regarding the parameter number, training time, and inference time of RankNovo compared to other de novo models, which should be addressed in the experimental section. Given that RankNovo requires inference results from multiple (2 to 6) other models during training and inference, it can be inferred that its training and inference times would be slower than those of other models.\", \"questions\": \"1. The definition of robustness in the paper seems somewhat ambiguous. On one hand, it appears to refer to the model's robustness to noise in mass spectrometry data (Lines 12\\u201314, Lines 81\\u201384), but RankNovo might not be sufficient to address this issue. On the other hand, it seems to refer to the issue of having inconsistent baseline models available during training and testing phases.\\n\\n\\n2. Based on the disadvantage mentioned earlier regarding modest performance improvement, there should be experiments to demonstrate whether naive methods (instead of RankNovo) can also achieve reasonable results. For instance, during inference, for multiple predicted peptides from different base models, one could select: (1) the peptide with the highest score from a model output, (2) the most frequently predicted peptide, or (3) the peptide closest to the precursor mass, and compare the performance of these methods against RankNovo. 
This approach would help demonstrate that RankNovo indeed learns some knowledge from the predicted results of multiple models.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"To Reviewer 2GzM\", \"comment\": \"We thank you for your reviews and address your concerns as follows.\\n\\n**Q1**: The approach heavily depends on outputs from existing peptide sequencing models, which may limit its novelty.\", \"a1\": \"Reranking is a well-established technique across numerous natural language processing tasks, including question answering [1][2], recommendation systems [3], and recent large language model answer selection [4]. This technique involves collecting various candidates for a single query and selecting the optimal one among them. In the context of de novo sequencing, while reranking does utilize outputs from different existing models, this should not be viewed as a weakness or limitation. On the contrary, the reranking framework demonstrates excellent flexibility - regardless of base models and the source of candidates, RankNovo consistently delivers performance improvements (see Fig 3 (A) and Appendix A.5.1). RankNovo has illuminated an alternative pathway for enhancing sequencing task performance, distinct from the traditional approach of improving individual models. We anticipate that, influenced by RankNovo, future algorithmic development in this field will benefit from the synergistic interaction between two approaches: \\\"improving single model performance\\\" and \\\"developing superior ranking strategies.\\\" This dual-pathway advancement represents RankNovo's fundamental contribution to the field.\\n\\n**Q2**: The conclusion is somewhat simple and could be expanded to discuss the limitations of the current approach and potential future research directions.\", \"a2\": \"Your suggestion is very pertinent. 
In response, we have revised the Conclusion section to address the current limitations, future development directions, and potential impact on the de novo sequencing field. Specifically, we acknowledge that the primary limitation is the relatively slow inference speed. Future research could investigate efficient candidate sampling techniques, such as implementing base models with partially shared weights to reduce computational complexity. While RankNovo faces speed constraints, it represents a pioneering deep reranking framework that enables flexible balancing between inference time and performance, introducing an innovative approach to performance enhancement. We anticipate that, influenced by RankNovo, future algorithms in this field will benefit from the synergistic approach of simultaneously improving single-model performance and developing advanced reranking strategies.\"}", "{\"title\": \"General Response\", \"comment\": \"We sincerely thank all reviewers for their thoughtful and constructive feedback. We have carefully addressed each comment and revised the manuscript accordingly. To ensure clarity, all revised sections are highlighted in blue. 
Below, we summarize the primary revisions made to the paper:\\n\\n**Conclusion Section**: We have substantially revised the Conclusion to explicitly discuss the current limitations of our approach, outline promising directions for future work, and elaborate on the potential impact of our research on the field of de novo sequencing.\", \"introduction_and_related_work\": \"We have enriched the discussion of transformer-based approaches in both the Introduction and Related Work sections to better contextualize our contributions within the existing body of work.\\n\\n**Performance Comparison**: To provide a more comprehensive evaluation, we have included a performance comparison between RankNovo and three na\\u00efve ensemble methods in **Appendix A.5.5**.\\n\\n**Model Characteristics**: Additional information on model sizes, training costs, and inference speeds has been incorporated into **Appendix A.5.6**, offering deeper insights into the practicality and efficiency of our approach.\\n\\n**Subset Analysis**: We now include detailed results for a series of base model subsets generated by sequentially removing the poorest-performing model, which are presented in **Appendix A.5.7**. This analysis further illustrates the robustness of our methodology.\\n\\n**Typographical and Notation Corrections**: We have addressed typographical errors and clarified notation misuses throughout the manuscript to improve overall readability and precision.\\n\\nFor detailed responses and explanations, we refer to the official comments provided by the reviewers. 
We hope these revisions meet your expectations and enhance the quality and clarity of our work.\"}", "{\"title\": \"To Reviewer pMKc\", \"comment\": \"We thank you for your reviews and address your concerns as follows.\\n\\n**Q1**: The discussion of transformer-based approaches in the introduction is insufficient.\", \"a1\": \"We have significantly enhanced the discussion of transformer-based approaches in both the Introduction and Related Work sections. The revised text provides a more comprehensive review of recent advancements in transformer architectures, with a particular focus on their relevance to de novo sequencing and ensemble methods. These additions aim to contextualize our work more effectively within the broader research landscape. The updated content can be found on lines 74\\u201378 and 136\\u2013138 of the revised manuscript.\\n\\n**Q2**: In line 270, M represents the residue mass, but the same symbol is also used in line 306 to denote prefix mass. It would be better to add a subscript or identifier to distinguish prefix mass or, alternatively, use a different symbol to represent it.\", \"a2\": \"We sincerely apologize for any confusion caused by the inconsistent notation. In the revised manuscript, we have clarified this issue by ensuring that 'M' consistently represents the residue mass throughout the paper. Additionally, we have added summation notation in line 306 to explicitly and accurately denote the prefix mass, thus eliminating potential ambiguity.\\n\\n**Q3**: In the mathematical formula (7) on line 309, should \\\\overline{m}_{q\\\\tilde{j}} - \\\\overline{m}_{k\\\\tilde{j}} be corrected to \\\\overline{m}_{qi} - \\\\overline{m}_{k\\\\tilde{j}} to represent the prefix mass difference more accurately?\", \"a3\": \"Thank you for catching this error. You are correct, and we have rectified this typographical mistake in formula (7) on line 309 of the revised manuscript. 
The correction now accurately reflects the intended representation of the prefix mass difference. We appreciate your careful review and attention to this detail.\"}", "{\"title\": \"To Reviewer 2GzM Part \\u2161\", \"comment\": \"**Q3**: Is there any rationale for the selection of the six base models of RankNovo?\", \"a3\": \"Yes, the criteria for selecting the six base models are described in detail in **Appendix A.2.3** and can be summarized as follows:\\n1. **Avoiding Data Leakage**: The training datasets for the base models must be carefully controlled to ensure there is no data leakage, preserving the integrity of the evaluation.\\n2. **Diversity of Algorithms**: The base models should employ different underlying algorithms to capture varying data preferences, thereby enriching the ensemble\\u2019s capacity to rerank effectively.\\n3. **Performance Proximity**: The selected base models should exhibit comparable performance levels to ensure that no single model dominates the ensemble.\\n4. **Optimal Performance**: The performance of the base models should be as strong as possible to maximize their contributions to the reranking process.\", \"based_on_these_criteria\": \"- **Criterion (1) and (4)** restrict the selection to models trained on **Massive-KB** and evaluated on **Nine-species-V1/2**, as **Massive-KB** is the largest available training dataset, resulting in the best-performing models. Consequently, de novo sequencing algorithms such as GraphNovo [5] and \\u03a0-HelixNovo [6], which do not meet this criterion, were excluded.\\n- **Criterion (4)** necessitates including the previous state-of-the-art model, ContraNovo [7].\\n- **Criterion (3)** excludes weaker models, such as those performing below Casanovo-V2 [8], as these models would likely fail to contribute to reranking. As shown in **Fig. 3(B)** and **Appendix Fig. 
8**, even Casanovo-V2 uniquely solves less than 10% of the spectra, making weaker models ineffective for this task.\\n\\nAfter this filtering process, only **ContraNovo** and **Casanovo-V2** remained among publicly available models. To satisfy **Criterion (2)** for algorithmic diversity, we trained four additional models\\u2014**ByNovo, R-Casanovo, R-ContraNovo, and R-ByNovo**\\u2014to complement the ensemble.\\n\\nThis selection strategy ensures a diverse and high-performing base for RankNovo.\\n\\n[1] RankT5: Fine-Tuning T5 for Text Ranking with Ranking Losses\\n\\n[2] RankQA: Neural Question Answering with Answer Re-Ranking\\n\\n[3] Personalized re-ranking for recommendation\\n\\n[4] Aggregation of Reasoning: A Hierarchical Framework for Enhancing Answer Selection in Large Language Models\\n\\n[5] Mitigating the missing-fragmentation problem in de novo peptide sequencing with a two-stage graph-based deep learning model\\n\\n[6] Introducing \\u03c0-HelixNovo for practical large-scale de novo peptide sequencing\\n\\n[7] ContraNovo: A Contrastive Learning Approach to Enhance De Novo Peptide Sequencing\\n\\n[8] Sequence-to-sequence translation from mass spectra to peptides with a transformer model\"}", "{\"title\": \"To Reviewer GLyc Part \\u2162\", \"comment\": \"**Q7**: In Conclusion, there is relatively little content on the prospects and challenges of future optimization and applications, research limitations. It is suggested to supplement some to make it more comprehensive and forward-looking.\", \"a7\": \"Your suggestion is very pertinent. In response, we have revised the Conclusion section to address the current limitations, future development directions, and potential impact on the de novo sequencing field. Specifically, we acknowledge that the primary limitation is the relatively slow inference speed. 
Future research could investigate efficient candidate sampling techniques, such as implementing base models with partially shared weights to reduce computational complexity. While RankNovo faces speed constraints, it represents a pioneering deep reranking framework that enables flexible balancing between inference time and performance, introducing an innovative approach to performance enhancement. We anticipate that, influenced by RankNovo, future algorithms in this field will benefit from the synergistic approach of simultaneously improving single-model performance and developing advanced reranking strategies.\\n\\n[1] NovoBench: Benchmarking Deep Learning-based De Novo Peptide Sequencing Methods in Proteomics\\n\\n[2] PEAKS: powerful software for peptide de novo sequencing by tandem mass spectrometry\\n\\n[3] PepNovo:\\u2009 De Novo Peptide Sequencing via Probabilistic Network Modeling\"}", "{\"summary\": \"The paper presents RankNovo, a list-wise deep reranking framework for de novo peptide sequencing that uses outputs from multiple base models, applying axial attention and novel metrics for effective peptide reranking.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Introduces a unique list-wise reranking approach that effectively leverages outputs from multiple de novo peptide sequencing models.\\n2. Introduces PMD and RMD metrics, designed for precise quantification of mass differences between peptides.\\n3. The model demonstrates significant improvements over existing methods.\", \"weaknesses\": \"1. The approach heavily depends on outputs from existing peptide sequencing models, which may limit its novelty.\\n2. The conclusion is somewhat simple and could be expanded to discuss the limitations of the current approach and potential future research directions.\", \"questions\": \"1. RankNovo incorporates six de novo sequencing models. Is there any rationale for the selection of these?\\n2. 
See weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you very much for your response. The authors have effectively addressed all of my questions, and I will maintain my positive score.\"}", "{\"metareview\": \"The paper considers the problem of de-novo peptide sequencing and introduces a deep reranking framework, RankNovo, to rerank the\\ncandidate peptides from a set of base models using a list-wise strategy. The approach is demonstrated on de-novo sequencing benchmarks and outperforms each base models. \\n\\nThe paper is well written and the methodology sound and intuitive. The experimental study is extensive. In particular the analysis of the contribution of each base models is interesting. The AC and reviewers appreciate the additional discussion and empirical results provided during rebuttal (e.g. comparison with naive ensemble approaches). However, the novelty and significance of the contributions remain somewhat limited for ICLR. Indeed several reranking approaches have already been proposed for NLP, the approach yields modest performance improvements and the fact that it outperforms component base models is not surprising.\", \"additional_comments_on_reviewer_discussion\": \"The points raised by the reviewers concerned confusing notation, the need for more discussion on transformer-based approaches, additional analysis on time and parameter complexity, selection of the based models, novelty and limited benefits, among others. The AC and reviewers appreciate the authors' clarifying points and additional experiments. 
However the marginal novelty and limited benefits remain and the AC believes that the paper is better suited for a computational biology conference.\\nAs a remark, it would be interesting to study the influence of varying lambda to balance peptide-level and residual-level losses beyond the extreme cases of 0 and 1 reported in the rebuttal.\"}", "{\"summary\": \"It is an interesting paper that proposes a deep learning-based reranking framework and introduces two new metrics, PMD and RMD, to characterize quality differences at the peptide and residue levels. The new framework leverages the strengths of multiple de novo peptide sequencing models to achieve improved performance through reranking.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": [\"The paper is written very clearly and is easy to understand.\", \"The motivation is highly meaningful, revealing that different models have distinct advantages.\", \"A general framework is proposed, adaptable to any de novo peptide sequencing model.\", \"The experiments are thorough.\"], \"weaknesses\": [\"The discussion of transformer-based approaches in the introduction is insufficient.\", \"The meaning of some mathematical symbols is unclear.\"], \"questions\": \"- Recently, many transformer-based de novo peptide sequencing methods have emerged, such as HelixNovo[1], InstaNovo[2], AdaNovo[3], PrimeNovo[4], GraphNovo[5], etc. The discussion of these transformer-based approaches in the introduction is insufficient.\\n \\n- In line 270, M represents the residue mass, but the same symbol is also used in line 306 to denote prefix mass. 
It would be better to add a subscript or identifier to distinguish prefix mass or, alternatively, use a different symbol to represent it.\\n\\n- In the mathematical formula (7) on line 309, should \\\\overline{m}_{q\\\\tilde{j}} - \\\\overline{m}_{k\\\\tilde{j}} be corrected to \\\\overline{m}_{qi} - \\\\overline{m}_{k\\\\tilde{j}} to represent the prefix mass difference more accurately?\\n\\n[1] \\u03c0-HelixNovo for practical large-scale de novo peptide sequencing\\n\\n[2] De novo peptide sequencing with InstaNovo: Accurate, database-free peptide identification for large scale proteomics experiments\\n\\n[3] AdaNovo: Adaptive \\\\emph{De Novo} Peptide Sequencing with Conditional Mutual Information\\n\\n[4] \\u03c0-PrimeNovo: An Accurate and Efficient Non-Autoregressive Deep Learning Model for De Novo Peptide Sequencing\\n\\n[5] Mitigating the missing-fragmentation problem in de novo peptide sequencing with a two-stage graph-based deep learning model\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"To Reviewer GLyc\", \"comment\": \"We thank you for your reviews and address your concerns as follows.\\n\\n**Q1**: The proposed RankNovo relies on the setting of multiple base models, which increases the computational cost.\", \"a1\": \"Your assessment is reasonable. We have now incorporated comprehensive details of the inference speeds for RankNovo in the new section A.5.6. As anticipated, RankNovo's inference speed is indeed slower than single-model approaches. However, the reranking process itself is not the primary time constraint. 
The majority of RankNovo's inference time is consumed in gathering peptide candidates from base models, as these require sequential autoregression and beam search decoding, while RankNovo's inference involves only a single attention forward pass.\\n\\nAs we scale from 2 to 6 base models in RankNovo, the inference time increases approximately linearly, with inference speed decreasing proportionally. This increased computational cost is an inherent characteristic of reranking frameworks and represents an unavoidable trade-off compared to single-model approaches. However, RankNovo effectively leverages this additional inference time to achieve superior performance levels unattainable by single models. This inference time-performance trade-off can be flexibly adjusted by modifying the number of candidates.\\n\\nIn the context of de novo peptide sequencing, RankNovo's significance lies in introducing a novel approach that allows researchers to optionally scale up inference time in exchange for enhanced performance. This represents the first such option in the field.\\n\\n**Q2**: In formula 10, L_coarse and L_fine are not clearly defined. It seems more appropriate to use L_PMD and L_RMD?\", \"a2\": \"We agree with your suggestion. To enhance clarity and consistency, we have replaced the terms L_coarse and L_fine with L_PMD and L_RMD in Formula 10. These new terms more accurately reflect the nature of the loss functions, aligning with the definitions provided earlier in the paper.\\n\\n**Q3**: In formula (1) and formula (10), the use of \\u03bb is confusing.\", \"a3\": \"We're sorry for the confusion caused by the misuse of notations. To resolve this confusion, we have now changed all \\u03bbs representing mass-to-charge ratio to \\u03bc, while \\u03bb is only used in formula (10) representing the weight of aggregating PMD objective and RMD objective.\\n\\n**Q4**: Is the \\u03bb used in different tasks inconsistent? 
The author needs to clarify the setting of \\u03bb and the impact of \\u03bb on performance.\", \"a4\": \"Thank you for pointing this out. We apologize for the oversight in clarifying this aspect. To address this, we have now explicitly stated in line 348 of the revised manuscript that **\\u03bb** is consistently set to **0.5** across all tasks. The parameter **\\u03bb** is used to aggregate the two training objectives, PMD (Prefix Mass Difference) and RMD (Residue Mass Difference). To analyze the impact of **\\u03bb** on performance, we conducted an ablation study to evaluate the effects of using PMD alone (**\\u03bb=1**) or RMD alone (**\\u03bb=0**) as the training objective. The results of this study are provided in **Table 4** and discussed in detail in **Appendix A.4.2**. Our findings show that combining PMD and RMD (**\\u03bb=0.5**) yields superior results compared to using either PMD or RMD individually. This demonstrates that a balanced integration of both objectives is crucial for achieving optimal performance.\"}", "{\"title\": \"To Reviewer irm2\", \"comment\": \"Thank you for your detailed comments. We re-list and address your concerns as follows. The order of these concerns are rearranged to better express our opinions.\\n\\n**Q1**: RankNovo should be evaluated against SOTA baselines, or others may favor a stronger single model instead.\", \"a1\": \"ContraNovo [1], which is referenced in the initial three lines of our manuscript, represents the current published state-of-the-art baseline for Nine-species-V1 and Nine-species-V2 datasets. In our research, we incorporated ContraNovo as one of six base models in our reranking framework, and Tables 1 and 2 present comprehensive comparative analyses between RankNovo and ContraNovo. 
Therefore, RankNovo has already been evaluated against SOTA baselines, and shows superior performance both on Nine-species-V1 and Nine-species-V2.\", \"our_reranking_framework_includes_not_only_published_models_but_also_four_internally_developed_base_models\": \"ByNovo, R-Casanovo, R-ContraNovo, and R-ByNovo. These models demonstrate performance comparable to or exceeding that of ContraNovo, with ByNovo being a notable example of superior performance. Tables 1 and 2 clearly demonstrate RankNovo's performance advantages over all baseline models.\\n\\nTherefore, we believe that empirical evidence substantiates RankNovo's superior performance relative to state-of-the-art baselines.\\n\\n**Q2**: RankNovo's peptide recall improvement compared to base models are modest, which is expected since RankNovo uses ensemble and reranking framework.\", \"a2\": \"We acknowledge your observation regarding the scale of RankNovo's improvements, but we would like to contextualize these results within recent advances in the field.\\nFor perspective, the state-of-the-art ContraNovo [1] demonstrated peptide recall improvements of 0.051 and 0.038 over its predecessor, Casanovo-V2 [2]. Subsequently, a recent preprint work PrimeNovo [3] achieved gains of 0.020 and 0.025 over ContraNovo. In this context, RankNovo's improvements of 0.042 (Version 1) and 0.029 (Version 2) over ContraNovo, and 0.037 (Version 1) and 0.023 (Version 2) over the strongest base model ByNovo (also introduced in this work) represent significant advancements in the field. These incremental improvements are particularly meaningful in practical proteomics applications, where each additional correct peptide identification contributes to experimental accuracy. Furthermore, RankNovo's significance extends beyond metric improvements alone. 
The framework introduces a novel reranking methodology that complements existing sequencing models, establishing an alternative approach to performance enhancement that diverges from conventional single-model optimization strategies. We anticipate that this dual-approach paradigm will influence future algorithmic developments in the field, fostering innovation through the integration of both methodologies.\\n\\n**Q3**: RankNovo should be slower than other single model frameworks, and the parameter number, training time, and inference time should be provided.\", \"a3\": \"We have incorporated comprehensive details regarding parameter counts, training time, and inference speeds for our six base models and RankNovo in the new section A.5.6. In summary, RankNovo comprises 50.5M parameters and requires 4 days of training utilizing four 40GB A100 GPUs, which is comparable to the base models. As anticipated, RankNovo's inference speed is indeed slower than single-model approaches. However, the reranking process itself is not the primary time constraint. The majority of RankNovo's inference time is consumed in gathering peptide candidates from base models, as these require sequential autoregression and beam search decoding, while RankNovo's inference involves only a single attention forward pass.\\n\\nAs we scale from 2 to 6 base models in RankNovo, the inference time increases approximately linearly, with inference speed decreasing proportionally. This increased computational cost is an inherent characteristic of reranking frameworks and represents an unavoidable trade-off compared to single-model approaches. However, RankNovo effectively leverages this additional inference time to achieve superior performance levels unattainable by single models. 
This inference time-performance trade-off can be flexibly adjusted by modifying the number of candidates.\\n\\nIn the context of de novo peptide sequencing, RankNovo's significance lies in introducing a novel approach that allows researchers to optionally scale up inference time in exchange for enhanced performance. This represents the first such option in the field.\"}", "{\"title\": \"Happy to address remaining questions\", \"comment\": \"We are grateful for the reviewers' thoughtful feedback. We hope that our response has addressed all the issues raised by the reviewers, and that they would consider updating their scores accordingly. As we move towards the end of the public discussion phase, we welcome any additional questions or points requiring further clarification. Many thanks, The authors.\"}", "{\"title\": \"To Reviewer GLyc Part \\u2161\", \"comment\": \"**Q5**: In order to study the influence of the number of base models and each model, you select five subsets. Could you explain the basis for sequentially removing the strongest model? Why not remove the poorest ones?\", \"a5\": \"RankNovo can utilize up to six base models, forming 56 different subsets containing two to five base models. Evaluating all possible combinations would be computationally prohibitive and could lead to noisy and complex results. Thus, it was necessary to select representative subsets to study the influence of the number of base models used during training and inference. In our original experimental design, we created five subsets by sequentially removing the strongest model for two primary reasons:\\n1. **Comprehensive Experimental Insights**: This configuration allowed us to systematically assess performance improvements as the model set size increased from 2 to 6. During this process, **R-ByNovo**, **R-ContraNovo**, **ContraNovo**, and **ByNovo** were sequentially added during training. 
Observing consistent performance improvements with increasing model set size demonstrated that all four models positively contribute to RankNovo\\u2019s training. Furthermore, comparing RankNovo trained with two base models to the individual performance of **Casanovo-V2** or **R-Casanovo** highlighted the contributions of these two relatively weaker models. \\n2. **Pronounced Performance Differences**: Sequentially removing the strongest model was hypothesized to yield more pronounced performance differences between configurations, making overall trends more apparent.\\n\\nYour suggestion to remove the poorest-performing models is indeed valid. To investigate whether this alternative approach impacts the observed trends, we conducted additional experiments using your proposed configuration, with detailed results presented in **Appendix A.5.7**. \\n\\n**Findings**: \\n- The overall trend remains consistent under this alternative setting. Knowledge acquired by certain base models can still be adapted to other base models in a zero-shot manner during inference. Increasing the number of base models during training continues to yield better overall performance. \\n - However, as expected, the performance trends are less pronounced compared to our original configuration, where the strongest models were sequentially removed. This additional analysis strengthens our understanding of the interplay between base model selection and RankNovo\\u2019s overall performance. \\n\\n**Q6**: In Table 8, compared with the 5 models, the 6 models introduced the strongest ByNovo, but the average performance was reduced. 
What is the possible reason?\", \"a6\": [\"The primary performance metric for peptide *de novo* sequencing is **peptide recall**, not amino acid precision [1]. This is because, in proteomics experiments, a tandem mass spectrum (MS/MS) identification is considered successful only when all residues in the peptide are correctly sequenced. While amino acid-level metrics provide additional insights\\u2014particularly for earlier *de novo* sequencing algorithms evaluated on smaller datasets [2][3]\\u2014they are secondary to peptide recall. From this perspective, RankNovo demonstrates consistent improvement with the inclusion of additional base models during training, as evidenced by the monotonic increase in **peptide recall** from 0.647 to 0.660. The relationship between peptide recall and amino acid precision is not necessarily directly proportional. The slight decline in amino acid precision observed after incorporating ByNovo can be attributed to RankNovo's enhanced ability to identify shorter peptides, which increases peptide recall. However, this improvement may come at the expense of selecting less reliable candidates for certain longer peptides. To illustrate, consider a mini case observed in the Bacillus species dataset (V1). 
For three MS/MS spectra with the following true sequences:\", \"`KEYAVVNIDK`\", \"`IVFPEGIDER`\", \"`NTPGVTGFVGSAGSGSKPTPIIPGEAETIIKR`\"], \"the_five_model_version_predicted\": [\"`EYKAVVNLDK`\", \"`LVFPVAEDER`\", \"`NTPGVTGFVGSAGSGSKPTPLLPGEAETLLKR`\", \"In contrast, the six-model version predicted:\", \"`KEYAVVNIDK`\", \"`IVFPEGIDER`\", \"`N+0.984TPGVTGGATYAPTSKPTPLLPGEAETLLKR`\"], \"this_resulted_in_the_following_metrics\": [\"**Five-model version**: Peptide recall = 0.333, Amino acid precision = 0.884\", \"**Six-model version**: Peptide recall = 0.667, Amino acid precision = 0.846\", \"This case demonstrates that an increase in peptide recall can coincide with a slight decrease in amino acid precision, which is a common tradeoff in such scenarios. Ultimately, the improvement in peptide recall highlights the enhanced performance of RankNovo in achieving its primary objective.\"]}" ] }
86uYj8DcfK
DiffTell: A Comprehensive Dataset for Image Difference Captioning
[ "Zonglin Di", "Jing Shi", "Yifei Fan", "Hao Tan", "Alexander Black", "John Collomosse", "Yang Liu" ]
The image Difference Captioning (IDC) task is to describe the distinctions between two images. However, existing datasets do not offer comprehensive coverage across all image-difference categories. In this work, we introduce a more extensive dataset, \textit{DiffTell}, which encompasses various types of differences between images, including global image alterations, object-level changes, and text manipulations. \textit{DiffTell} includes both newly collected data and filtered data used in previous studies. Additionally, to scale up the data collection without prohibitive human labor costs, we explore the possibility of automatically filtering for quality control. We prove that both traditional methods and recent multimodal large language models (MLLMs) show improved performance on the IDC task after training on the \textit{DiffTell} dataset. We conducted extensive ablation studies to provide a thorough analysis of the performance gain from \textit{DiffTell}. Experiments show \textit{DiffTell} significantly enhances the availability of resources for IDC research, offering a more comprehensive foundation and benchmark for future investigations.
[ "Image Difference Caption", "Vision Language Task", "A Comprehensive Dataset" ]
Reject
https://openreview.net/pdf?id=86uYj8DcfK
https://openreview.net/forum?id=86uYj8DcfK
ICLR.cc/2025/Conference
2025
{ "note_id": [ "gMVbvcEo2H", "ah30Csk6lH", "V07P99sPOj", "RDhWW6SYBL", "FXEvBb8jTI", "1rCJpWDkAA" ], "note_type": [ "meta_review", "official_review", "official_review", "official_review", "decision", "official_review" ], "note_created": [ 1734501694719, 1729956653366, 1730622069152, 1730707701580, 1737523874400, 1730697527567 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7912/Area_Chair_po3t" ], [ "ICLR.cc/2025/Conference/Submission7912/Reviewer_nVjR" ], [ "ICLR.cc/2025/Conference/Submission7912/Reviewer_PJb5" ], [ "ICLR.cc/2025/Conference/Submission7912/Reviewer_p9ho" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7912/Reviewer_Adcx" ] ], "structured_content_str": [ "{\"metareview\": \"This paper presents a benchmark called DiffTell that is designed for the image difference captioning (IDC) task. The dataset consists of samples of a pair of images, each of which have been altered and the associated text caption describes the alteration. The authors train MLLMs on this dataset and show that the performance on the IDC task is improved.\\n\\nStrengths\\n1. The paper proposes a clever trick to scale up collection of an IDC task by using alterations for both pairs of images.\\n2. Existing IDC datasets are limited in diversity and size. DiffTell improves on both these aspects.\\n3. The experiments in this paper show that on two datasets (IER, PSBattle), training MLLMs on DiffTell improves performance.\\n\\nWeaknesses\\n1. The paper talks about IDC in the abstract (and other sections) but focusses only on the image manipulation aspect, and is tested only on those datasets. This makes the overall claim of the paper weaker and more general than supported by the experiments.\\n2. Since the data in DiffTell is synthetically generated, showing that training on it generalizes to real images is important. This is missing from the work. Given that DiffTell has short captions, this is very important.\\n3. 
Why do the authors need to combine DiffTell with the IER dataset for training? Does this point to some domain gap?\\n\\n\\nJustification of decision\\nGiven the weaknesses of the work (narrow focus on IER not IDC, synthetic training data not shown to work on real datasets), I recommend the paper for rejection. The majority opinion of the reviewers is the same.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers raised concerns about the task and the dataset proposed in the paper as being too narrow and not evaluated thoroughly. The authors responded suggesting that they would include a more general IDC benchmark in a revised version of the paper. However, this was not supplied at the time of rebuttal.\"}", "{\"summary\": \"In contrast to prior image difference captioning (IDC) datasets that are limited in quantity or styles, this paper introduces a large-scale IDC dataset containing 4 types of image differences (background change, local object change, style change, and text manipulation) and both real and synthetic images. To collect such a comprehensive and extensive dataset, a two-step collection pipeline is implemented, where humans are involved to ensure data quality.\\nEmpirical results show that models pretrained on this new dataset, DiffTell, achieve significant improvement across all evaluation metrics compared to models without such pretraining.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The proposed IDC captioning dataset, DiffTell, is more comprehensive and extensive than previous IDC captioning datasets.\", \"weaknesses\": \"1. The main concern of the proposed dataset is its image difference captions.\\n(a). The average length of the captions in DiffTell is 9.72 words only. Whether captions of this length can describe the image differences in detail remains a question. \\n(b). The diversity of the captions is another concern. 
A simple and fixed language template is used for generating descriptions of image pairs from the COCO and MARIO-10M datasets, which contribute over half of DiffTell\\u2019s data, potentially restricting the diversity of descriptions.\\n\\n2. There is insufficient clarity in the experiments, and some results appear ambiguous. \\n(a). In Table 3, it\\u2019s unclear whether models are trained on DiffTell first and then finetuned on the IER training set or if they are trained on both DiffTell and the IER training set simultaneously. \\n(b). How does the model perform on the IER testing set in a zero-shot setting (i.e., finetune the model on DiffTell only)? \\n(c). In Figure 3 (a), the model trained with IER+MARIO-10M shows significant improvements across all categories compared to those trained on IER alone, but MARIO-10M provides Text category data only. Where does improvement in other categories come from? Similarly, InstructP2P contributes data for the Background category and none for Text, but the model trained with IER+InstructP2P improves significantly in Text but performs worse in Background. Is there any further explanation for this? \\n(d). In Table 9, it's unclear why OpenFlamingo-3B performs worse in the few-shot setting than in the zero-shot setting.\", \"questions\": \"1. Why is the CLEVR dataset not considered in collecting DiffTell? Although CLEVR contains images in a single domain (toy bricks), it could enhance models\\u2019 3D spatial understanding.\\n2. What model is used to generate masks for the COCO dataset?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a new dataset to boost existing methods on Image Difference Captioning tasks. 
The images in the dataset are collected from publicly available sources, while the annotations are human-filtered.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The DiffTell dataset is a large dataset for the Image Difference Captioning (IDC) task, which considers various types of differences between images. After incorporating DiffTell dataset in training process, existing methods can achieve better performance on IDC tasks.\", \"weaknesses\": \"As the CLEVR-Change and DiffTell datasets are of a similar scale, both of them containing 70k samples, the paper should include a comparative analysis of models trained on these two datasets.\\n\\nAccording to Figure 3(b), the subset that considers differences in text is entirely from MARIO-10M. However, as shown in Figure 3(a), the model trained on IER+InstructP2P achieves higher performance on captioning difference on text than the model trained on IER+MARIO-10M. The paper should provide an analysis of this discrepancy.\\n\\nAdditionally, the \\u201ci\\u201d in the first \\u201cimage\\u201d in the first sentence of the abstract should be capitalized.\", \"questions\": \"Please see the questions in Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents the DiffTell dataset, developed for the image difference captioning (IDC) task. DiffTell focused on image pairs that exhibit various manipulations, including both synthesized and Photoshopped images. The dataset incorporates four types of image differences: background change, local object change, text manipulation, and style change.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": \"1. The dataset encompasses a diverse range of image modification types from various sources, which increases the dataset\\u2019s variability in modification categories.\\n2. 
DiffTell may have practical applications in detecting manipulated images and generating descriptive captions for them, potentially contributing to fields like image forgery detection and multimedia forensics.\", \"weaknesses\": \"1. The DiffTell dataset is limited to synthetic or edited images, whereas typical IDC tasks more often involve pairs of real images, as seen in datasets like Spot-the-Diff and Birds-to-Words. By narrowly defining the IDC task as one limited to manipulated images, the paper restricts its findings and contributions to only synthetic or Photoshopped cases, which may limit the general applicability of the conclusions in real-world scenarios.\\n \\n2. The differences in image pairs within DiffTell, which rely entirely on manipulations, are more easily distinguishable, such as through pixel-by-pixel subtraction. This raises concerns that models trained on DiffTell could \\u201ccheat\\u201d by learning these manipulated differences rather than truly comparing two images. Thus, these models may not perform well in identifying nuanced differences in real image pairs.\\n\\n3. The paper\\u2019s experimental validation is limited to IER and PSBattle datasets, both of which are also manipulation-focused. However, it excludes testing on more general IDC datasets like Spot-the-Diff or Birds-to-Words, which would offer insight into the model's efficacy with real-world image pairs. \\n\\n4. This paper does not address prior work sufficiently, particularly the recent OneDiff study [1], which also explored IDC with a variety of data sources and employed multimodal large language models. A comparison of DiffTell with OneDiff in terms of contributions and distinctions is essential to demonstrate DiffTell's novel aspects and improvement over existing datasets.\\n\\n[1] Hu, E., Guo, L., Yue, T., Zhao, Z., Xue, S., & Liu, J. (2024). OneDiff: A Generalist Model for Image Difference. arXiv preprint arXiv:2407.05645.\", \"questions\": \"1. 
Considering the limitations identified above, specifically the restricted focus on manipulated images, how does the proposed dataset and approach aim to generalize to IDC tasks involving real, unaltered image pairs? Would DiffTell-trained models require additional fine-tuning on real image pairs for effective real-world application?\\n\\n2. Could the authors clarify how DiffTell fundamentally differs from OneDiff or other previous works, particularly in dataset construction, data quality, and IDC model performance?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper introduces the DiffTell dataset for the Image Difference Captioning (IDC) task, which involves describing the distinctions between two images. The existing datasets are noted to lack comprehensive coverage of various image-difference categories, prompting the creation of DiffTell. This dataset includes a wide range of differences, such as global image alterations, object-level changes, and text manipulations, and combines newly collected data with filtered data from prior studies. To efficiently scale data collection and maintain quality, the authors investigate automatic filtering methods. The study shows that training on the DiffTell dataset improves performance for both traditional methods and previous multimodal large language models in the IDC task.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The author's argumentation is well-structured, with logical flow and thorough literature review.\\n2. The dataset constructed in this paper can advance the field of image difference analysis, in which there is more comprehensive coverage of various image-difference categories. \\n3. Some experiments demonstrated the effectiveness of the constructed dataset.\", \"weaknesses\": \"1. 
While Tables 1 and 2 present comparisons between the proposed DiffTell dataset and previous related works, they do not include a comparison of the length of image difference captions, which is an important aspect to consider.\\n2. In Table 3, the authors include previous classic MLLM models as baselines, but it would be beneficial to supplement these baselines with more recent MLLM works to fully demonstrate the advantages of the DiffTell dataset for the image difference captioning task.\\n3. Figures 4 and 5 illustrate the model capability gains of the DiffTell dataset for achieving image difference captioning, but showcasing some failure cases would more clearly highlight the current limitations of the DiffTell dataset.\\n4. Although the paper shows performance improvements from using the data with an automatic classifier in Table 5, it does not quantify the accuracy of the automatic classifier in filtering the data.\\n5. This article focuses on the construction of a dedicated task dataset, while the task itself is not new, and no new methods are presented in the paper. From the perspective of innovation, the contribution is limited.\", \"questions\": \"1. The authors should provide a comparison of the lengths of image difference captions in the DiffTell dataset versus those in previous datasets.\\n2. It would strengthen my evaluation to include additional recent MLLM models as baselines in Table 3. Supplementing classic models with more contemporary ones would better showcase the advantages of the DiffTell dataset.\\n3. Adding examples of failure cases in Figures 4 and 5 would illustrate the limitations of the DiffTell dataset. This would provide valuable insights into areas for improvement and highlight contexts where the DiffTell dataset may struggle.\\n4. The performance improvements shown in Table 5 are compelling. However, quantifying the classification accuracy of the automatic classifier used for filtering the data is necessary. 
Understanding the classifier's effectiveness would enhance the credibility of the reported performance gains.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
86hNGGo1CU
Toward Efficient Kernel-Based Solvers for Nonlinear PDEs
[ "Zhitong Xu", "Da Long", "Yiming Xu", "Guang Yang", "Shandian Zhe", "Houman Owhadi" ]
This paper introduces a novel kernel learning framework toward efficiently solving nonlinear partial differential equations (PDEs). In contrast to the state-of-the-art kernel solver that embeds differential operators within kernels, posing challenges with a large number of collocation points, our approach eliminates these operators from the kernel. We model the solution using a standard kernel interpolation form and differentiate the interpolant to compute the derivatives. Our framework obviates the need for complex Gram matrix construction between solutions and their derivatives, allowing for a straightforward implementation and scalable computation. As an instance, we allocate the collocation points on a grid and adopt a product kernel, which yields a Kronecker product structure in the interpolation. This structure enables us to avoid computing the full Gram matrix, reducing costs and scaling efficiently to a large number of collocation points. We provide a proof of the convergence and rate analysis of our method under appropriate regularity assumptions. In numerical experiments, we demonstrate the advantages of our method in solving several benchmark PDEs.
[ "Kernel methods", "Non-Linear PDE" ]
Reject
https://openreview.net/pdf?id=86hNGGo1CU
https://openreview.net/forum?id=86hNGGo1CU
ICLR.cc/2025/Conference
2025
{ "note_id": [ "pisopn7u3q", "ndT9YcNXj7", "kg7gMsz6ju", "jbbkJcaa2q", "VtIS9wUzU2", "UgI6OepqW3", "TqO6Un58w6", "MFGtWywvqR", "9lVh5b4ZCW", "4CLkS7Envy", "0kEpRFRmrA" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_review", "official_comment", "official_comment", "meta_review" ], "note_created": [ 1737524104300, 1732252406206, 1732552047006, 1732252754377, 1731110025034, 1730231275324, 1730780367347, 1730668770634, 1732299570067, 1732252718753, 1734651696684 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11115/Authors" ], [ "ICLR.cc/2025/Conference/Submission11115/Reviewer_vnHZ" ], [ "ICLR.cc/2025/Conference/Submission11115/Authors" ], [ "ICLR.cc/2025/Conference/Submission11115/Reviewer_Krce" ], [ "ICLR.cc/2025/Conference/Submission11115/Reviewer_vnHZ" ], [ "ICLR.cc/2025/Conference/Submission11115/Reviewer_bwZk" ], [ "ICLR.cc/2025/Conference/Submission11115/Reviewer_JusD" ], [ "ICLR.cc/2025/Conference/Submission11115/Authors" ], [ "ICLR.cc/2025/Conference/Submission11115/Authors" ], [ "ICLR.cc/2025/Conference/Submission11115/Area_Chair_TpSn" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to vnHZ\", \"comment\": \"We thank the reviewer for their time. However, the reviewer seems to completely misunderstand the motivation and contribution of our work. The reviewer is even unclear what the major baseline method DAKS (Chen et al., 2021a) is, which directly motivates our work, and has been explained and highlighted in numerous places in our paper, e.g., Line 34-45 and whole Section 2.\\n\\n>C1: I just cannot see any new idea with this formulation. 
BTW, if the PDE is nonlinear, I believe RBF will result in a minimization problem as well\", \"r1\": \"**It appears the reviewer has misunderstood our motivation and contribution, resulting in a superficial assessment**. The objective of our work is **not** to propose a new framework to replace kernel ridge regression. As highlighted throughout the paper, our aim is to develop an efficient kernel-based approach for PDE solving that (1) is computationally efficient and convenient to implement, and (2) provides convergence guarantees along with rate analysis. Evaluating a method without understanding its objectives is unproductive. Following the reviewer\\u2019s logic, the baseline method, DAKS (Chen et al., 2021a), could also be dismissed as merely a variation of kernel ridge regression, lacking \\\"new ideas\\\". By this reasoning, numerous neural network papers could similarly be deemed unoriginal just because they employ neural network formulations.\\n\\nWhile the reviewer proposed equations relevant to solving linear PDEs, our work focuses on developing a nonlinear PDE solver (see our test benchmarks). The context and results are entirely different, as nonlinear cases are much more challenging. For instance, in the nonlinear setting, it is generally not possible to achieve the optimal approximation in the interpolation form specified in Eq. (6) in the review comments.\", \"the_real_challenge_is\": \"for **nonlinear** PDEs, if one applies kernel ridge regression directly, justifying **optimality and convergence** becomes very difficult. The reviewer might be aware of the theoretical foundation of kernel ridge regression. When we solve an optimal recovery problem framed as minimizing $\\left\\lVert u\\right\\rVert_{RKHS}$ s.t. $u(z_j) = y_j (j=1 \\ldots M)$, the representer theorem guarantees that the optimal solution takes an interpolation form, $u(x) = \\kappa(x, \\mathbf{Z}) \\mathbf{\\alpha}$. 
Only with this form can we express the RKHS norm of $u$ as $\\mathbf{\\alpha}^\\top \\mathbf{K} \\mathbf{\\alpha}$. Based on this result, the optimal recovery problem can be reformulated as a soft-constraint optimization problem, resulting in the well-known **kernel ridge regression**:\\n$\\min_\\mathbf{\\alpha} \\frac{1}{M}\\sum_j (u(\\mathbf{x}_j) - y_j)^2 + \\lambda\\mathbf{\\alpha}^\\top \\mathbf{K} \\mathbf{\\alpha}$.\\n\\nHowever, if you impose nonlinear constraints, such as $u^2(z_j) = y_j$, **the optimal solution no longer takes the simple linear form, and applying kernel ridge regression loses the foundation!**\\n\\nTo address this, the prior work DAKS (Chen et al., 2021a) introduces a nested optimization formulation (see Section 2 of our paper for a detailed explanation). This approach ensures that at the inner level, the problem aligns with the standard optimal recovery form, allowing for a claim that, given derivative values $z^j_m$, the optimal solution for $u$ still takes the **standard linear/interpolation** form. However, this formulation requires augmenting the RKHS with derivative operators over the kernels, which inevitably introduces kernel derivative blocks in the kernel matrix.\\n\\nOur contribution, in contrast, is to maintain the favorable interpolation form without derivative augmentation (see Eq. (6) in our paper), allowing for a simpler implementation and more efficient computations while **reconstructing theoretical guarantees**! Achieving this required moving beyond the classical optimal recovery framework, as we constrain $u$ to a reduced search space within the RKHS. We have rigorously demonstrated that, even in this reduced space, convergence to the true PDE solution is achievable under mild regularity assumptions commonly employed in PDE convergence analysis. Moreover, our convergence rate remains comparable to that of DAKS. 
For the detailed proof, see Sections 4, A, and B, where we present a technical and highly nontrivial analysis.\"}", "{\"comment\": \"I admitted that I am very confused reading this paper since the presentation does not help improving my understanding either. While the presentation states that the proposed work is trying to improve upon DAKS, but then it is very unclear to me why one wants to regress to a set of features that include C_{ij} corresponds to derivatives of the kernel as in (3)-(5) in the manuscript. Effectively, the PDE constraints readily provide matrix $A = K''+\\\\alpha K$. At the same time, such a question does not seem to be fair since I am not reviewing DAKS. Second, the proposed new algorithm in this paper now seems to abandon the needs of regressing over C_{ij}, which is how it should be done in the first place. But then I am bothered with why would one devise an algorithm to beat previous method that is unclear. This stems me to look at what is actually being proposed if one solves a simple linear PDE, which helps me clarify what this paper seems to be about.\\n\\nResponding to C2, I am still not convinced why one should do the tensorial kernels. There must be cases when such idea is advantageous and/or not advantageous beyond computational costs. I understand your motivation but the current answer seems to be based on empirical evident of a few examples. \\n\\nResponding to C3, this comparison is not complete. The fact that minimizing over $\\\\eta$ is better than $\\\\alpha$ requires a lot of explanation. There are many factors that can affect the solutions (including optimization scheme, initial conditions, etc) and whether this phenomenon occurs only on 1 example is unclear.\"}", "{\"title\": \"Response to vnHZ\", \"comment\": \">C4: Can you clarify what DAKS is and what its relationship to their current work?\", \"r4\": \"As highlighted in Lines 353-355, DAKS (Chen et al., 2021a) is the prior work that motivates our approach. 
DAKS is based on the nested optimization framework introduced in Eq. (2) of Section 2. It incorporates a set of derivative values \\u2014 more generally, the linear operators within the PDE, evaluated at the collocation points, denoted as $z^j_m$ in (2). DAKS then uses kernel interpolation to construct the solution approximation, as shown in Eq. (3), where the Gram matrix consists of sub-blocks computed by applying linear operators to the kernels (see Eqs. (4) and (5)). **All relevant details have been provided in Section 2**\"}", "{\"summary\": \"The paper introduces and studies a novel kernel-based method for the approximation of the solution of a broad class of non-linear PDEs. The authors discuss computational aspects and provide error estimates in Sobolev spaces of appropriate regularity.\\nThe method is a modification (actually, a significant simplification) of an existing method, but the novelty is relevant as it provides a much more efficient solution method, and proves corresponding modified theoretical guarantees on the error.\", \"soundness\": \"2\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": [\"The topic is of current interest, and the results are clearly presented and discussed.\", \"The new method is computationally efficient and rather simple, when compared to other related approaches.\", \"A throughout theoretical analysis is provided, even if one step needs clarification.\"], \"weaknesses\": [\"The paper lacks a sufficient discussion of the existing work, especially as it does not acknowledge a large body of literature on kernel-based (symmetric and non symmetric) collocation methods. In particular the last paragraph of Section 5 (Related Work) states that the novelty over existing methods is the use of nodal values instead of coefficients, and in the use of an optimization problem instead of the solution of a linear system. Both these aspects are treated in the literature, sometimes also for nonlinear problem, see e.g. 
[1-5], even if certainly not for such general problems as (1). I would suggest to expand the discussion of existing work, and also clearly articulate how the approach differs from or improves upon these existing methods.\", \"The relation between Assumption 4.1 and the existence and uniqueness of a strong solution of (1) is unclear. This relation would be needed to interpret Lemma 4.2 and Proposition 4.3 (i.e., given a PDE (1), which $k$, $t$, $\\\\tau$ should one expect in (14)).\"], \"this_fact_is_reflected_also_in_the_numerical_experiments\": \"It's unclear if these examples fit into the assumptions of the theoretical results (having strong solutions of sufficient regularity), and if so, with which values of $k$, $t$, $\\\\tau$. In more general terms, the numerical experiments should address the expected convergence rates, other than just errors with fixed discretisations.\\n\\n- I'm not convinced by the argument leading to (32) in the proof of Lemma 4.2 in Appendix A. There is usually an issue in using a sampling inequality in this way. Namely, one has a domain $\\\\mathcal M$ (in this case $\\\\mathcal M=\\\\mathcal T_i$), which has a corresponding critical value $h_0$ (depending on $\\\\mathcal T_i$ via its diameter and boundary). Then, taking sufficiently many points in $\\\\mathcal M$ one can have $h_i<h_0$ and thus apply a sampling inequality like the one of Proposition A.1 in (Batlle et al., 2023). Here however the only collocation point in $\\\\mathcal T_i$ is $x_i$, and $h_i$ can not be made small without changing $\\\\mathcal T_i$ itself, and thus $h_0$. Some oversampling inside $\\\\mathcal T_i$ is usually needed. I suggest to clarify this point.\\n\\n[1] K. B\\u00f6hmer, R. Schaback, A Nonlinear Discretization Theory for Meshfree Collocation Methods applied to Quasilinear Elliptic Equations, ZAMM (2020)\\n\\n[2] V. Bayona et al., RBF-FD formulas and convergence properties, Journal of Computational Physics (2010)\\n\\n[3] I. 
Tominec, Residual Viscosity Stabilized RBF-FD Methods for Solving Nonlinear Conservation Laws, J Sci. Comp. (2022)\\n\\n[4] Ka Chun Cheung et al, H^2-Convergence of Least-Squares Kernel Collocation Methods, SINUM (2018)\\n\\n[5] N. Flyer et al., A guide to RBF-generated finite differences for nonlinear transport: Shallow water simulations on a sphere, J. Comp. Physics (2012)\", \"questions\": [\"Apart from the points discussed above, there are the following minor points:\", \"Around and after (2), there is some notational confusion between $z^j_m$ (a value) and $z^j$ (a function, later evaluated at $x_m$).\", \"Eq. (23) in Appendix A: The norm in the lhs should be over $\\\\Omega$, not $\\\\mathcal X$. Moreover $u_M$ should be $u_M^*$.\", \"Eq. (26), first row: Arguments $x_m$ are missing in the lhs.\", \"Starting from (7) and in (many of) the following occurrences, the index in the second sum starts from $m=M_+1$, instead of $m=M_{\\\\Omega}+1$.\", \"Use of sampling inequalities: I don't understand why the one of (Batlle et al., 2023) is needed in eq. (21) and later, and not a sampling inequality on the flat Omega (as e.g. [6]).\", \"[6] H. Wendland and C. Rieger, Approximate Interpolation with Applications to Selecting Smoothing Parameters, Numerische Mathematik (2005)\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work considers a slightly different way to solve PDE via the kernel-ridge-regression. To put this work in context, see below.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"This work considers a slightly different way to solve PDE via the kernel-ridge-regression. 
See the assessment below.\", \"weaknesses\": \"To put this work in the context of RBF (e.g., [2,3]), let's consider solving a simple PDE problem,\\n$$\\n\\\\\\\\mathcal{P}(u) := -u'' + u = f, \\\\\\\\quad (1) %\\\\\\\\label{PDE}\\n$$\\non a one-dimensional periodic domain (ignoring the boundary condition for simplicity). Using the notation in the paper, suppose we let the solution be\\n$$\\nu(x) = k(x,\\\\mathcal{M}) \\\\alpha, \\\\quad (2)%\\\\\\\\label{ansatz}\\n$$\\nwhere $$\\\\\\\\mathcal{M} = \\\\\\\\{\\\\mathbf{x}\\\\_1,\\\\ldots,\\\\mathbf{x}\\\\_M\\\\\\\\}$$ and $k$ denotes any kernel (which is positive definite and it corresponds to the RKHS space $\\\\mathcal{H}$), which norm is defined as,\\n$$\\n\\\\| u \\\\|^2_{\\\\mathcal{H}} = \\\\alpha \\\\mathbf{K} \\\\alpha,\\n$$\\n\\nwhere $\\\\mathbf{K}= k(\\\\mathcal{M},\\\\mathcal{M})\\\\in \\\\mathbb{R}^{M\\\\times M}$ is the Gram matrix of $k$. \\n\\nInserting the ansatz in (2) into the PDE in (1), one can deduce the following linear system\\n$$\\n\\\\mathbf{A}\\\\alpha := (-\\\\mathbf{K}''+c\\\\mathbf{K}) \\\\alpha = \\\\mathbf{f}.\\n$$\\nwhere $\\\\mathbf{K}''$ is a shorthand notation for the Gram matrix corresponds to $k\\\"$ and $\\\\mathbf{f} = (f(\\\\mathbf{x}\\\\_1),\\\\ldots,f(\\\\mathbf{x}\\\\_M))$. At this point, let's just ignore the computational feasibility. Since $\\\\mathbf{A}$ is invertible, one can simply solve this $M\\\\times M$ linear problem. One can write the solution as $u(x) = k(x,\\\\mathcal{M})\\\\mathbf{A}^{-1}\\\\mathbf{f}.$ I am not sure why solving this problem severely worse than the proposed method if the problem is invertible as noted in the last paragraph in Section 5. 
Many convergence results have been reported in the literature (see [2,3] and the references therein).\\n\\nNow, let's look at the Kernel Ridge Regression for this PDE problem, that is, we solve:\\n$$\\n\\\\\\\\min_{u\\\\\\\\in \\\\\\\\mathcal{H}} \\\\\\\\frac{1}{M}\\\\\\\\sum_{i=1}^M (\\\\\\\\mathcal{P}(u)(\\\\\\\\mathbf{x}\\\\_i) - f\\\\_i)^2 + \\\\\\\\lambda \\\\\\\\|u\\\\\\\\|^2_{\\\\\\\\mathcal{H}},\\n$$\\nInserting the ansatz in (2), we rewrite this optimization problem as,\\n$$\\n\\\\min_\\\\alpha \\\\frac{1}{M} (\\\\mathbf{A}\\\\alpha - \\\\mathbf{f})^\\\\top (\\\\mathbf{A}\\\\alpha - \\\\mathbf{f}) + \\\\lambda \\\\alpha \\\\mathbf{K}\\\\alpha. (2a)\\n$$\\nTaking the derivative and setting it to zero, we arrive at solving the following linear problem,\\n$$\\n(\\\\mathbf{A}^2 + M\\\\lambda \\\\mathbf{K}) \\\\alpha = \\\\mathbf{A} f, \\\\quad (3)%\\\\label{linearproblem1}\\n$$\\nusing the fact that $\\\\mathbf{A}$ is symmetric. If this is invertible, one can simply write the solution as,\\n$$\\nu(x) = k(x,\\\\mathcal{M})(\\\\mathbf{A}^2 + M\\\\lambda \\\\mathbf{K})^{-1}\\\\mathbf{A} f.\\\\quad (4) %\\\\label{sol1}\\n$$\\nThe key idea in this paper is to consider the solution of the following form, \\n$$\\nu(x) = k(x,\\\\mathcal{M}) \\\\mathbf{K}^{-1}\\\\eta.\\\\quad (5)%\\\\label{ansatz2}\\n$$\\nIn this case, the minimization problem becomes,\\n$$\\n\\\\min_\\\\eta \\\\frac{1}{M} (\\\\mathbf{A}\\\\mathbf{K}^{-1}\\\\eta - \\\\mathbf{f})^\\\\top (\\\\mathbf{A}\\\\mathbf{K}^{-1}\\\\eta - \\\\mathbf{f}) + \\\\lambda \\\\eta \\\\mathbf{K}^{-1}\\\\eta,\\\\quad (6)\\n$$\\nand following the standard calculus, the solution satisfies,\\n$$\\n(\\\\mathbf{K}^{-1}\\\\mathbf{A}^2\\\\mathbf{K}^{-1} + M \\\\lambda \\\\mathbf{K}^{-1}) \\\\eta =\\\\mathbf{K}^{-1} \\\\mathbf{A}\\\\mathbf{f},\\n$$\\nwhich is equivalent to multiplying (3) from the left by $\\\\mathbf{K}^{-1}$ and letting $\\\\alpha = \\\\mathbf{K}^{-1}\\\\eta$. 
If we solve this problem, we end up with\\n$$\\nu(x) = k(x,\\\\mathcal{M})\\\\mathbf{K}^{-1}(\\\\mathbf{K}^{-1}\\\\mathbf{A}^2\\\\mathbf{K}^{-1} + M\\\\lambda \\\\mathbf{K}^{-1})^{-1}\\\\mathbf{K}^{-1} \\\\mathbf{A} f, %\\\\quad (7)\\\\label{soln2}\\n$$\\nwhich I believe is identical to (4) if $\\\\mathbf{K}$ is invertible. In order to write the proposed ansatz in (5), $\\\\mathbf{K}$ is assumed to be invertible. In fact, the author considers speeding up the inversion of $\\\\mathbf{K}$ using an existing method in the literature, as reported on p.4.\\n\\nWhile I fully understand that the authors consider a minimization algorithm to solve this problem, I just cannot see any new idea with this formulation. BTW, if the PDE is nonlinear, I believe RBF will result in a minimization problem as well. \\n\\nBeyond this issue, I also do not understand why the choice of tensorial kernel makes any sense except for computational convenience. Basically, the proposed idea in Eq.(9) in the manuscript is to consider the tensor product of kernels that compare scalars (a component of the data). So, the solutions are chosen in the space of functions induced by a kernel that only compares distances in one dimension. The classical approach is to choose a kernel that compares distances in $\\\\mathbb{R}^d$. Mathematically, it is unclear why the proposed choice should always be a better choice aside from the numerical advantage. \\n\\nFinally, the numerical simulations are not convincing. I am not sure what DAKS is. It is not surprising that the scheme is more accurate than PINNs, as it is well known that neural-network solutions are not accurate anyway. Lastly, the fact that this scheme beats the finite-difference scheme is also not so surprising based on the classical Numerical Analysis understanding of the approximation of derivatives. 
One can also check the following paper, which reported the advantage of RBF over finite differences [1].\\n\\nBased on these (comparisons with well-established classical literature), I am not sure this paper merits consideration for publication in ICLR.\\n\\nReferences.\\n[1] B. Fornberg. The pseudospectral method: Comparisons with finite differences for the elastic wave equation.\\n Geophysics, 52(4):483--501, 1987.\\n\\n[2] B. Fornberg and N. Flyer. A primer on radial basis functions with applications to the geosciences.\\nSIAM, 2015.\\n\\n[3] B. Fornberg and N. Flyer. Solving PDEs with radial basis functions. Acta Numerica, 24:215--258, 2015.\", \"questions\": \"Based on the comments above, here are several questions that may be useful for the authors if they decide to revise the paper:\\n1. Why is solving the minimization problem in (6) better than solving (2a)? This needs some clarification. Can you provide numerical evidence in terms of accuracy and efficiency? The equation numbers here correspond to the equations labelled in the comments above.\\n2. Are there specific types of PDEs or solution structures for which the proposed tensorial kernel approach might be particularly well-suited?\\n3. Can you provide any theoretical insights or empirical evidence demonstrating advantages of this tensorial approach in terms of solution quality or generalization ability?\\n4. Can you clarify what DAKS is and what its relationship is to the current work?\\n5. Can you provide comparisons with additional state-of-the-art methods beyond PINNs and finite-difference schemes?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}
The manuscript is not well written, so the reader cannot clearly identify the authors' motivation.\\n2. The theoretical part has not been shown rigorously. \\n3. The experiment part: the description is not clear\", \"questions\": \"If the authors can rewrite the manuscript more clearly, I would like to change the grade.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces a kernel learning framework for solving nonlinear partial differential equations (PDEs). Unlike a previous kernel solver (Chen et al., 2021a) that embeds differential operators within the kernel matrix, this new approach directly applies the operator to the kernel function, resulting in a hybrid method between traditional physics-informed neural networks (PINNs) and kernel methods. Additionally, the authors propose placing collocation points on a grid and utilizing a product kernel, so that the Gram matrix can be decomposed as a Kronecker product, significantly reducing computational costs.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"The paper is well presented. The literature review seems adequate and I especially appreciated the introduction to the previous approach by (Chen et al., 2021a). Furthermore, I found particularly clever the idea of placing collocation points on a grid and utilizing a product kernel, so that the kernel decomposes into a Kronecker product.\", \"weaknesses\": \"I did not like the trade-off of making the kernel just a constant smaller (for instance, 1/3 smaller for the Burgers' equation) at the cost of having to do an optimization similar to how regular PINNs are trained. 
Of course a 27 (3x3x3) times speed-up is not negligible, but with the authors' approach, we lose the ability to do everything with fast linear algebra operations, and still do not get the advantages of training regular PINNs, where the collocation points can be randomly positioned and using stochastic gradients allows for using fewer collocation points per epoch. The product kernel trick was a nice idea, but I am convinced that it can also be applied in Chen et al., 2021a.\\n\\nI am also skeptical of the timings reported by the authors in the appendix. In the experimental setup they say SKS is run using the ADAM optimizer for 1e6 epochs, but in Table 9 in the appendix, where they report time per iteration and total, the ratio is not 1e6. I also found it very surprising that the time per iteration is so small, especially compared to those of PINNs: I would think evaluating the derivative of the PINNs should be faster than evaluating these derivatives on the kernel version, even with the product trick.\", \"questions\": [\"Why learn $\\\\eta$, instead of $K_{MM}^{-1} \\\\eta$? Both are vectors of the same dimension, so it should not make a difference. Evaluating the RKHS regularization term is also possible by calculating $(K_{MM}^{-1} \\\\eta)^T K_{MM} K_{MM}^{-1} \\\\eta$.\", \"In the numerical experiments, the collocation points should be the same for SKS and DAKS, as I am convinced the higher errors of DAKS arise from random sampling: with random sampling, there may be \\\"holes\\\" in the domain without collocation points, where the error is bigger.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to JusD\", \"comment\": \"We thank the reviewer for their time, and we would like to reply with the following.\\n\\n> W1: I did not like the trade-off of making the kernel just a constant smaller. ... 
The product kernel trick was a nice idea, but I am convinced that it can also be applied in Chen et al., 2021a.\\n\\nIn Chen et al., each block of the full Gram matrix can be converted into a Kronecker product if a product kernel is used. However, the full Gram matrix itself cannot be decomposed into a Kronecker product. Thus, we break the big Gram matrix into smaller ones and exploit the Kronecker structure for efficient computation.\\n\\n> W2: In the experimental setup they say SKS is run using the ADAM optimizer for 1e6 epochs, but in Table 9 in the appendix, where they report time per iteration and total, the ratio is not 1e6.\\n\\nFor the reported runtime, the total time is calculated with an early stopping criterion: we stopped the optimization if the performance stopped improving for 1000 iterations. This is the reason why our method's total time is not 1e6 * per-iter time. \\n\\n> W3: I also found it very surprising that the time per iteration is so small, especially compared to those of PINNs\", \"we_believe_our_method_time_per_iteration_is_surprisingly_small_compared_to_pinns_is_due_to_the_following\": \"1. Our method is based on linear operations, and we do not calculate gradients explicitly, whereas PINNs rely on autograd.\\n2. Optimization in our method is simpler and more convenient for modern libraries. Our method only needs to optimize the parameter \\\\eta, whereas PINNs need to optimize through computation graphs on derivative terms.\\n3. Our method is non-parametric; the only tunable parameters are the lengthscale parameters. We further improved our computational efficiency by fixing the kernel matrix terms at the first iteration.\\n\\n\\n>Q1: Why learn $\\\\eta$, instead of $K^{-1}\\\\eta$?\\n\\nGreat question, we actually tried this approach! But the performance is subpar; we believe this is because $K^{-1}\\\\eta$ is hard to optimize. 
Below, we present results from solving the Burgers' equation with $\\nu=0.02$ to illustrate this effect.\\n\\n| \\t| 1200 \\t| 2400 \\t| 4800 \\t|\\n|------------------------|----------|----------|----------|\\n| Our method \\t| 5.40E-03 \\t| 7.83E-04 \\t| 3.21E-04 \\t|\\n| Reviewer's suggestion \\t| 2.80E-02 \\t| 1.56E-03 \\t| 7.14E-04 \\t|\\n\\n> Q2: In the numerical experiments, the collocation points should be the same for SKS and DAKS, as I am convinced the higher errors of DAKS arise from random sampling: with random sampling, there may be \\\"holes\\\" in the domain without collocation points, where the error is bigger.\\n\\nWe actually tested with both random sampling and grid sampling, and random sampling is consistently better than grid sampling; thus, we used 5 different seeds and reported the average.\\n\\nBelow, we present results for the Burgers' equation with $\\nu=0.02$ using two different sampling methods.\\n\\n| Sampling method \\t| 600 \\t| 1200 \\t| 2400 \\t| 4800 \\t|\\n|-----------------|----------|----------|----------|----------|\\n| Grid \\t| 4.09E-01 \\t| 3.85E-01 \\t| 4.27E-02 \\t| 5.67E-02 \\t|\\n| Random \\t| 1.75E-02 \\t| 7.90E-03 \\t| 8.65E-04 \\t| 9.76E-05 \\t|\"}", "{\"title\": \"Response to vnHZ\", \"comment\": \">C2: Beyond this issue, I also do not understand why the choice of tensorial kernel makes any sense except for computational convenience\", \"r2\": \"As we have emphasized throughout the paper, our motivation for introducing the product kernel is rooted in reducing computational costs and enabling the use of a large number of collocation points --- strictly from a computational standpoint. We do **not** claim any additional implications beyond computational convenience. **It is surprising to see the reviewer\\u2019s strong focus on points that are tangential to our objectives, indicating a misunderstanding of our motivation**. 
Nevertheless, the product kernel corresponds to a tensor product structure in the latent feature space, consistent with the tensor product approach commonly employed in numerical methods [1].\\n\\n[1] ARNOLD, D. N., BOFFI, D., and BONIZZONI, F. (2012). Tensor product finite element differential\\nforms and their approximation properties. arXiv preprint arXiv:1212.6559.\\n\\n>C3: Why is solving the minimization problem in (6) better than solving (2a)? This needs some clarification. Can you provide numerical evidence in terms of accuracy and efficiency? The equation numbers here correspond to the equations labelled in the comments above.\", \"r3\": \"We did try direct optimization of the coefficients $\\\\mathbf{\\\\alpha}$ instead of using the approach in Eq. (6). However, we observed that this led to a solution error that increased by 1--2 orders of magnitude. Our empirical findings suggest that each coefficient $\\\\mathbf{\\\\alpha}$ globally influences the solution approximation, to which both the boundary condition fit and the PDE residuals are highly sensitive. As a result, small adjustments of the coefficients can cause significant perturbations, leading to an imbalance between these components.\\nThis makes the optimization process much more challenging than directly estimating the local solution values at the collocation points. Below, we present results from solving the Burgers' equation with $\\\\nu=0.02$ to illustrate this effect.\\n\\n| \\t | 1200 \\t| 2400 \\t| 4800 \\t|\\n|------------------------|----------|----------|----------|\\n| Our method \\t| 5.40E-03 \\t| 7.83E-04 \\t| 3.21E-04 \\t|\\n| Reviewer's suggestion \\t| 2.80E-02 \\t| 1.56E-03 \\t| 7.14E-04 \\t|\"}
Unfortunately, two reviewers argue that the proposed methodology does not introduce fundamentally new ideas. The formulations presented appear to replicate well-established methods, particularly in the context of radial basis functions. One of these two reviewers is more positive, acknowledging that the method is a modification (actually, a significant simplification) of an existing method, but the novelty is relevant as it provides a much more efficient solution method, and proves corresponding modified theoretical guarantees on the error. Nevertheless, they point out that the paper lacks a sufficient discussion of the existing work, especially as it does not acknowledge a large body of literature on kernel-based (symmetric and non symmetric) collocation methods. Finally, all reviewers have serious concerns about the experiments.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers raised numerous concerns about the novelty and effectiveness of the proposed method. There were some concerns about how the experiments were performed, and I do not feel they have been adequately addressed in the rebuttal.\"}" ] }
86HwTRg0qh
OneFit: Unified Neural Garment Simulation using Function-based Representation and Learning
[ "Ruochen Chen", "Dinh-Vinh-Thuy Tran", "Shaifali Parashar" ]
Digital garment modeling using self-supervised learning has significantly evolved in terms of the speed and visual quality of garment deformation simulations. Recent advances have incorporated size-awareness, which allows garments to be draped realistically, by stretching only to avoid collisions with the human body. This allows their deployment into virtual try-on systems where the goal is to observe garment fitting. However, a major shortcoming is that they learn mesh-specific models, which requires a distinct model to be trained for each mesh representation of a given garment. In this paper, we introduce a novel self-supervised garment simulation approach to learn garment deformations using only functions. First, our PolyFit module converts the garment mesh patches into functions, which allows a compact yet detail-preserving representation. Then, OneFit learns the deformations of these patches by restricting the space of the PolyFit function transformations conditioned on different body poses, in a physics-guided and an intrinsic geometry-aware manner. It not only extends to various mesh representations of a given garment but also to diverse representations of a garment type. Hence, a model trained on a single garment can generalise across several garment types. Thanks to its compact representation, it is computationally superior to its counterparts in terms of both training and inference, and scales well to unseen garments. Thus, by training OneFit on a set of garments, a mesh-agnostic, garment-agnostic deformation model can be learnt, which can be finetuned (or postprocessed) to accommodate unseen garment types.
[ "Garment draping", "Unsupervised learning", "Neural simulation", "Virtual try-on" ]
https://openreview.net/pdf?id=86HwTRg0qh
https://openreview.net/forum?id=86HwTRg0qh
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yeA9Ja3j3g", "w0zLd5SvCa", "sbqW7OXaxU", "gqzP6OmylO", "HroEPBwTjY" ], "note_type": [ "official_review", "official_review", "official_review", "comment", "official_review" ], "note_created": [ 1730684224409, 1730889168400, 1730502584863, 1732615630988, 1730841391905 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11766/Reviewer_C5Ld" ], [ "ICLR.cc/2025/Conference/Submission11766/Reviewer_t1Ug" ], [ "ICLR.cc/2025/Conference/Submission11766/Reviewer_z7Xo" ], [ "ICLR.cc/2025/Conference/Submission11766/Authors" ], [ "ICLR.cc/2025/Conference/Submission11766/Reviewer_ScBe" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes to use deformable patches to simulate various garments and adopt MLP and Transformers as the model architecture. Experiments show lower errors on different garments.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"The design is agnostic to the topology of garments.\", \"The training and inference speed is faster.\"], \"weaknesses\": \"1. Several variables in the equations and the metric in experiments are unclear. What does the $k$ mean in Equation 2,3,4,5? What does the $\\\\epsilon$ mean in Table 1? The explanations are hard to find. Moreover, the meaning is physics loss in Equation 9 is invalid. In SNUG and HOOD, the physics loss is equivalent to solving an optimisation problem defined by the potential energy and external forces, where the potential energy indicates the internal forces in physics. However, Equation 9 itself is incomplete and violates the optimisation problem, indicating invalid use of loss term. Finally, the proposed method seems unable to deal with different materials of garments.\\n2. The comparisons with baselines are insufficient and less convincing. \\n 1. Only limited qualitative results are displayed in Figure 7, which only includes one pose of the human. 
Since this is about garment animation, at least a few images indicating the dynamics should be provided. Though there are several animations in the supplementary video, the baselines are not compared simultaneously with the proposed method, i.e. each time only one garment sequence is compared with one baseline instead of comparing one garment with all baselines. \\n 2. The results related to HOOD are questionable. Firstly, the training time for HOOD mentioned around L502 is 10 hours, while in the original paper the time is 26 hours on a single GPU. On the other hand, the 8 hours achieved by the proposed method is measured on 4 GPUs, making the comparison with HOOD unfair. How long will it take to train HOOD on 4 GPUs? Secondly, around L363, the author claims that HOOD needs post-processing to remove the collisions. However, the collision artifacts are well solved through the collision loss in the original paper. Even for results on unseen cases, HOOD shows fewer collisions without post-processing, but the proposed method needs extra post-processing on unseen garments to remove the artifacts, suggesting limited generalisation abilities and robustness. \\n 3. No quantitative results comparing with baselines are provided. The only quantitative results in Table 4 do not include baselines mentioned in this paper. Based on the weaknesses mentioned above, the results are less convincing and insufficient to verify the effectiveness of the proposed method.\\n3. As mentioned at L513, the author claims that this is the first method to use patches for simulations. However, LayersNet [1] also adopts patches to learn garments with a Transformer-based model, which is also agnostic to garment topologies and very close to the settings in this paper. The major difference is that LayersNet did not use a physics loss during training. The author should discuss or even compare with LayersNet using a similar training loss to verify the effectiveness of the proposed method.
The deformations in the supplementary materials tend to be rigid, especially the end of the dress with fixed wrinkles.\\n\\nIn summary, this paper could be better polished and is not ready yet.\\n\\n[1]. Yidi Shao, et al. Towards Multi-Layered 3D Garments Animation, ICCV2023.\", \"questions\": \"Please refer to the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper claims to present a novel self-supervised draping approach that overcomes the limitation of both mesh-specific and garment-specific learning. It further claims that a model trained on a single garment is able to handle a wide range of inter-class and\\nintra-class garment variations. \\n\\nHowever, this paper misses some critical works that already do what it claims, and misses comparisons with those methods.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"Please see the weakness section.\", \"weaknesses\": [\"### The two major notable claims of this paper are:\", \"Claim 1. Train on one garment and it will generalize to garments of other types\", \"Claim 2. Handle various mesh resolutions\", \"**Both the above claims are incorrect, and the solutions for these were also proposed by the papers below**\", \"_Claim 1 & 2_: GarSim WACV 2023 proposed a unified garment simulator that can *simultaneously learn* the deformation of *multiple types of garments (e.g., tops, skirts etc.) of varying topologies and fabrics* conditioned on the underlying body shapes, pose and motion. It is also shown that *GarSim can be trained and tested on arbitrary-resolution garment meshes*. 
GarSim is based on a graph neural network; hence the statements in lines 115-120 are also incorrect.\", \"_Claim 1 & 2_: A similar follow-up work, GenSim CVPR 2023, also proposed a generic unsupervised 3D garment simulator that can be *trained simultaneously for multiple types of garments of varying sizes, and topology, and bodies of different shapes, poses and resolutions*. It generalizes to unseen garments of different types and sizes, along with different body shapes and poses.\", \"Additionally, both GarSim and GenSim predict fabric-aware garment deformation, which is one of the limitations of OneFit as mentioned in its limitation section. I encourage the authors to look at Table 1 of both the GenSim and GarSim papers and pitch their claims accordingly.\", \"### The video results shown in the supplementary YouTube video are not very encouraging and have flaws.\", \"In general, all garments are body-hugging and short, which is not sufficient to validate the claims of the paper. The one large skirt animation is also not realistic: the garment appears stiff as the body moves, as we can see during jumping and twisting. Better results for similar garments have been shown by HOOD, SNUG, and GarSim in their supplementaries.\", \"At timestamp 0:36, the result without post-processing has heavy collisions with the body. This indicates the collision loss is not working properly. It may be that it works to some extent for body-hugging garments, but from the results it appears to fail drastically.\", \"From timestamp 1:06 onwards, the movement of the garments is very stiff and unrealistic; a loose garment like a skirt should not strictly follow the underlying body movement, but should flow freely under the movement of the body and gravity.\", \"The authors have missed two important papers to discuss and consider in this research. 
These works have proposed solutions to the problem setting posed by the OneFit paper; the OneFit paper is therefore incomplete and has very limited novelty. Also, the video results are not very encouraging, as they show very unrealistic animations and work acceptably only for body-hugging garments.\", \"Because of the above reasons, I am giving my rating, and encourage the authors to re-write the OneFit paper considering the solutions proposed by GarSim and GenSim, perhaps mentioning their limitations and how OneFit overcomes them. A detailed comparison with them is a must if the claims are similar to what they have made in their papers.\"], \"questions\": \"Please see the weakness section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes a framework called OneFit for learning garment deformation. Unlike traditional methods that require training a separate model for each garment type, OneFit is trained on a few types of garments and can generalize to previously unseen garment meshes within a single model.\\n\\nThe framework operates by dividing the garment mesh into patches and approximating each patch using a Taylor series expansion. These patches are then processed by a multi-layer perceptron (MLP) to obtain garment embeddings. Simultaneously, body movement and shape are represented through dynamic and static descriptors to form a body embedding. The garment and body embeddings are concatenated to predict garment deformation in response to body movements. To ensure realistic and physically plausible deformations, the training process incorporates both geometric and physics-based loss functions.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"A key innovation in this work is the use of patches to represent garment meshes. 
By segmenting the mesh into patches using Voronoi diagrams, the authors introduce a structured approach that allows for more flexible and detailed modeling of garment surfaces.\", \"Each patch is approximated by a polynomial function using a fourth-order Taylor expansion, which enables the model to capture intricate geometric information, including curvature and fine wrinkle details. This level of approximation is a noteworthy contribution as it allows for more realistic garment deformations and adds fidelity to the representation of the garment.\"], \"weaknesses\": [\"Given that the primary novelty of the paper lies in representing each patch with a polynomial function using a Taylor expansion, it would be valuable to see a comparison of different polynomial orders. Such an analysis could provide insights into how varying the order affects the level of detail captured, especially in complex areas with fine geometric details like wrinkles and curvatures.\", \"The paper does not thoroughly explore the impact of patch resolution on model performance. Since smaller patches may simplify the polynomial approximation, an analysis of the number and size of patches would be beneficial. This would clarify how patch resolution influences model accuracy and computational efficiency.\", \"The approach to positional encoding remains unclear, especially given the use of multiple garment types for training. Details on how positional encodings are assigned for each garment would be helpful to assess if they are consistent across garments or if garment-specific encodings are used.\"], \"questions\": [\"How does the choice of polynomial order impact the quality of garment deformation details? It would be helpful to understand whether different polynomial orders were tested and how each affects the model's ability to capture detailed garment features.\", \"How was the patch resolution determined, and were different resolutions tested? 
Since the size and number of patches may influence both model complexity and accuracy, insight into this decision process would be valuable. Did the authors conduct any experiments to determine an optimal balance between patch resolution and model performance?\", \"Could you clarify the positional encoding strategy? Specifically, is positional encoding applied globally across all garment types, or is it localized per garment type with unique indices for each? Understanding this would help assess the model's ability to generalize across diverse garment types.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper presents a self-supervised garment simulation approach that bypasses mesh-specific limitations by using a function-based representation. The PolyFit module converts garment mesh patches into a compact form that preserves geometric detail, allowing OneFit to generalize across different garment meshes and styles. By conditioning localized patches on body poses, OneFit achieves mesh-agnostic and garment-adaptive deformations.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The approach of using patchwise function for representation in this paper is interesting as it enables the model to handle garment deformations in a flexible, mesh-agnostic manner, and there are some experiments showing its benefit of being faster and generalizing better. The presentation of this concept is clear, with the authors providing structured explanations and visual diagrams that effectively convey how the PolyFit module supports this localized, function-based learning approach. 
Experiments are generally well presented with details elaborated.\", \"weaknesses\": \"A primary concern with the paper is that the physics-based dynamics appear inaccurate, performing worse than the state of the art. Specifically, the model produces garment sequences that look \\\"tight fitting\\\" and too closely follow the motion of the character, which is a common issue in methods relying on linear blend skinning. This limitation is evident in the supplementary video, where the tops and dress exhibit movement that seems rigidly attached to the character\\u2019s motion, lacking the expected flow and lag typical of realistic fabric behavior. Additionally, the paper lacks quantitative comparisons to ground truth sequences, which could illustrate these issues more clearly, particularly for looser garments like dresses. Furthermore, the absence of experimental results comparing the geometric and physics losses with those of other methods and ground truth data limits the assessment of the method's physical accuracy and overall performance.\", \"questions\": \"1. Handling of Dynamic Behaviors: Could the authors clarify how the method is designed to handle the dynamic behaviors of garments, especially for loose-fitting items like dresses? The results in the supplementary video suggest that the garments move too closely with the character\\u2019s motion, giving a \\\"tight-fitting\\\" appearance. Could the authors provide more details on how they might improve or adjust the model to better capture the expected fabric lag and flow?\\n\\n2. Comparisons: First, the paper lacks quantitative and qualitative comparisons to ground truth sequences, particularly for dynamic accuracy. 
Second, could the authors include or discuss quantitative evaluations against ground truth or other state-of-the-art models, especially on metrics like geometric invariances, physical accuracy, and external and self collisions?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
868masI331
HALL-E: Hierarchical Neural Codec Language Model for Minute-Long Zero-Shot Text-to-Speech Synthesis
[ "Yuto Nishimura", "Takumi Hirose", "Masanari Ohi", "Hideki Nakayama", "Nakamasa Inoue" ]
Recently, text-to-speech (TTS) models based on large language models (LLMs) that translate natural language text into sequences of discrete audio tokens have gained great research attention, with advances in neural audio codec (NAC) models using residual vector quantization (RVQ). However, long-form speech synthesis remains a significant challenge due to the high frame rate, which increases the length of audio tokens and makes it difficult for autoregressive language models to generate audio tokens for even a minute of speech. To address this challenge, this paper introduces two novel post-training approaches: 1) Multi-Resolution Requantization (MReQ) and 2) HALL-E. MReQ is a framework to reduce the frame rate of pre-trained NAC models. Specifically, it incorporates a multi-resolution residual vector quantization (MRVQ) module that hierarchically reorganizes discrete audio tokens through teacher-student distillation. HALL-E is an LLM-based TTS model designed to predict hierarchical tokens of MReQ. Specifically, it incorporates the technique of using MRVQ sub-modules and continues training from a pre-trained LLM-based TTS model. Furthermore, to promote TTS research, we create MinutesSpeech, a new benchmark dataset consisting of 40k hours of filtered speech data for training and evaluating speech synthesis ranging from 3s up to 180s. In experiments, we demonstrated the effectiveness of our approaches by applying our post-training framework to VALL-E. We achieved frame rates as low as 8 Hz, enabling stable minute-long speech synthesis in a single inference step. Audio samples, dataset, codes and pre-trained models are available at https://yutonishimura-v2.github.io/HALL-E_DEMO.
[ "Text-to-speech synthesis", "LLM-based TTS", "neural audio codec", "long-form generation" ]
Accept (Poster)
https://openreview.net/pdf?id=868masI331
https://openreview.net/forum?id=868masI331
ICLR.cc/2025/Conference
2025
{ "note_id": [ "z46JqNt17S", "yNn4ldU9aw", "uGS1FWyrjS", "tp8qOtY7ac", "shHsIpb9JG", "qEzdnqZXrd", "oLKrbQdzUj", "lxtIlcPUFk", "hqeBVXUu3y", "ggGHtLSsNc", "fMYhlGs3rY", "eTlJPx7alJ", "XVTm4zTIBR", "WJD78V7Hsu", "UNhv7PBzEs", "P5bEZETz6r", "M0EhOoKlTh", "GOv4LONcQt", "DrxZr1D0wH", "7XKSkkefOJ", "6H9ETJ7egD", "5S4yZYMr6z", "4Ew30uGH7O", "23KWWdhyDo" ], "note_type": [ "official_comment", "comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_review", "comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1732684256246, 1742302995736, 1730677871020, 1732520507913, 1732643177219, 1732515734444, 1732379314120, 1730113207220, 1730359399432, 1732380018092, 1730705685211, 1740106826085, 1732379506153, 1732379939775, 1732378815094, 1734720021931, 1732409122401, 1732379294712, 1737523635702, 1732380102289, 1732444672164, 1732409146428, 1730224629525, 1732393049533 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4372/Reviewer_UZgt" ], [ "~Yuancheng_Wang1" ], [ "ICLR.cc/2025/Conference/Submission4372/Reviewer_1NPx" ], [ "ICLR.cc/2025/Conference/Submission4372/Reviewer_Fwrz" ], [ "ICLR.cc/2025/Conference/Submission4372/Reviewer_1NPx" ], [ "ICLR.cc/2025/Conference/Submission4372/Reviewer_JBx3" ], [ "ICLR.cc/2025/Conference/Submission4372/Authors" ], [ "ICLR.cc/2025/Conference/Submission4372/Reviewer_JBx3" ], [ "ICLR.cc/2025/Conference/Submission4372/Reviewer_UZgt" ], [ "ICLR.cc/2025/Conference/Submission4372/Authors" ], [ "ICLR.cc/2025/Conference/Submission4372/Reviewer_Fwrz" ], [ "~Ewald_Enzinger1" ], [ "ICLR.cc/2025/Conference/Submission4372/Authors" ], [ "ICLR.cc/2025/Conference/Submission4372/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission4372/Authors" ], [ "ICLR.cc/2025/Conference/Submission4372/Area_Chair_C9KC" ], [ "ICLR.cc/2025/Conference/Submission4372/Authors" ], [ "ICLR.cc/2025/Conference/Submission4372/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4372/Authors" ], [ "ICLR.cc/2025/Conference/Submission4372/Authors" ], [ "ICLR.cc/2025/Conference/Submission4372/Authors" ], [ "ICLR.cc/2025/Conference/Submission4372/Reviewer_aZfo" ], [ "ICLR.cc/2025/Conference/Submission4372/Reviewer_aZfo" ] ], "structured_content_str": [ "{\"comment\": \"Thanks to the author for the reply. I keep my original score, reflecting the contribution of this paper.\"}", "{\"title\": \"Repo Github not Found\", \"comment\": \"Dear authors,\\n\\nThanks for your great work! However, it seems that the GitHub repo https://yutonishimura-v2.github.io/HALL-E_DEMO cannot be found. And I would like to ask if you are going to open-source the MinutesSpeech dataset?\\n\\nThanks, Yuancheng\"}", "{\"summary\": \"This paper presents two post-training approaches to resolve the long context length issues for modern transformer-based autoregressive TTS models based on NAC models. MReQ as a framework reduces the frame rate by introducing a novel multi-resolution vector quantization module that allows the user to decompose an existing NAC model into several different codebooks, each of which operates at a different frame rate. This allows them to reduce the bottleneck of the AR component of LLM-based TTS by reducing the frequency of the first component down to 8 Hz. To train this MRVQ module, student-teacher distillation is required. Using this module, this work presents HALL-E, a hierarchical TTS model that generates tokens based on the MReQ tokens. 
This paper also introduces MinutesSpeech, a benchmark dataset for TTS synthesis that is curated for long-form speech.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"The idea is pretty interesting, novel, and well motivated. The MReQ module is fairly complex, but diagrams and descriptions of the technique are well written. The manuscript is very detailed, and training and post-training are well documented. Along with the code release, results look to be easily replicable.\\n\\nThe authors go to great lengths to train and evaluate everything from scratch and provide extensive results for reconstruction and zero-shot TTS, including both automatic and subjective results. Experiments were run on multiple NAC models, showing that this technique can generalize to pre-existing NAC models.\\n\\nThe benchmark dataset is a welcome addition, and the generation and evaluation of the dataset are well documented. Results show RTF improvements, indicating that this technique indeed reduces the bottleneck behind transformer-based TTS.\", \"weaknesses\": [\"One weakness in the study is that the authors do not consider distribution shifts between the dataset used to train the NAC model and the dataset used for post-training. In their experiments, they pretrain their own NAC models (Encodec and SpeechTokenizer) on the MinutesSpeech training set and did post-training with MReQ with the same training data. However, in practice, researchers will not have access to the datasets used to pretrain these NAC models, so there may be some distribution shift. A similar issue occurs with quantization, where you need some training data to calibrate the model. Additionally, while SpeechTokenizer is meant to tokenize speech, Encodec was originally designed to encode speech along with other kinds of audio. It is unclear whether this capability will be preserved after post-training. 
Ablation studies do not cover what happens when you try to use this technique with an off-the-shelf NAC model like SpeechTokenizer or Encodec.\", \"The statement on lines 211/212 about training NAC models with the MRVQ module does not seem to be justified or elaborated on. Is this just saying that training this kind of codec model from scratch rather than from a pretrained model is difficult? It seems like from Table 10, that for the MReQ model, starting this kind of training w/o pretraining seems to not affect the WER or RESQ significantly.\", \"The qualitative results in the ablation study near line 495 are not well explained. The statement that this waveform is unnatural with almost no silence is not clear without a transcription. From a glance, this seems to be a problem with the prosody of the generation, but this is unclear without hearing the samples. Appendix D1 also does not seem to elaborate on this much. I tried to listen to the samples from MinutesSpeech test-90s, and it seems like this is primarily a prosody issue, as the Valle model seems to speak at a very consistent rate with much louder inhalation sounds.\", \"Furthermore, the ablation study for \\u201cCan HALL-E handle long audio prompts?\\u201d seems trivial; it should be expected that longer voice prompts give the model more context on the user\\u2019s speech. It would be more interesting to see if this effect exists or is less pronounced in the VALLE model.\", \"There are also some wording errors in the manuscript. For instance, in the implementation details on page 4, lines 199 to 201, the manuscript says that \\u201cFigure 2a shows the MRVQ module applied to the Encodec model\\u201d; however, Figure 2a actually just shows the normal Encodec model as a teacher model. This should actually be Figure 2b that shows the MRVQ module applied to the Encodec model. 
Furthermore, Line 201 states that \u201cFigure 2b shows the LRVQ block\u201d; however, it is actually Figure 2c that shows the LRVQ block. Also, should the statement in line 485/486: \u201cThe results indicate that both losses and post-training play important roles in the overall performance.\u201d say that both losses and \u201cpre-training\u201d play important roles, since the second row of Table 10 is about the pre-training of the Encodec model?\"], \"questions\": [\"This technique seems well suited for LLM-based TTS solutions, similar to Valle with AR and NAR components. However, can you also say anything about how this kind of encoding might be used as a general-purpose codec for speech in general? It would be interesting if this model could have lower bitrates or higher robustness. Furthermore, does this mean that the PreQ and PostQ components of your LRVQ blocks would be unnecessary, since they are primarily used to train the NAR components of the TTS model?\", \"With a VALLE model based on Encodec, one can remove RVQ code sequences from the NAR generation to get faster generation, in exchange for coarser generation. For this MReQ-based paradigm, how does the audio quality change?\", \"For the second ablation in section 7.4, how does this effect appear with the VALLE model that you trained?\", \"For the third ablation in section 7.4, it is very surprising that even w/o pretraining, it does not seem like this technique suffers significantly (compared to the performance drop in Table 11 for HALL-E). Why is that?\", \"The splits of the dataset in Table 2 and the description of why these splits were created in Section 7.1 seem very arbitrary and over-designed for these experiments. 
Are these training splits deduplicated from each other?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I appreciate the authors\\u2019 efforts in addressing my questions and concerns to some extent. The clarification regarding the NAC loss and the pretrained TTS model is helpful. While I still find the modeling approach to be quite complex, even compared to certain modules in audiocraft/models/encodec.py, I appreciate that the authors have provided the implementation of this work including the encodec file. Furthermore, there remains some uncertainty about how the proposed model outperforms recent state-of-the-art NAR methods or how the NAR TTS model, MaskGCT, might perform if trained on a minutes-long TTS dataset. However, the newly conducted experiment still offers meaningful insights, demonstrating that autoregressive models, when combined with the proposed method, can generate coherent long-form speech. I thank the authors for their efforts to respond, and I remain confident in my assessment.\"}", "{\"comment\": \"Thank you, authors, for your reply. These questions were solely for clarification purposes and for the improvement of your manuscript. I maintain my original score, which already reflects the contribution of this work.\"}", "{\"comment\": \"Thank you for addressing my concerns and providing detailed clarifications. I particularly appreciate the additional analyses you conducted on SIM for larger audio prompts, which I found to be highly insightful and valuable.\"}", "{\"title\": \"Reply to Reviewer 1NPx (Part 2)\", \"comment\": \"**Question 5:** The splits of the dataset in Table 2 and the description of why these splits were created in Section 7.1 seems very arbitrary and over designed for these experiments. 
Are these training splits deduplicated from each other?\\n\\n**Response:** The training sets share the same transcription and audio data. The difference between these sets lies in the audio lengths when segmenting long original audio files.\\n\\nOur initial design focused on only two durations: 90 seconds and 180 seconds. However, we decided to add 28 seconds and 54 seconds for a fair comparison with VALL-E.\\nAs shown in Table 2, the train-28s set was created to have the same token length as train-90s in the \\\"Sum\\\" column.\\nThis aligns the input length of the AR language models for HALL-E and VALL-E.\\nSimilarly, train-54s was created to align with train-180s.\\n\\n**Comment 1:** In practice, researchers will not have access to the datasets used to pretrain these NAC models, so there may be some distribution shift.\\n\\n**Response:** Investigating distribution shift using models pretrained on inaccessible data sounds interesting; however, it also limits experimental opportunities. For instance, when comparing NAC architectures (e.g., Encodec and SpeechTokenizer) in our framework, it would be necessary to use the same data for pretraining to ensure a fair comparison. Without access to the pretraining data, it is also challenging to discuss the extent of a distribution shift. We leave dataset curation for non-speech audio data as future work.\\n\\n**Comment 2:** It seems like from Table 10, that for the MReQ model, starting this kind of training w/o pretraining seems to not affect the WER or RESQ significantly.\\n\\n**Response:** As discussed in Question 4, this is because the MRVQ module involves high frame rate layers.\\n\\n**Comment 3:** The qualitative results in the ablation study near line 495 are not well explained. The statement that this waveform is unnatural with almost no silence is not clear without a transcription.\\n\\n**Response:** Our intent behind the sentence was to convey that the AR model of VALL-E struggles with duration prediction. 
Following the reviewer's suggestion, we manually annotated the segments for each word to show the transcription in Figure 6 (Section 7.4) and in Figure 8 (Appendix D.1). As shown in the first sentence highlighted in red, the duration of the proper noun \\\"Cam Ross\\\" differs significantly across the models. In the speech generated by HALL-E, this phrase is pronounced more slowly and clearly, closely resembling the ground truth.\\n\\n**Comment 4.** Furthermore, the ablation study for \\\"Can HALL-E handle long audio prompts?\\\" seems trivial, it should be expected that longer voice prompts gives the model more context on the user's speech. It would be more interesting to see if this effect exists or is less pronounced in the VALLE model.\\n\\n**Response:** As discussed in Question 3, we have added results for VALL-E. The effect exists for both HALL-E and VALL-E. With a prompt length of 60 seconds, HALL-E outperformed VALL-E. Since the reduced frame rate achieved by MReQ enables efficient generation even with long prompts, the result demonstrates that HALL-E is capable of achieving high-fidelity synthesis, highlighting its practical value.\\n\\nWe have also corrected the wording errors pointed out by the reviewer. Thank you very much for your careful review and great advice, which have improved the quality of our paper.\"}", "{\"summary\": \"The paper introduces a novel text-to-speech (TTS) model designed to generate extended speech sequences. To address the inherent limitations of autoregressive language models in handling long outputs, the authors propose the use of *multi-resolution audio codes* (MReQ). Furthermore, the paper presents HALL-E, an autoregressive language model specifically developed to generate these MReQ audio codes efficiently. In addition to the model, the authors construct a new benchmark dataset comprising 40,000 hours of curated speech data. 
The experimental results demonstrate that the proposed approach can achieve a low frame rate of 8 Hz, highlighting its potential for generating high-quality long-form speech with reduced computational overhead.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This paper introduces the Multi-Resolution Requantization (MReQ) model, designed to enhance signal processing across multiple granularities.\\n2. It presents the development of HALL-E, an autoregressive language model capable of generating extended speech waveforms with high fidelity. \\n3. Furthermore, the study proposes a novel benchmark dataset, MinutesSpeech, specifically curated to evaluate the performance of models on long-duration speech sequences.\", \"weaknesses\": \"1. The concept of time-varying audio codes is not a new development. For reference, please see the work by [Variable-rate Discrete Representation Learning (2021)] [https://arxiv.org/pdf/2103.06089].\\n\\n2. A notable issue lies in maintaining temporal coherence and acoustic fidelity, as evidenced by the declining performance observed in the SIM score.\\n\\n3. The model continues to face difficulties in handling spontaneous speech, highlighting an ongoing challenge in this area.\", \"questions\": \"--\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"To deal with the long-form speech synthesis problem, this paper proposes a multi-resolution re-quantization method which hierarchically reorganizes discrete tokens with teacher-student distillation. Based on the hierarchical codec codes, an LLM-based TTS model, HALL-E, is proposed for speech synthesis. Besides these, a new benchmark dataset, MinutesSpeech, is introduced for minute-long speech synthesis.\\n\\nThis paper is well written and easy to follow.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. 
This paper has three contributions: 1) A multi-resolution requantization method, MReQ, is proposed to generate hierarchical codec codes. 2) A hierarchical LLM-based TTS model is proposed to predict MReQ codec codes. 3) A minute-long speech synthesis dataset is introduced.\\n2. This paper is well written and easy to follow.\", \"weaknesses\": \"1. Some important codec models, such as DAC and Vocos, are not compared against.\\n2. MReQ is only tested on speech reconstruction, without being evaluated on audio datasets.\\n3. Some of the latest zero-shot TTS models, such as Voicebox, E2 and VALL-E 2, are not compared against.\", \"questions\": \"1. Have you tried to train MReQ with a much smaller sampling rate, such as 2 or 4 Hz for the first layer?\\n2. What is the accuracy of the sub-encoder E to predict b_{k+1} based on a_{k+1}? Will this accuracy affect the final performance a lot?\\n3. For L_{total}, are the weights of the three sub-losses equal and all set to 1.0? Have you tried other settings?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply to Reviewer aZfo (Part 2)\", \"comment\": \"**Question 5:** Why is DNSMOS used? DNSMOS is specifically designed and trained to assess the cleanness of speech rather than its naturalness. I strongly suggest replacing it with a naturalness MOS metric, such as UTMOS.\\n\\n**Response:** We used DNSMOS following previous studies (e.g., [3, 4]), but we agree with the reviewer that UTMOS is more appropriate. We have added results of UTMOS to Tables 3, 4, 5, 6, 9 and 10. Below is a copy of Table 4. The results show that HALL-E exhibits significantly higher UTMOS scores than VALL-E, indicating its strong performance in terms of naturalness. Thank you very much for your great advice.\\n\\n**Table 4.** Zero-shot TTS performance on MinutesSpeech test sets. 
Best results are marked in bold.\\n\\n| TTS model | NAC model | Training | WER$\\\\downarrow$ | SIM$\\\\uparrow$ | WD$\\\\downarrow$ | DNSMOS$\\\\uparrow$ | UTMOS$\\\\uparrow$ | QMOS$\\\\uparrow$ | SMOS$\\\\uparrow$ |\\n|------------------|----------------|----------------|---------------|------------|------------|-----------|----------------------|---------------------|---------------------|\\n| **test-90s** | | | | | | | | | | |\\n| **GT** | -- | -- | 10.30 | -- | -- | 3.79 $\\\\pm$ 0.24 | 3.28$\\\\pm$ 0.51 | 3.83 $\\\\pm$ 0.30 | -- |\\n| **VALL-E** | Encodec | train-28s | 39.77 | **0.726** | 23.62 | 3.84 $\\\\pm$ 0.19 | 3.61 $\\\\pm$ 0.48 | 2.29 $\\\\pm$ 0.32 | 2.04 $\\\\pm$ 0.25 |\\n| **VALL-E\\u2020** | Encodec | train-90s | 16.14 | 0.712 | **2.68** | 3.87 $\\\\pm$ 0.17 | 3.58 $\\\\pm$ 0.60 | 2.48 $\\\\pm$ 0.28 | 2.36 $\\\\pm$ 0.26 |\\n| **HALL-E** | MReQ-Encodec | train-90s | **9.79** | 0.685 | 4.00 | **3.91 $\\\\pm$ 0.21** | **3.74 $\\\\pm$ 0.37** | **3.35 $\\\\pm$ 0.26** | **3.15 $\\\\pm$ 0.26** |\\n| **test-180s** | | | | | | | | | | |\\n| **GT** | -- | -- | 10.20 | -- | -- | 3.78 $\\\\pm$ 0.25 | 3.25 $\\\\pm$ 0.50 | 4.26 $\\\\pm$ 0.22 | -- |\\n| **VALL-E** | Encodec | train-54s | 36.52 | **0.706** | 25.66 | 3.55 $\\\\pm$ 0.53 | 3.15 $\\\\pm$ 0.97 | 1.68 $\\\\pm$ 0.25 | 1.70 $\\\\pm$ 0.26 |\\n| **VALL-E\\u2020** | Encodec | train-180s | 21.71 | 0.702 | 12.52 | 3.76 $\\\\pm$ 0.30 | 3.55 $\\\\pm$ 0.62 | 2.11 $\\\\pm$ 0.26 | 2.15 $\\\\pm$ 0.26 |\\n| **HALL-E** | MReQ-Encodec | train-180s | **10.53** | 0.660 | **5.79** | **3.91 $\\\\pm$ 0.19** | **3.74 $\\\\pm$ 0.37** | **3.38 $\\\\pm$ 0.25** | **3.31 $\\\\pm$ 0.23** |\\n\\n[3] X. Wang, et al., SpeechX: Neural Codec Language Model as a Versatile Speech Transformer, IEEE TASLP, vol. 32, pp. 1-14, 2024.\\n\\n[4] Sanyuan Chen, Shujie Liu, Long Zhou, Yanqing Liu, Xu Tan, Jinyu Li, Sheng Zhao, Yao Qian, and\\nFuru Wei. VALL-E 2: Neural codec language models are human parity zero-shot text to speech\\nsynthesizers. 
arXiv preprint arXiv:2406.05370, 2024.\"}", "{\"summary\": \"The paper presents HALL-E, a neural codec language model aimed at addressing challenges in minute-long zero-shot text-to-speech (TTS) synthesis. It introduces Multi-Resolution Requantization (MReQ) to reduce frame rates in neural audio codec (NAC) models, and proposes HALL-E, which leverages MReQ for efficient and effective long-form speech synthesis. The work also introduces MinutesSpeech, a new speech dataset for long-context conversational TTS. The experimental results are extensive, showing that the proposed method can outperform baseline models both quantitatively and qualitatively.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"3\", \"strengths\": [\"The paper offers thorough quantitative and qualitative results, demonstrating that the proposed generative modeling outperforms baseline methods in long-form speech synthesis.\", \"The introduction of MinutesSpeech is a noteworthy addition to TTS research, providing high-quality, long-context speech data. The illustrated curation and preprocessing techniques further enhance its utility.\", \"The ablation studies on MRVQ provide insights into the necessity of each component, justifying their inclusion in the architecture.\"], \"weaknesses\": [\"Although the MReQ module achieves low-resolution quantization while maintaining sample quality, its design is excessively complicated, posing challenges for future research to build upon. The LRVQ design incorporates multiple RVQ modules\\u2014pre-quantizer, main quantizer, and post-quantizer\\u2014alongside the additional HSR loss, making it significantly more complex than prior RVQ methods. Furthermore, the use of multiple RVQ quantizations within the NAR transformer of HALL-E adds another layer of complexity, reducing the accessibility of the proposed approach.\", \"The paper only compares the proposed method with VALL-E and does not include recent zero-shot TTS baselines. 
Considering that non-autoregressive modeling can be advantageous for long speech generation, incorporating comparisons with state-of-the-art non-autoregressive zero-shot TTS models would better demonstrate the effectiveness and relevance of the proposed method.\"], \"questions\": [\"The description of the LRVQ modules appears to omit the use of commitment loss for residual vector quantization. If the LRVQ module does not require commitment loss, could the authors explain the reasoning behind this and how the training is stabilized without it?\", \"Providing a detailed description of the pre-trained LLM-based TTS model that HALL-E is post-trained on would greatly aid in understanding the training process and setup of the proposed model.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Issues running tts_demo.py script\", \"comment\": \"Dear authors,\\n\\nI very much enjoyed reading your paper! Thank you for releasing the source code as supplementary material.\\nI tried to run the tts_demo.py script, but ran into some issues:\\n\\nIt appears the SpeechGenSolver class implementation is missing from audiocraft/solvers/valle_ar.py:\\n```\\n(halle) ~/HALL-E/scripts $ python tts_demo.py\\nTraceback (most recent call last):\\n File \\\"/home/user/HALL-E/scripts/tts_demo.py\\\", line 6, in <module>\\n from audiocraft.models import (\\n File \\\"/home/user/HALL-E/audiocraft/__init__.py\\\", line 24, in <module>\\n from . 
import data, models, modules\\n File \\\"/home/user/HALL-E/audiocraft/models/__init__.py\\\", line 15, in <module>\\n from .halle import Halle\\n File \\\"/home/user/HALL-E/audiocraft/models/halle.py\\\", line 20, in <module>\\n from ..solvers.valle_ar import SpeechGenSolver\\n File \\\"/home/user/HALL-E/audiocraft/solvers/__init__.py\\\", line 21, in <module>\\n from .halle_ar import HalleARSolver\\n File \\\"/home/user/HALL-E/audiocraft/solvers/halle_ar.py\\\", line 35, in <module>\\n from .valle_ar import SpeechGenSolver\\nImportError: cannot import name 'load_hier_lm_model' from 'audiocraft.models.loaders' (/home/user/HALL-E/audiocraft/models/loaders.py)\\n```\\nAre you planning to release an updated source code repository?\\n\\nThanks,\\nEwald\"}", "{\"comment\": \"Thank you very much for your positive feedback and constructive comments.\\nBelow, we respond to each comment.\\n\\n**Comment 1:** The concept of time-varying audio codes is not a new development. For reference, please see the work by [Variable-rate Discrete Representation Learning (2021)] [https://arxiv.org/pdf/2103.06089].\\n\\n**Response:** We have added the suggested paper to the reference list and updated Section 2 accordingly.\\n\\n**Comment 2:** A notable issue lies in maintaining temporal coherence and acoustic fidelity, as evidenced by the declining performance observed in the SIM score.\\n\\n**Response:** We conducted an additional experiment to demonstrate that providing longer audio prompts can mitigate the issue and significantly improve the SIM score. As shown in Table 8, with a prompt length of 60 seconds, HALL-E slightly outperformed VALL-E. Since the reduced frame rate achieved by MReQ enables efficient generation, even with long prompts of 60 seconds, this result demonstrates that HALL-E is capable of achieving high-fidelity synthesis, highlighting its practical value.\\n\\n**Table 8. 
SIM as a function of audio prompt length.**\\n\\n| Prompt length | VALL-E | HALL-E |\\n|---------------|---------------|---------------|\\n| 3s (default) | 0.727 | 0.685 |\\n| 10s | 0.801 | 0.750 |\\n| 20s | 0.830 | 0.809 |\\n| 40s | 0.851 | 0.846 |\\n| 60s | 0.856 | 0.859 |\\n\\n**Comment 3:** The model continues to face difficulties in handling spontaneous speech, highlighting an ongoing challenge in this area.\\n\\n**Response:** We agree with the reviewer that handling spontaneous speech remains a significant challenge. Improving prosody modeling within the AR models is essential for achieving greater naturalness in synthesized spontaneous speech. We believe that low frame rate tokens will be key to addressing this challenge, and that the MinutesSpeech dataset and HALL-E serve as valuable resources to facilitate research in this direction.\"}", "{\"title\": \"Reply to Reviewer aZfo (Part 1)\", \"comment\": \"Thank you very much for your insightful review and positive feedback. Below, we respond to each question and comment.\\n\\n**Question 1:** Why and how can PESQ be used to filter out noisy data in the data curation pipeline?\\n\\n**Response:** Following the previous study [1], PESQ scores are estimated using TorchAudio-Squim [2], which provides a reference-less method for predicting speech quality metrics. This is the reason why PESQ can be used in the data curation pipeline. We have clarified this in the revised paper.\\n\\nThe score distribution for the MinutesSpeech training dataset is shown in Figure 7 (Appendix B.3), with an average score of 3.10.\\nCompared to LibriSpeech, which has an average score of 3.56, MinutesSpeech has lower-quality audio data.\\nHowever, audio reconstructed by the Encodec and SpeechTokenizer models trained on MinutesSpeech achieved scores of 3.45 and 3.52, respectively. This indicates that the dataset possesses sufficient quality for TTS purposes.\\n\\n[1] A. 
Vyas, et al., Audiobox: Unified Audio Generation with Natural Language Prompts, arXiv:2312.15821, 2023.\\n\\n[2] A. Kumar, et al., TorchAudio-Squim: Reference-less Speech Quality and Intelligibility measures in TorchAudio, ICASSP, 2023.\\nhttps://pytorch.org/audio/main/tutorials/squim_tutorial.html\\n\\n**Question 2:** Why is the PESQ score for the ground truth so low in Table 3?\\n\\n**Response:**\\nThe score for the ground truth (3.56) is low because we utilized the estimation method to make it comparable with the scores in the training dataset, as discussed above.\\nPlease see our response to Question 6 for the updated Table 3 with UTMOS (suggested criterion), PESQ and STOI scores obtained by reference audio.\\n\\n**Question 3:** Was the zero-shot TTS experiment conducted in the \\\"continual\\\" setting or the \\\"cross-utterance\\\" setting?\\n\\n**Response:** The experiment was conducted in the continual setting. We have clarified this in Section 7.\\n\\n**Question 4:** Did the authors exclude the audio prompt portion of the synthesized audio when computing the speaker SIM?\\n\\n**Response:** Yes, we excluded the audio prompt when computing speaker SIM. This process was implemented as follows:\\n\\n```\\nwith torch.no_grad():\\n emb1 = model(ref_wav[..., prompt_length:])\\n emb2 = model(gen_wav[..., prompt_length:])\\n```\\n\\nThe table below compares speaker SIM values calculated with and without the audio prompt for HALL-E. Note that the increasing trend was observed with both VALL-E and HALL-E.\\n\\n| Prompt length | VALL-E | HALL-E | HALL-E (GT) |\\n|---------------|---------------|---------------|---------------|\\n| 3s (default) | 0.727 | 0.685 | 0.692 |\\n| 10s | 0.801 | 0.750 | 0.778 |\\n| 20s | 0.830 | 0.809 | 0.861 |\\n| 40s | 0.851 | 0.846 | 0.930 |\\n| 60s | 0.856 | 0.859 | 0.962 |\"}", "{\"comment\": \"We appreciate your positive feedback and detailed comments. 
Below, we respond to each question and comment.\\n\\n**Question 1:** The description of the LRVQ modules appears to omit the use of commitment loss for residual vector quantization.\\n\\n**Response:** The commitment loss was not omitted and is included in the loss term $L_{NAC}$. For clarification, we have added the detailed loss definition to the appendix A.1.\\n\\n**Question 2:** Providing a detailed description of the pre-trained LLM-based TTS model.\\n\\n**Response:** Pre-training was performed without incorporating the sub-modules of MRVQ. Thus, the pre-trained model is identical to VALL-E. We have clarified this in Section 7.1.\\n\\n**Comment 1:** Although the MReQ module achieves low-resolution quantization while maintaining sample quality, its design is excessively complicated, posing challenges for future research to build upon.\\n\\n**Response:** While the proposed architecture for achieving high zero-shot TTS performance may appear complicated, the core idea to introduce a recursive architecture to RVQ is simple. In particular, our implementation of MReQ in audiocraft/models/encoder.py within the provided code is not complicated, and is expected to facilitate future research and development.\\n\\n**Comment 2:** Comparisons with state-of-the-art non-autoregressive zero-shot TTS models would better demonstrate the effectiveness and relevance of the proposed method.\\n\\n**Response:** \\nWe conducted evaluation experiments using MaskGCT [1] (a state-of-the-art NAR TTS) with the pre-trained model provided by the authors. \\n\\nTable 24 shows that the performance of MaskGCT significantly decreases as the length of synthesized speech increases.\\nThis indicates that even with NAR TTS, generating long speech in a single inference procedure is challenging.\\nIn the 30-second test setting in Table 25, the WER of MaskGCT is lower than VALL-E and HALL-E. 
However, its UTMOS is worse compared to HALL-E.\\nIn the 90-second test setting in Table 26, only HALL-E achieves a WER lower than 10.0%, \\nwith the highest UTMOS.\\nFor generating spontaneous speech, producing more natural durations becomes crucial.\\nAR models excel in this aspect compared to NAR models, which likely contributes to the higher UTMOS scores observed.\\nWe have added these results in Appendix E.\\n\\nThe comparison suggested by the reviewer provided a clearer demonstration of the proposed method's effectiveness and relevance. We greatly appreciate the reviewer's insightful suggestion.\\n\\n[1] Y. Wang, et al., MaskGCT: Zero-Shot Text-to-Speech with Masked Generative Codec Transformer, arXiv:2409.00750.\\n\\n**Table 24. MaskGCT performance by test duration**\\n\\n| Method | Duration | WER | SIM | DNSMOS | UTMOS |\\n|-----------|---------------|-------|-------|--------------|------------------|\\n| MaskGCT | 30 sec. | 7.74 | 0.763 | 3.95 $\\\\pm$ 0.23 | 3.42 $\\\\pm$ 0.46 |\\n| MaskGCT | 45 sec. | 10.75 | 0.763 | 3.94 $\\\\pm$ 0.22 | 3.39 $\\\\pm$ 0.49 |\\n| MaskGCT | 60 sec. | 23.08 | 0.757 | 3.92 $\\\\pm$ 0.23 | 3.30 $\\\\pm$ 0.54 |\\n| MaskGCT | 90 sec. | 51.12 | 0.691 | 3.57 $\\\\pm$ 0.51 | 2.57 $\\\\pm$ 1.02 |\\n\\n**Table 25. Performance comparison in 30-second test**\\n\\n| Method | Duration | WER | SIM | DNSMOS | UTMOS |\\n|---------------|---------------|-------|-------|--------------|------------------|\\n| GT | 30 sec. | 10.65 | - | 3.74 $\\\\pm$ 0.29 | 3.20 $\\\\pm$ 0.51 |\\n| VALL-E train-28s | 30 sec. | 11.84 | 0.731 | 3.84 $\\\\pm$ 0.19 | 3.61 $\\\\pm$ 0.41 |\\n| VALL-E train-90s | 30 sec. | 14.83 | 0.695 | 3.82 $\\\\pm$ 0.20 | 3.46 $\\\\pm$ 0.50 |\\n| HALL-E train-90s | 30 sec. | 10.88 | 0.685 | 3.87 $\\\\pm$ 0.21 | 3.61 $\\\\pm$ 0.38 |\\n| MaskGCT | 30 sec. | 7.74 | 0.763 | 3.95 $\\\\pm$ 0.23 | 3.42 $\\\\pm$ 0.46 |\\n\\n**Table 26. 
Performance comparison in 90-second test**\\n\\n| Method | Duration | WER | SIM | DNSMOS | UTMOS |\\n|---------------|---------------|-------|-------|--------------|------------------|\\n| GT | 90 sec. | 10.30 | - | 3.79 $\\\\pm$ 0.24 | 3.28 $\\\\pm$ 0.51 |\\n| VALL-E train-28s | 90 sec. | 39.77 | 0.726 | 3.84 $\\\\pm$ 0.19 | 3.61 $\\\\pm$ 0.48 |\\n| VALL-E train-90s | 90 sec. | 16.14 | 0.712 | 3.87 $\\\\pm$ 0.17 | 3.58 $\\\\pm$ 0.60 |\\n| HALL-E train-90s | 90 sec. | 9.79 | 0.685 | 3.91 $\\\\pm$ 0.21 | 3.74 $\\\\pm$ 0.37 |\\n| MaskGCT | 90 sec. | 51.12 | 0.691 | 3.57 $\\\\pm$ 0.51 | 2.57 $\\\\pm$ 1.02 |\"}", "{\"metareview\": \"In this work, the authors aimed to resolve the challenge of long-form TTS. The high frame rate results in long audio token sequences, which makes it difficult for autoregressive language models to generate tokens. Representative works such as VALL-E encounter challenges in generating high-quality audio for extended sequences. Consequently, this research is significant for advancing speech generation beyond 10+ seconds. There are two main contributions of this paper: 1) Multi-Resolution Requantization (MReQ), which is used to reduce the frame rate of the neural codec; 2) HALL-E, which is an LLM-based TTS model to predict the hierarchical tokens of MReQ. Furthermore, the authors have developed a new dataset comprising 40,000 hours of training data, which will support the community in researching long-form speech generation.\\n\\nThe work is sufficiently novel to address a very meaningful problem of speech generation. The experiments are well-executed with convincing results. During the rebuttal, the authors actively answered reviewers\\u2019 questions with clarified details.
Therefore, the work is worth publication in ICLR.\\n\\nIn summary, the strength of this paper is that it effectively addresses the challenging problem of synthesizing speech longer than 10 seconds by utilizing the proposed MReQ to reduce the frame rate and HALL-E to model the hierarchical tokens of MReQ. However, a noted weakness is the lack of comparison with recent advanced zero-shot TTS models such as VALL-E 2, which has shown significant improvement over the VALL-E used in the paper, despite the inclusion of a comparison with MaskGCT during the rebuttal.\", \"additional_comments_on_reviewer_discussion\": \"The initial scores of this paper were 5, 6, 6, 6, and 6. The final scores for this paper are 6, 6, 6, 6, and 8. The authors did a very good job during the rebuttal of addressing the reviewers\\u2019 concerns.\\nSpecifically, Reviewer aZfo originally raised many concerns about the experimental setup, such as the seemingly unreasonable usage of PESQ and the rationale for using DNSMOS. The authors addressed all of those concerns item by item. As a result, the reviewer raised the score to 8. \\nIn response to the review comments requesting comparison with SOTA models, the authors added experiments with MaskGCT.\"}", "{\"title\": \"Reply to Reviewer UZgt (Part 1)\", \"comment\": \"Thank you very much for your positive feedback and constructive comments.\\nBelow, we respond to each question and comment.\\n\\n**Question 1:** Have you tried to train MReQ with a much smaller sampling rate, such as 2 or 4 Hz for the first layer?\\n\\n**Response:** Yes, Table 18 in the Appendix shows the results. Reducing the sampling rate below 8 Hz remains challenging. Since the audio token length is comparable to the text token length at a sampling rate of 8 Hz, further reductions would require new architectures and training algorithms. We leave further reduction of the token length for both audio and text streams for future work.\\n\\n**Question 2:** What is the accuracy of the sub-encoder E to predict $b_{k+1}$ based on $a_{k+1}$?
Will this accuracy affect the final performance a lot?\\n\\n**Response:** The sub-encoder is frozen during the training of HALL-E. Because the role of this sub-module is to aggregate predicted $a_{k+1,1}, \\\\cdots, a_{k+1,\\\\alpha_{k+1}}$ into $b_{k+1}$, accuracy cannot be computed for this sub-module.\\n\\n**Question 3:** For the $L_{\\\\mathrm{total}}$, are the weights of the three sub losses equal and all set to 1.0 ? Have you tried other settings?\\n\\n**Response:** Appendix A.1 provides a more detailed description of $L_{\\\\mathrm{total}}$, and Tables 11 and 12 list the weights for each loss function. The weights shown in Table 11 are from the official implementation of Encodec and have not been modified. Table 12 presents the weights for the FLD loss and HSR loss in our proposed method. \\nThese weights were selected after considering equal weighting and other weight values, choosing those that resulted in the greatest reduction in training loss.\\n\\n**Comment 1:** Some important codec models are not compared with, such as DAC and Vocos.\\n\\n**Response:** In preliminary experiments, we confirmed that SpeechTokenizer outperforms these conventional codec models. While training with various codec models could be interesting, it is not expected to improve the performance.\\n\\n**Comment 2:** MReQ is only tested on the speech reconstruction, without evaluated with audio dataset.\\n\\n**Response:**\\nOur method relies on the fact that speech retains semantic information primarily in relatively low-frequency components. \\nBy incorporating a hierarchical structure, MReQ achieved high reconstruction performance by preserving semantic information in low frame rate tokens while accurately retaining acoustic information in high frame rate tokens. \\nTherefore, extending our method to general audio is non-trivial, and verifying this would require different large-scale downstream tasks, which is beyond the scope of this work. 
\\nAs the reviewer has pointed out, achieving a lower frame rate for general audio, not limited to speech, is an important direction for future research.\"}", "{\"title\": \"Reply to Reviewer 1NPx (Part 1)\", \"comment\": \"Thank you very much for your insightful review and positive feedback. Below, we respond to each question and comment.\\n\\n**Question 1.1:** This technique seems well suited for LLM-based TTS solutions, similar to VALL-E with AR and NAR components. However, can you also say anything about how this kind of encoding might be used as a general purpose codec for speech in general?\\n\\n**Response:** The proposed approach could be advantageous for generative speech tasks, such as speech-to-speech translation beyond TTS, due to its ability to model long-term context using low sampling rate tokens.\\nTo reduce the bitrate, it would be preferable to improve the teacher model rather than rely on the distillation framework, as shown in Table 19.\\n\\n**Question 1.2:** Furthermore, does this mean that the PreQ and PostQ components of your LRVQ blocks would be unnecessary since they are primarily used to train the NAR components of the TTS model?\\n\\n**Response:** It depends on the task and downstream architecture. For discriminative tasks, including automatic speech recognition, if learning a discriminator over the features of the first quantization layer is sufficient, the PreQ and PostQ components may no longer be needed.\\nFor generative architectures that do not rely on NAR components, such as Fisher and Moshi using the AR+AR combination, components like PreQ and PostQ may be helpful, but this requires further investigation.\\n\\n**Question 2:** With a VALLE model based on Encodec, one can remove RVQ code sequences from the NAR generation to get faster generation, in exchange for coarser generation.
For this MReQ based paradigm, how does the audio quality here change?\\n\\n**Response:** If our understanding is correct, the reviewer's intent is that using only the output of the AR model and omitting the NAR model can result in faster generation, albeit with coarser quality. \\n\\nThe table below shows the results when audio is generated using only the AR model, omitting the NAR model in each method. \\nAs expected, the overall performance deteriorates; however, trends in metrics such as WER remain similar to those observed in Table 4, indicating that our method (HALL-E) continues to demonstrate effectiveness. \\nNevertheless, as illustrated in Figure 5, the NAR model does not demand significant computational resources, so it is advisable to employ it to improve quality.\\n\\n| Method | WER | SIM | DNSMOS | UTMOS |\\n|---------------|-------|-------|--------------|------------------|\\n| VALL-E train-28s | 42.62 | 0.616 | 2.878 $\\\\pm$ 0.143 | 1.449 $\\\\pm$ 0.182 |\\n| VALL-E train-90s | 19.34 | 0.624 | 2.860 $\\\\pm$ 0.118 | 1.434 $\\\\pm$ 0.173 |\\n| HALL-E train-90s | 16.23 | 0.616 | 3.003 $\\\\pm$ 0.194 | 1.343 $\\\\pm$ 0.112 |\\n\\n\\n\\n**Question 3:** For the second ablation in section 7.4, how does this effect appear with the VALLE model that you trained?\\n\\n**Response:** In response to the feedback, we have added results for VALL-E (trained on MinutesSpeech-90s) to Table 8, where the audio prompt length was varied from 3 seconds to 60 seconds. The SIM scores are recalculated over audio samples with a minimum utterance length of 65 seconds.\\n\\nAs shown below, a similar effect was observed with VALL-E. However, the gap between HALL-E and VALL-E narrowed as the prompt length increased. Finally, HALL-E slightly outperformed VALL-E at a prompt length of 60 seconds, demonstrating that highly accurate cloning is achievable with HALL-E.\\n\\n**Table 8. 
SIM as a function of audio prompt length.**\\n\\n| Prompt length | VALL-E | HALL-E |\\n|---------------|---------------|---------------|\\n| 3s (default) | 0.727 | 0.685 |\\n| 10s | 0.801 | 0.750 |\\n| 20s | 0.830 | 0.809 |\\n| 40s | 0.851 | 0.846 |\\n| 60s | 0.856 | 0.859 |\\n\\n**Question 4:** For the third ablation in section 7.4, it is very surprising that even w/o pretraining, it does not seem like this technique suffers significantly (compared to the performance drop in Table 11 for HALL-E). Why is that?\\n\\n**Response:** This is because the MRVQ module involves three 48 Hz quantization layers. It is known that with Encodec, reducing the number of RVQ layers to 6 or 3 degrades performance but does not cause collapsed speech. Therefore, the performance drop was not significant compared to that of HALL-E.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Reply to Reviewer aZfo (Part 3)\", \"comment\": \"**Question 6:** Please present the speaker similarity and MOS scores (UTMOS?) for Table 3 and Table 10 (Table 9 in the revision).\\n\\n**Response:** Following the reviewer's suggestion, we have added speaker similarity (SIM) and UTMOS to the tables. Additionally, we analyzed the PESQ and STOI scores using reference audio in Table 3.\\nAs shown below, MReQ maintains performance in terms of WER, PESQ$^\\\\dagger$, STOI, SIM and UTMOS. These results confirmed that MReQ effectively reduces the sampling rate while maintaining naturalness and clarity, which are essential for text-to-speech applications.\\n\\n\\n**Table 3. 
Speech reconstruction on LibriSpeech.** \\n\\n| NAC (frame rate, Hz) | WER$\\\\downarrow$ | PESQ$\\\\dagger$$\\\\uparrow$ | PESQ$\\\\uparrow$ | STOI$\\\\uparrow$ | SIM$\\\\uparrow$ | UTMOS$\\\\uparrow$ |\\n|------------------------------|------------------|---------------------------|------------------------|-------------------|------------------|----------------------------|\\n| GT | 1.96 | 3.56 | 4.64 | 1.00 | 1.00 | 4.05 $\\\\pm$ 0.33 |\\n| Encodec (8) | 100.0 | 1.25 | 1.22 $\\\\pm$ 0.08 | 0.32 $\\\\pm$ 0.05 | 0.52 | 1.33 $\\\\pm$ 0.00 |\\n| Encodec (24) | 2.01 | 3.27 | 3.68 $\\\\pm$ 0.17 | 0.94 $\\\\pm$ 0.03 | 0.90 | 3.82 $\\\\pm$ 0.37 |\\n| Encodec (48) | 2.00 | 3.45 | 4.02 $\\\\pm$ 0.13 | 0.96 $\\\\pm$ 0.02 | 0.94 | 3.86 $\\\\pm$ 0.36 |\\n| Encodec (48) + MReQ | 2.02 | 3.47 | 3.89 $\\\\pm$ 0.15 | 0.95 $\\\\pm$ 0.02 | 0.92 | 3.89 $\\\\pm$ 0.36 |\\n| SpeechTokenizer (48) | 2.00 | 3.52 | 4.10 $\\\\pm$ 0.11 | 0.96 $\\\\pm$ 0.02 | 0.95 | 3.97 $\\\\pm$ 0.33 |\\n| SpeechTokenizer + MReQ (8,16,24,48) | 1.99 | 3.53 | 3.96 $\\\\pm$ 0.14 | 0.95 $\\\\pm$ 0.02 | 0.93 | 4.01 $\\\\pm$ 0.32 |\\n\\n$\\\\dagger$ indicates PESQ (Kumar et al., 2023a).\\n\\n\\n**Table 9. 
Ablation study for MReQ.** \\n\\n| Model | WER$\\\\downarrow$ | PESQ$\\\\dagger$ $\\\\uparrow$ | PESQ$\\\\uparrow$ | UTMOS$\\\\uparrow$ |\\n|------------------|-----------------|----------------|--------------|-----------------------|\\n| Proposed | 2.02 | 3.47 | 3.89 $\\\\pm$ 0.15 | 3.89 $\\\\pm$ 0.36 |\\n| w/o FLD loss | 2.03 | 3.47 | 3.89 $\\\\pm$ 0.15 | 3.87 $\\\\pm$ 0.36 |\\n| w/o HSR loss | 2.15 | 3.44 | 3.79 $\\\\pm$ 0.17 | 3.86 $\\\\pm$ 0.37 |\\n| w/o pre-training | 2.08 | 3.44 | 3.83 $\\\\pm$ 0.17 | 3.85 $\\\\pm$ 0.37 |\\n\\n\\n**Question 7:** Please provide a breakdown of the AR and NAR model RTFs for Table 7.\\n\\n**Response:** We have replaced Table 7 with Figure 5 showing a breakdown, which confirms that the AR model is the primary computational bottleneck.\\n\\nWe also have fixed errors and added the above-mentioned papers to the reference list. Thank you very much again for your careful review and valuable advice, which have improved the quality of our paper. We hope our responses have addressed your concerns.\"}", "{\"comment\": \"Thank you very much for your quick reply and positive feedback.\\n> I believe estimated PESQ score (PESQ with \\\\dagger) is no longer necessary in Table 3 and Table 9. Please remove the column unless there\\u2019s a certain reason to keep them.\\n\\n**Response:** We have removed the column. Thank you for your advice.\\n\\n> What is \\u201cHALL-E\\u201d and \\u201cHALL-E (GT)\\u201d in your explanation? Which is the version with and without the audio prompt?\\n\\n**Response:** HALL-E (GT) is the version with audio prompt.\\n\\n> Why the number in the new table is different from the number in the prior version of the paper? 
For example, HALL-E score for 3s prompt was 0.663 in the original paper, but the result on the new table is 0.685 or 0.692, both does not match the number in the original paper.\\n\\n**Response:** To report results with a prompt length of 60 seconds, the SIM scores were recalculated over a subset of audio samples with a minimum utterance length of 65 seconds (previously, this value was 25 seconds).\"}", "{\"title\": \"Reply to Reviewer UZgt (Part 2)\", \"comment\": \"**Comment 3:** Some latest zero-shot TTS models are not compared with, such as Voicebox, E2 and VALL-E 2.\\n\\n**Response:**\\nWe conducted evaluation experiments using MaskGCT [1] (a state-of-the-art NAR TTS) with the pre-trained model provided by the authors. Table 24 shows that the performance of MaskGCT significantly decreases as the length of synthesized speech increases.\\nThis indicates that even with NAR TTS, generating long speech in a single inference procedure is challenging.\\nIn the 30-second test setting in Table 25, the WER of MaskGCT is lower than VALL-E and HALL-E. However, its UTMOS is worse compared to HALL-E.\\nIn the 90-second test setting in Table 26, only HALL-E achieves a WER lower than 10.0%, \\nwith the highest UTMOS.\\nFor generating spontaneous speech, producing more natural durations becomes crucial.\\nAR models excel in this aspect compared to NAR models, which likely contributes to the higher UTMOS scores observed.\\nWe have added these results in Appendix E.\\n\\nThe comparison suggested by the reviewer provided a clearer demonstration of the proposed method's effectiveness and relevance. We greatly appreciate the reviewer's insightful suggestion.\\n\\n[1] Y. Wang, et al., MaskGCT: Zero-Shot Text-to-Speech with Masked Generative Codec Transformer, arXiv:2409.00750.\\n\\n**Table 24. 
MaskGCT performance by test duration**\\n\\n| Method | Duration | WER | SIM | DNSMOS | UTMOS |\\n|-----------|---------------|-------|-------|--------------|------------------|\\n| MaskGCT | 30 sec. | 7.74 | 0.763 | 3.95 $\\\\pm$ 0.23 | 3.42 $\\\\pm$ 0.46 |\\n| MaskGCT | 45 sec. | 10.75 | 0.763 | 3.94 $\\\\pm$ 0.22 | 3.39 $\\\\pm$ 0.49 |\\n| MaskGCT | 60 sec. | 23.08 | 0.757 | 3.92 $\\\\pm$ 0.23 | 3.30 $\\\\pm$ 0.54 |\\n| MaskGCT | 90 sec. | 51.12 | 0.691 | 3.57 $\\\\pm$ 0.51 | 2.57 $\\\\pm$ 1.02 |\\n\\n**Table 25. Performance comparison in 30-second test**\\n\\n| Method | Duration | WER | SIM | DNSMOS | UTMOS |\\n|---------------|---------------|-------|-------|--------------|------------------|\\n| GT | 30 sec. | 10.65 | - | 3.74 $\\\\pm$ 0.29 | 3.20 $\\\\pm$ 0.51 |\\n| VALL-E train-28s | 30 sec. | 11.84 | 0.731 | 3.84 $\\\\pm$ 0.19 | 3.61 $\\\\pm$ 0.41 |\\n| VALL-E train-90s | 30 sec. | 14.83 | 0.695 | 3.82 $\\\\pm$ 0.20 | 3.46 $\\\\pm$ 0.50 |\\n| HALL-E train-90s | 30 sec. | 10.88 | 0.685 | 3.87 $\\\\pm$ 0.21 | 3.61 $\\\\pm$ 0.38 |\\n| MaskGCT | 30 sec. | 7.74 | 0.763 | 3.95 $\\\\pm$ 0.23 | 3.42 $\\\\pm$ 0.46 |\\n\\n**Table 26. Performance comparison in 90-second test**\\n\\n| Method | Duration | WER | SIM | DNSMOS | UTMOS |\\n|---------------|---------------|-------|-------|--------------|------------------|\\n| GT | 90 sec. | 10.30 | - | 3.79 $\\\\pm$ 0.24 | 3.28 $\\\\pm$ 0.51 |\\n| VALL-E train-28s | 90 sec. | 39.77 | 0.726 | 3.84 $\\\\pm$ 0.19 | 3.61 $\\\\pm$ 0.48 |\\n| VALL-E train-90s | 90 sec. | 16.14 | 0.712 | 3.87 $\\\\pm$ 0.17 | 3.58 $\\\\pm$ 0.60 |\\n| HALL-E train-90s | 90 sec. | 9.79 | 0.685 | 3.91 $\\\\pm$ 0.21 | 3.74 $\\\\pm$ 0.37 |\\n| MaskGCT | 90 sec. 
| 51.12 | 0.691 | 3.57 $\\\\pm$ 0.51 | 2.57 $\\\\pm$ 1.02 |\"}", "{\"summary\": \"The paper presents three contributions: (1) Multi-Resolution Requantization (MReQ), a framework that post-trains a conventional RVQ codec into one with a different token rate in each layer; (2) HALL-E, an extension of the VALL-E framework to utilize an MReQ-based codec; and (3) the introduction of MinutesSpeech, a new benchmark for minute-long speech synthesis. Leveraging the MReQ concept, the authors successfully train a codec whose first layer operates at only 8Hz. With this low-token rate modeling, the HALL-E model generates minutes-long speech with significantly better WER and RTF compared to the original VALL-E model.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"The three contributions listed above are valid and well-received. The topic of minute-long speech synthesis is also one that will attract a wide audience.\", \"weaknesses\": [\"While the paper is generally well-written, there are several points that seem unclear to me. It is difficult to fully accept their proposal unless these points are clarified. I am open to reconsidering the score later.\", \"Unreasonable usage of PESQ raises doubts about their experiments.\", \"Why and how can PESQ be used to filter out noisy data in the data curation pipeline? PESQ requires reference audio to compute the score, and I cannot understand how it could be applied for data curation purposes.\", \"Why is the PESQ score for the ground truth so low in Table 3? The PESQ score for the ground truth should be 4.64.\", \"Questionable inference and evaluation settings\", \"Zero-shot TTS experiment was conducted by \\u201ccontinual\\u201d setting or \\u201ccross-utterance\\u201d setting? From the description, it\\u2019s unclear.\", \"Assuming that a \\\"continual\\\" setting is used, did the authors exclude the audio prompt portion of the synthesized audio when computing the speaker SIM? 
This is the approach taken in the VALL-E paper and other related works. The increase in speaker SIM in Table 9 with respect to the prompt length is unusually large, raising concerns that the authors may not have excluded the prompt from the speaker similarity computation.\", \"Why is DNSMOS used? DNSMOS is specifically designed and trained to assess the cleanness of speech rather than its naturalness. I strongly suggest replacing it with a naturalness MOS metric, such as UTMOS.\", \"Other minor request to assess the quality of the model\", \"Please present the speaker similarity and MOS scores (UTMOS?) for Table 3 and Table 10.\", \"Please provide a breakdown of the AR and NAR model RTFs for Table 7.\", \"Additionally, I found several minor issues in the description. They are not critical, but they need to be revised:\", \"Equation 2 is likely incorrect. I believe it should be x_l = x_{l-1} - \\\\tilde{z_l}\", \"Similarly, Equation 9 is likely incorrect.\", \"In Table 8, the upsampling ratio should be explicitly described instead of simply listing \\\"HALL-E\\\" in the last row.\"], \"questions\": \"Please address the questions raised in the weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to the authors' rebuttal\", \"comment\": [\"Thank you for the clarifications and revision. I think the authors addressed the major concerns that I had, and I raised the soundness score and overall rating of the paper. I still have a few clarification questions. I\\u2019d appreciate it if the authors could answer them.\", \"QA2: I believe estimated PESQ score (PESQ with \\\\dagger) is no longer necessary in Table 3 and Table 9. Please remove the column unless there\\u2019s a certain reason to keep them.\", \"QA4: I have a few clarification questions.\", \"What is \\u201cHALL-E\\u201d and \\u201cHALL-E (GT)\\u201d in your explanation? 
Which is the version with and without the audio prompt?\", \"Why the number in the new table is different from the number in the prior version of the paper? For example, HALL-E score for 3s prompt was 0.663 in the original paper, but the result on the new table is 0.685 or 0.692, both does not match the number in the original paper.\"]}" ] }
85X9awoVtv
Auditing Data Controller Compliance with Data Withdrawal
[ "Muhammad H. Ashiq", "Hung Yun Tseng", "Grigorios Chrysos" ]
We study auditing total data withdrawal, the case in which a user requests the exclusion of their data from both the training and test data for some machine learning task. This approach is motivated by the need for comprehensive compliance with data privacy regulations and legal frameworks around the world. We conceptualize the task of auditing total data withdrawal as an optimization problem. Compliance verification is conducted under mild assumptions using a dedicated verification algorithm. We then evaluate this formulation over various datasets, architectures, and verification hyperparameters. Our verification algorithm serves as a tool for regulators to ensure auditable compliance and provides enhanced privacy guarantees for users.
[ "transparency", "auditing", "data privacy", "right-to-object", "verification" ]
https://openreview.net/pdf?id=85X9awoVtv
https://openreview.net/forum?id=85X9awoVtv
ICLR.cc/2025/Conference
2025
{ "note_id": [ "nwntSPJ36M", "na9LQcbYAR", "inwksj1Ame", "b0nuUCaYsg", "YiDq6gzCdZ", "0bP64IHAOr" ], "note_type": [ "official_review", "comment", "official_review", "official_comment", "official_review", "official_review" ], "note_created": [ 1730637991317, 1731462461907, 1730801903673, 1731462287308, 1730123124442, 1730753351470 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2695/Reviewer_r14D" ], [ "ICLR.cc/2025/Conference/Submission2695/Authors" ], [ "ICLR.cc/2025/Conference/Submission2695/Reviewer_SEbt" ], [ "ICLR.cc/2025/Conference/Submission2695/Authors" ], [ "ICLR.cc/2025/Conference/Submission2695/Reviewer_vwsT" ], [ "ICLR.cc/2025/Conference/Submission2695/Reviewer_5nKQ" ] ], "structured_content_str": [ "{\"summary\": \"This paper addresses the concept of total data withdrawal, where a user requests that their data be removed entirely from a machine learning system, affecting both data at training and test time. The authors develop a formal framework to audit data controllers\\u2019 compliance with these withdrawal requests, ensuring that user data is neither used to train the model nor make predictions at test time. 
To validate compliance, the authors propose a regulatory check in which the data controller must misclassify the user\\u2019s instance and all perturbed variations of it within a specified range, preventing potential circumventions through adversarial perturbations.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"The idea of verifying compliance through misclassification of instances within a closed ball is somewhat innovative and aims to enforce more stringent guarantees against data misuse.\", \"The work introduces a structured approach to the issue of data withdrawal from both training and test data, which could benefit the evolving discussion around data privacy and user rights in machine learning.\"], \"weaknesses\": [\"**Relevance of Total Data Deletion**: First, the core problem formulation - total data deletion - raises practical concerns. During the deployment phase, predictions are typically made on new data instances provided voluntarily by users, making it unclear why users would request data deletion yet still expect service predictions. If a user continues to provide data, making predictions would likely be legal, limiting the relevance of total data withdrawal to the training phase alone. Second, effective techniques for verifying data deletion from the training set already exist, such as membership inference frameworks that achieve reliable results in real-world settings (e.g., [1]). This casts doubt on the necessity of the proposed approach.\", \"**Infeasibility of Methodology**: The proposed verification method appears impractical, as it requires adversarial perturbation across all training data points to identify potential instances of non-compliance.
This would be computationally prohibitive, particularly for large datasets, and could significantly degrade the model\\u2019s performance due to the extensive perturbations required for each training instance.\", \"**Unrealistic Assumptions**: Some assumptions in this work are prohibitively strict, notably the requirement of zero training error. In many models, especially large language models (LLMs), achieving zero training error is impractical due to their immense size and complexity. If this assumption is unmet, the authors' proof seems to fail, limiting the framework\\u2019s applicability to a small subset of ML models that can achieve such high accuracy on the training data.\", \"**Suggestions for improvement**\", \"To address the issue of practical relevance, the authors should consider focusing exclusively on the training phase in the context of data deletion. This approach could help streamline the method and align it with established deletion verification practices.\", \"The feasibility of the proposed approach could be improved by considering alternative compliance verification techniques that don\\u2019t rely on adversarial perturbation of all training data points.\", \"Re-evaluating the assumptions, especially zero training error, would make the approach more applicable to modern machine learning models and increase its potential impact.\", \"----\", \"**References**\", \"[1] Gaussian Membership Inference Privacy, https://proceedings.neurips.cc/paper_files/paper/2023/hash/e9df36b21ff4ee211a8b71ee8b7e9f57-Abstract-Conference.html\"], \"questions\": [\"Could the authors clarify their motivation for considering both training and test time deletion? Are there specific use cases or regulatory requirements they had in mind that necessitate verification of deletion at test time as well?\", \"Could the authors provide complexity analysis or empirical runtime measurements as well as performance evaluation for their method on different dataset sizes?
This would allow a more concrete understanding of scalability limitations.\", \"Could the authors discuss how their method might be adapted or relaxed to handle models that don't achieve zero training error?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"The paper focuses on auditing the removal of a certain user's data from both the training and test sets of a model (which they call total data removal). This involves proving unlearning (when it happens) and falsifying it (when unlearning does not happen). The proposed framework relies on delivering provable guarantees that all perturbed examples around the current one are incorrectly classified, and is supported by experimental results.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper identifies (and attempts to formalize) a very important problem. Verifying unlearning has already been considered but provably refuting compliance with data deletion requests is just as important, as this paper points out.\", \"weaknesses\": [\"**Setting not fully justified**: I find the requirement that the model misclassify all instances in a ball around the original instance to be counterintuitive.\", \"If a model generalizes well to an unseen data point, this is still considered a violation as per the proposed framework. This does not map to practical use cases in my head; machine learning is fundamentally based on generalization. How can you justify this seemingly extreme requirement?\", \"In particular, this seems to be the reason that the examples that need to be deleted are highly out of distribution (see line 325, for instance).
It is still useful to study methods that can only verify the removal of out-of-distribution examples, but this has to be stated upfront.\", \"Further, it seems to me that this definition of data removal can be gamed. It is very easy to misclassify an example by simply changing its label. This does not mean the data has been removed from training. Membership inference attacks might still be able to detect the presence of the example. How can you justify this serious drawback?\", \"**Mathematical rigor**: The paper talks about establishing a necessary and sufficient condition. There is also what looks like a proof sketch on page 5. However, there are no mathematical statements (theorems, lemmas, etc.) that clearly state the assumptions and the conclusion. Moreover, there is no fully rigorous proof either (only a proof sketch).\", \"**Unclear algorithm/complexity**: The paper proposes to use CROWN BaB, which is not described. The entire algorithm looks like a black box for this reason. It is helpful to give a high level idea for those unfamiliar with this algorithm (as it is not a textbook algorithm). Further, what are the time and space complexity of this method? Also, how to choose the parameter $\\\\delta$?\", \"**Minor comments:**\", \"Figure 1 is confusing. What is it supposed to convey?\", \"Eq. 1: is it argmax instead of max?\", \"Experiments: baselines are not explained clearly. What are the baselines trying to achieve? Why are they reasonable to use here?\"], \"questions\": [\"Some failures are reported in line 362: what are the failure modes and why do they happen? Also, don't failures defeat the purpose of auditing?\", \"The paper describes auditing the deletion of test data to be highly important. 
How do you actually verify this?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"3\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear reviewers,\\n\\nThank you for your thoughtful comments. We have decided to withdraw the paper at this time; we will incorporate your feedback and resubmit at another venue.\"}", "{\"summary\": \"This paper seeks to audit if a particular data point was entirely unlearnt from a model -- essentially verifying that the model owner carried out the GDPR \\\"right to be forgotten\\\" requirement. It frames the problem as that of verifying that the resulting classifier (after unlearning) misclassifies the point x to be unlearnt as well as its entire $\\\\delta$ neighborhood. It then uses an existing robustness verification algorithm due to Wang et al 2021 to do the verification, and provides some nice experimental evidence.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper studies an important problem -- verifying and auditing if a model complied with GDPR is indeed a major technical challenge in trustworthy ML that we are barely scratching the surface of. More work on this topic should be encouraged.\"], \"weaknesses\": \"The major weakness of the paper is the framing of the problem. There are three assumptions made -- there are 3 or more labels, training data points are at most distance epsilon from each other, and the model $f_m$ obtained after removal has training loss going to zero (basically that it correctly classifies all training points).\\n\\nI think Assumptions 1 and 2 are reasonable; in fact a version of Assumption 2 was empirically shown for a number of common datasets by [1]. Assumption 3 seems a little less reasonable to me, especially for the larger models of today where one can barely make a few passes over the training data. 
\\n\\nIn any case, the conclusion drawn from all three of these is that any model $f_m$ that is capable of verification MUST misclassify all points in a ball around the removed point. This is technically true -- if the points are not misclassified, then one cannot statistically verify removal.\\n\\nHowever, this is not a sensible requirement of any model. Suppose the data point x is very much an \\\"in-distribution\\\" point -- an ordinary looking zero in MNIST -- then it is unreasonable to ensure that removing it misclassifies all zeros in the vicinity. In fact, it's quite likely that if x never even existed, and if we never trained the model with x, the model would have classified x correctly. If this is the case, then it should classify x correctly when it is removed as well -- otherwise we are hurting generalization properties of the model. \\n\\nAnother key word in the conclusion drawn about verification is \\\"statistically\\\" -- it is true that one cannot verify statistically, but it might still be possible to verify removal cryptographically without resorting to such strong assumptions. For example, there is a growing body of work on cryptographical verification of properties of models using tools such as one-way functions and zero-knowledge. I would encourage the authors to check out some of that body of work. [2, 3, 4, 5]\\n\\n[1] Yang, Yao-Yuan, et al. \\\"A closer look at accuracy vs. robustness.\\\" Advances in neural information processing systems 33 (2020): 8588-8601.\\n\\n[2] https://arxiv.org/abs/2210.09126\\n\\n[3] Garg, Sanjam, Shafi Goldwasser, and Prashant Nalini Vasudevan. \\\"Formalizing data deletion in the context of the right to be forgotten.\\\" Annual International Conference on the Theory and Applications of Cryptographic Techniques. Cham: Springer International Publishing, 2020.\\n\\n[4] Garg, Sanjam, et al. 
\\\"Experimenting with zero-knowledge proofs of training.\\\" Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security. 2023.\\n\\n[5] Yadav, Chhavi, et al. \\\"FairProof: Confidential and Certifiable Fairness for Neural Networks.\\\" arXiv preprint arXiv:2402.12572 (2024).\", \"questions\": \"See Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper focuses on the problem of machine unlearning verification. The author defines a set assumption and showcases that under that set of assumptions their approach can be effective.\\nThey also compare their approach to one of previous works and show they can outperform the existing approach.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"Very important problem and paper does a good job of motivating the problem.\", \"weaknesses\": \"In section 3 the authors claim that an approach is verifiable if and only if the model misclassified the points that are selected to be withdrawn. This can be incorrect given a different model. Let's assume a SVM that is trained and the model trainer will release every step of the training with zero knowledge proof of inputs and outputs or some other trust mechanism approach (trusted hardware, \\u2026). Now if you remove a point which is not around the decision boundaries it won't change anything about the function so it won't misclassify those points as results you are claiming this data is not auditable withdrawn, however, this is not true and an auditor can easily verify this. This makes most of the framing in this work not applicable in most applications.\\n\\n\\n\\nMoreover the authors also mention that a point has to be verifiably withdrawn from the training data if the point should be wrong on all points around a point. 
Again this can be very problematic: as we know, adversarial examples exist, and if the ball around the point is not very small, given a model we can adversarially construct a point such that it can be classified correctly; also, actually verifying this in practice can be very hard, since to ensure such a point does not exist we would have to do an exhaustive search. \\n\\nSimilar to the previous point, we can also construct adversarial scenarios where even if a model answers wrong on a point and all surrounding points it does not mean it is verifiable and not used in the training. Just as an example, assume I train a model on many points and when I want to remove a point I just add an if statement that returns a wrong label for any point near those selected points; the paper here considers this verifiably withdrawn, however, this is falsified. In general the goal of designing an approach for auditing data removal is to make sure the approach does not have high false positives and cannot be easily bypassed, which is not the case for this approach. \\n\\n\\nThe 3rd assumption in Section 3 is \\\"The training loss of f_m goes to 0\\\", and I am not sure what this exactly means. Does it go to zero in the limit, or is it zero when released? \\n\\n\\nThe experiments in this work are very unrealistic. Significantly modifying the points will be easily detectable; however, this is not going to happen in practical scenarios. If the authors want, they can evaluate many face recognition datasets as suggested by themselves.\", \"questions\": \"Mentioned above\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
85WHuB5CUK
STOP! A Out-of-Distribution Processor with Robust Spatiotemporal Interaction
[ "Jiaming Ma", "Pengkun Wang", "Binwu Wang", "Zhengyang Zhou", "Xu Wang", "Yudong Zhang", "Du Qian", "Yang Wang" ]
Recently, spatiotemporal graph convolutional networks have attained significant success in spatiotemporal prediction tasks. However, they encounter out-of-distribution (OOD) challenges due to the sensitivity of node-to-node messaging mechanism to spatiotemporal shifts, leading to suboptimal generalization in unknown environments. To tackle these issues, we introduce the **S**patio-**T**emporal **O**OD **P**rocessor (STOP), which leverages spatiotemporal MLP channel mixing as its backbone, separately incorporating temporal and spatial elements for prediction. To bolster resilience against spatiotemporal shifts, STOP integrates robust interaction including a centralized messaging mechanism and a graph perturbation mechanism. Specifically, centralized messaging mechanism configures Context Aware Units (ConAU) to capture generalizable context features, constraining nodes to interact solely with ConAU for spatiotemporal feature interaction. The graph perturbation mechanism uses Generalized Perturbation Units (GenPU) to disrupt this interaction process, generating diverse training environments that compel the model to extract invariant context features from these settings. Finally, we customized a spatiotemporal distributionally robust optimization (DRO) to enhance generalization by exposing the model to challenging environments. Through evaluations on six datasets, STOP showcases competitive generalization and inductive learning. The code is available at https://anonymous.4open.science/r/ICLR2025-STOP.
[ "Spatiotemporal learning; out-of-distribution learning; spatiotemporal prediction" ]
Reject
https://openreview.net/pdf?id=85WHuB5CUK
https://openreview.net/forum?id=85WHuB5CUK
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yiCqtiluzl", "xfUP3yQPXB", "vfVcN4x5PX", "vM3BdhxPBx", "kGOHrcp9dP", "ji73FN9MJB", "geR2lVuSI3", "dc2GPLVmsL", "cUVh0I9uLd", "aokqYxggst", "aFrVGbJvYT", "Zw9Nd6aRph", "ZM3FSDEwxi", "XZIWXsfnpF", "VuuHENsGMw", "T0hGvPocow", "RwoTjgSvEG", "QfmbVP6Wl6", "PImXQxXfJv", "N572RUqTJB", "LdYTa38PQN", "J5rGgocYUo", "GG448V7BzD", "EMDQ7pnZNx", "D1OrDVkqnh", "CGvky2VZVR", "BZVNAdT3lx", "9o7eSgbhcf", "9YhgVybze4", "6sG3YaqCJv", "1lCYSvTjkw", "11kqCy0Y1F" ], "note_type": [ "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732415247965, 1734479506100, 1732165423746, 1732172791704, 1730688162177, 1732166745714, 1732172501815, 1730651669803, 1732254201909, 1732239214896, 1732171162487, 1732168281968, 1732165254631, 1730448324182, 1732166401028, 1732582995545, 1732165998746, 1732166661564, 1732170110677, 1732610443298, 1732166534185, 1729752266206, 1732167850073, 1732172218249, 1732624538902, 1732173553416, 1732173336852, 1737523378446, 1732583216141, 1732610581068, 1732624368517, 1732167430022 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission116/Reviewer_LDYw" ], [ "ICLR.cc/2025/Conference/Submission116/Area_Chair_8L8b" ], [ "ICLR.cc/2025/Conference/Submission116/Authors" ], [ "ICLR.cc/2025/Conference/Submission116/Authors" ], [ "ICLR.cc/2025/Conference/Submission116/Reviewer_BojP" ], [ "ICLR.cc/2025/Conference/Submission116/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission116/Authors" ], [ "ICLR.cc/2025/Conference/Submission116/Reviewer_LDYw" ], [ "ICLR.cc/2025/Conference/Submission116/Authors" ], [ "ICLR.cc/2025/Conference/Submission116/Reviewer_f4as" ], [ "ICLR.cc/2025/Conference/Submission116/Authors" ], [ "ICLR.cc/2025/Conference/Submission116/Authors" ], [ "ICLR.cc/2025/Conference/Submission116/Authors" ], [ "ICLR.cc/2025/Conference/Submission116/Reviewer_f4as" ], [ "ICLR.cc/2025/Conference/Submission116/Authors" ], [ "ICLR.cc/2025/Conference/Submission116/Authors" ], [ "ICLR.cc/2025/Conference/Submission116/Authors" ], [ "ICLR.cc/2025/Conference/Submission116/Authors" ], [ "ICLR.cc/2025/Conference/Submission116/Authors" ], [ "ICLR.cc/2025/Conference/Submission116/Reviewer_4myW" ], [ "ICLR.cc/2025/Conference/Submission116/Authors" ], [ "ICLR.cc/2025/Conference/Submission116/Reviewer_4myW" ], [ "ICLR.cc/2025/Conference/Submission116/Authors" ], [ "ICLR.cc/2025/Conference/Submission116/Authors" ], [ "ICLR.cc/2025/Conference/Submission116/Authors" ], [ "ICLR.cc/2025/Conference/Submission116/Authors" ], [ "ICLR.cc/2025/Conference/Submission116/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission116/Authors" ], [ "ICLR.cc/2025/Conference/Submission116/Reviewer_BojP" ], [ "ICLR.cc/2025/Conference/Submission116/Authors" ], [ "ICLR.cc/2025/Conference/Submission116/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Thanks for the authors' detailed response, which has addressed most of my concerns. I'd like to raise my score.\"}", "{\"metareview\": \"This submission proposes the Spatio-Temporal OOD Processor (STOP) to address spatiotemporal graph convolutional networks' out-of-distribution (OOD) challenges by leveraging spatiotemporal MLP channel mixing to separate spatial and temporal elements for prediction. 
STOP incorporates a centralized messaging mechanism with Context-Aware Units to extract generalizable features and a graph perturbation mechanism with Generalized Perturbation Units to create diverse training environments, enhancing robustness. A customized distributionally robust optimization (DRO) further improves generalization, achieving competitive results on six datasets.\\n\\nExtensive experiments on various datasets are conducted which are acknowledged by most reviewers. Also, the paper's organization particularly for the ablation section clearly conveys all the findings. This submission also focuses on one of the important questions about addressing the spatiotemporal shifts for graph learning. Most weaknesses lie in sufficient in-depth analysis of different proposed components and their separate impact on addressing domain shifts. Most of the proposed components are customized from existing works, which leads to certain concerns about the innovation.\\n\\nIt is really a borderline case. Although we recommend a rejection this time, this submission could be stronger by incorporating all discussions during the rebuttal period.\", \"additional_comments_on_reviewer_discussion\": \"The authors did a great job and lots of effort during the rebuttal period, which addressed the majority of concerns of reviewers in terms of more experiments, more analysis, and more innovation/insight justifications. It raises another concern that the extensive new experiments and analysis in rebuttal seem to take a major portion of the paper revision. In this case, a resubmission with all new content is more encouraged.\"}", "{\"title\": \"W2. 
Robustness analysis of each component.\", \"comment\": \"> **(1) CPUs robustness analysis**\\n\\n**We explained the robustness of the CPUs for OOD shifts in a single paragraph from lines 241 to 249 of the manuscript.** This has been highlighted in red in the revised manuscript, and we have reproduced it below for your convenience:\\n\\n- The proposed C2S messaging mechanism is constrained to operate between nodes and CPUs, effectively avoiding the complexity associated with direct node-to-node interactions. The CPUs in this mechanism assimilate contextual features, which are used to generate output representations for individual nodes. These features are coarse-grained and high-level, and thus exhibit resilience to temporal variations of individual nodes. Furthermore, structural changes (such as adding or removing nodes) do not significantly disrupt the message-passing pathways between nodes and CPUs. New nodes can also leverage these contextual features to develop information-rich representations, thereby enhancing inductive learning capabilities. In summary, our approach demonstrates remarkable resilience to spatiotemporal variations and strong performance in out-of-distribution (OOD) environments.\\n\\n> **(2) GPUs robustness analysis**\\n\\nThe GPUs are used to perturb the CPUs' perception of nodes during the feature extraction process. Essentially, these perturbations simulate a diversified training environment characterized by spatiotemporal shifts. By exposing the model to these varied training environments, it can better perceive invariant contextual features that generalize effectively to unknown environments.\\n\\n>**(3) Spatiotemporal DRO robustness analysis**\\n\\nThe spatiotemporal DRO is specifically designed to enable the optimization process of the GPUs. It prunes relatively simple environments for the model and directs the model to optimize towards the most challenging environments along the steepest gradient. 
This focus on challenging environments is advantageous, as it enables the model to extract more robust features that contribute to improved performance in OOD scenarios.\\n\\nAs GPUs and DRO serve as the foundational methods of our proposed graph perturbation and are utilized in tandem, we have also included a dedicated paragraph in Section 4.5 of the revised manuscript to underscore the robustness of the proposed graph perturbation mechanism. The content is as follows:\\n- The GPUs introduce perturbations in the spatial interaction process, effectively generating diversified training environments. This strategy prevents the model from becoming overly reliant on a single training environment, thereby promoting the learning of more generalizable features. The incorporation of spatiotemporal DRO then compels the model to engage with the most challenging instances within the generated environments, and this exposure further enhances the model's robustness.\"}", "{\"title\": \"W4. Related work\", \"comment\": \"We sincerely apologize for any confusion caused. In fact, we further discussed related work **in Appendix Section A**, including introductions to continual learning for spatiotemporal shifts and temporal OOD learning methods. In order of their relevance to our focus on OOD problems with spatiotemporal shifts, we have sequentially introduced four related subfields.\\n\\n(1). In the main body, we first introduced the progress of **traditional spatiotemporal prediction models**, whose limitation is that their good performance can only be demonstrated in independently and identically distributed environments. \\n\\n(2). In the main body, we presented the progress in **spatiotemporal OOD learning work**. Unlike these works, our method proposes a novel spatial interaction mechanism to further enhance robustness.\\n\\n(3). In the appendix, we then discuss **the continual learning strategy for handling dynamic spatiotemporal graph data**. 
However, since these strategies require training the model with new data to adapt to shifting distributions, they share the same limitation as traditional learning methods by adhering to the i.i.d. assumption.\\n\\n(4). Finally, we introduced works related to **temporal shift learning in time series**, which focus on addressing changes in statistical features. However, they ignore the dynamic nature of spatiotemporal graph topology.\\n\\nTo avoid confusion, we have refined the related work section and added clarifications in the main text to strengthen the connection between the main content and the related work section in the appendix. Additionally, we have summarized existing work using more refined language.\"}", "{\"summary\": \"The paper introduces STOP (Spatio-Temporal Out-of-Distribution Processor), a model designed to enhance the robustness of spatiotemporal predictions in out-of-distribution (OOD) environments. Traditional spatiotemporal graph convolutional networks (STGNNs) often struggle with OOD scenarios due to their reliance on node-to-node messaging, which is highly sensitive to shifts in temporal data distributions or structural changes in graph networks. STOP overcomes these limitations by integrating a novel client-server (C2S) messaging mechanism and a graph perturbation strategy that simulates various training conditions.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The client-server messaging approach with Context Perception Units is a clever departure from traditional node-to-node interactions, reducing sensitivity to structural shifts and enhancing adaptability in OOD settings.\\n2. STOP\\u2019s design primarily relies on lightweight MLP layers, making it more computationally efficient than complex STGNNs or Transformer-based models, which often require high processing power.\\n3. 
The model was thoroughly tested across a variety of datasets, demonstrating not only generalization but also strong performance in both temporal and spatial OOD scenarios, enhancing the model\\u2019s credibility. \\n4. The ablation studies are thorough and convincing.\", \"weaknesses\": \"See the questions listed below.\", \"questions\": \"1. Adding a paragraph discussing the research motivation and broader impacts of STOP could strengthen the work, particularly in terms of situating the work within the broader context of spatiotemporal OOD challenges. Additionally, the positioning of Figure 1 alongside the introductory text makes the flow slightly disjointed. A more developed introduction with a refined layout could improve the paper\\u2019s readability and help frame the significance of STOP\\u2019s contributions.\\n\\n2. While the authors present the CPUs, GPUs, and DRO as core mechanisms in STOP, the paper could benefit from a more in-depth analysis of why each component specifically enhances spatiotemporal OOD detection. A theoretical analysis of how the graph perturbation strategy affects the model's ability to learn robust representations would provide a more solid grounding for their effectiveness and make the claims more convincing. \\n\\n3. What's the potential limitation of STOP? The authors may add a section to discuss potential trade-offs or scenarios where STOP might not perform as well compared to other approaches, which would provide a more comprehensive evaluation of the model's capabilities and limitations.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"W3. Discussion regarding the potential limitations\", \"comment\": [\"Thank you very much for your suggestions. We acknowledge the following potential limitations, which may serve as valuable directions for future research\\uff1a\", \"**Validating the Broad Impact of STOP**. 
The spatial interaction module integrated within the STOP framework is inherently generic, suggesting its potential for broader applicability. In upcoming research, we will propose replacing the graph convolutional networks utilized by other spatiotemporal backbones with the spatial interaction module to validate its effectiveness across various contexts. This initiative will help us better understand the potential value and applicability of the STOP module in a wide range of application domains.\", \"**Exploring a Wider Range of OOD Scenarios**. Current OOD problems are typically defined within the confines of single-modal data and single tasks. However, spatiotemporal data exhibits diverse modalities and varied tasks. We believe that an improved spatiotemporal OOD handler should be capable of addressing challenges such as cross-task and cross-modal processing, areas that have not been thoroughly explored in the spatiotemporal domain.\", \"**Integrating Large Language Models for zero-shot learning**. In OOD settings, accurately predicting new nodes poses a significant challenge, as these nodes have not been encountered by the model during training\\u2014commonly referred to as the zero-shot challenge. Large language models excel in this context, as their representational capabilities, developed from extensive training on massive datasets, can enhance a model's zero-shot learning ability. While this has been successfully demonstrated in the time series community, it remains relatively unexplored within the spatiotemporal domain. In future work, we plan to integrate large language models into the STOP framework to further enhance its scalability for predicting new nodes.\", \"We have added this discussion to Appendix Section D.\"]}", "{\"title\": \"W3. Nouns\", \"comment\": \"Thank you very much for your warm reminder.\\n\\nOur paper focuses solely on spatiotemporal OOD problems and does not involve federated learning scenarios. 
\\n\\nWe recognize that our abbreviations caused confusion, and we have made the following efforts to address your concerns:\\n\\n**1. CPUs \\u2192 ConAU**. This refers to Context Perception Units, to avoid any potential confusion, we will adopt the term ConPU (**Con**text **A**ware **U**nits) instead. \\n\\n**2. GPUs \\u2192 GenPU**. GPUs originally stood for Generalized Perturbation Units, and we will revise this to GenPU (**Gen**eralized **P**erturbation **U**nits). \\n\\n**3. Client-Server interaction mechanism -> Centralized interaction mechanism**. We establish a small number of perception units that engage in centralized interactions with nodes, replacing traditional node-to-node decentralized interactions, which we plan to denote as centralized interaction mechanism.\\n \\n4.$~$Furthermore, except for widely recognized abbreviations (such as OOD, MLP, or STGNN), we use full terms rather than abbreviations for other terms to avoid potential confusion.\\n\\nConsidering that these abbreviations are closely related to our method, we may not be able to implement these changes immediately in the recent revised version to avoid confusion among reviewers who may not synchronize with the abbreviation changes in a timely manner. We assure you that we will make the modifications after the discussion phase to prevent any overlap with commonly understood terms. Thank you once again for your valuable suggestions.\"}", "{\"summary\": \"This paper proposes the Spatio-Temporal OOD Processor (STOP) to enhance the generalization of spatiotemporal graph convolutional networks in out-of-distribution (OOD) environments. STOP uses spatiotemporal MLP channel mixing to separately incorporate temporal and spatial elements. It introduces Context Perception Units (CPUs) and Generalized Perturbation Units (GPUs) to capture generalizable context features and create diverse training environments, thereby improving robustness against spatiotemporal shifts. 
Additionally, a customized Distributionally Robust Optimization (DRO) is developed to further enhance generalization.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. meticulous work and comprehensive experiments: The paper presents a thorough and detailed introduction and study, with extensive experiments conducted across multiple datasets.\\n\\n2. effective illustrations: The figures and diagrams included in the paper are well-designed and enhance the reader's understanding of the mechanisms introduced.\", \"weaknesses\": \"1. excessive use of abbreviations leading to confusion: The paper employs an abundance of abbreviations, some of which overlap with commonly understood terms like CPUs and GPUs. Abbreviations should be distinctive and aid in communication, rather than repurposing established terms in a way that might confuse the reader.\\n\\n2. lack of original innovation and theoretical justification: The proposed method appears to be a combination of existing modules and techniques borrowed from other works, giving an impression of an engineering patchwork rather than introducing novel contributions. The authors have not provided sufficient theoretical explanations to justify why their design choices effectively address the research problem.\\n\\n3. lack of discussion regarding the potential limitations of the proposed method.\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"W3. Discussion regarding the potential limitations\", \"comment\": [\"Thank you very much for your advice. We acknowledge the following potential limitations, which may serve as valuable directions for future research\\uff1a\", \"**Validating the Broad Impact of STOP**. 
The spatial interaction module integrated within the STOP framework is inherently generic, suggesting its potential for broader applicability. In upcoming research, we will propose replacing the graph convolutional networks utilized by other spatiotemporal backbones with the spatial interaction module to validate its effectiveness across various contexts. This initiative will help us better understand the potential value and applicability of the STOP module in a wide range of application domains.\", \"**Exploring a Wider Range of OOD Scenarios**. Current OOD problems are typically defined within the confines of single-modal data and single tasks. However, spatiotemporal data exhibits diverse modalities and varied tasks. We believe that an improved spatiotemporal OOD handler should be capable of addressing challenges such as cross-task and cross-modal processing, areas that have not been thoroughly explored in the spatiotemporal domain.\", \"**Integrating Large Language Models for zero-shot learning**. In OOD settings, accurately predicting new nodes poses a significant challenge, as these nodes have not been encountered by the model during training\\u2014commonly referred to as the zero-shot challenge. Large language models excel in this context, as their representational capabilities, developed from extensive training on massive datasets, can enhance a model's zero-shot learning ability. While this has been successfully demonstrated in the time series community, it remains relatively unexplored within the spatiotemporal domain. In future work, we plan to integrate large language models into the STOP framework to further enhance its scalability for predicting new nodes.\", \"We have added this discussion to Appendix Section D.\"]}", "{\"title\": \"Feedback on Author's Revisions\", \"comment\": \"Thank you for your prompt response, which includes extensive additional experiments. 
The results and analyses address my concerns and questions to a considerable extent, and I am willing to raise my rating for the paper.\"}", "{\"title\": \"W1. Node-to-node messaging\", \"comment\": \"As explained in the introduction of the paper, node-to-node messaging mechanisms have the following limitations when dealing with spatiotemporal shifts: **Limitation.1**. Coupled with the aggregation paths used during training (i.e., graph topology), structural shifts lead to inaccurate aggregation. **Limitation.2**. Node representation errors flood throughout the entire graph, making it sensitive to temporal shifts of nodes. **Limitation.3**. Inefficient induction ability for newly added nodes. Next, we explain these three limitations and the limited role of node-to-node mechanisms in OOD scenarios.\\n\\n### **Limitation.1**\\n\\nUsing the SD dataset as an example, we first select the test data of 550 nodes, input this data into four backbones, and then extract their output representations from the first layer that uses the node-to-node mechanism, denoted as $\\alpha$. Second, we remove 55 (10%) of these 550 nodes, add 55 new nodes, and feed the new data into the models again. Finally, we extract the output representations from the same layer, denoted as $\\beta$. \\n\\nAfter aligning the common nodes (495 nodes) between $\\alpha$ and $\\beta$, we calculate the representation error percentage using the following formula: $\\frac{||\\alpha-\\beta||}{||\\alpha||}\\times100$\\%, where $||\\cdot||$ represents the Euclidean distance. 
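For concreteness, the error-percentage computation above can be sketched as follows (a minimal numpy illustration with toy data; the shapes and names are ours, not the paper's):

```python
import numpy as np

def representation_error_pct(alpha: np.ndarray, beta: np.ndarray) -> float:
    """Error percentage ||alpha - beta|| / ||alpha|| * 100 over the
    representations of the nodes common to both runs (Euclidean/Frobenius norm)."""
    return float(np.linalg.norm(alpha - beta) / np.linalg.norm(alpha) * 100.0)

# Toy data: 495 aligned nodes with 64-dimensional representations.
rng = np.random.default_rng(0)
alpha = rng.normal(size=(495, 64))                 # before the node set changes
beta = alpha + 0.05 * rng.normal(size=(495, 64))   # after the node set changes
error = representation_error_pct(alpha, beta)      # a few percent at this noise level
```

A robust interaction mechanism should keep this quantity small when the node set or node distributions change.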
The representation errors and final predicted performance gap are shown in the following table:\\n\\n| **Model** | **GWNet** | **STGCN** | **STAEformer** | **D2STGNN** |**Ours**|\\n|:-:|:-:|:-:|:-:|:-:|:-:|\\n| Error percentage | 8.71% | 6.64%| 12.96%| 11.81% |**2.68%**|\\n|Performance gap| -25.47% | -14.71% |-20.41%|-32.63%|**-1.04%**|\\n|||\\n\\nThe error percentage results show that structural shifts affect the coupled aggregation paths of node-to-node messaging mechanisms, thereby impacting their accuracy in representing the entire graph. In contrast, our proposed interaction mechanism is more robust.\\n\\n---\\n---\\n### **Limitation.2**\\n\\nUsing the SD dataset as an example again, we randomly select 30% of the 550 nodes and add random noise to their data to simulate temporal shifts of nodes; the resulting representation is denoted as $\\gamma$. The errors between $\\alpha$ and $\\gamma$ are shown:\\n\\n|**Model**|**GWNet**| **STGCN** | **STAEformer** | **D2STGNN** |**Ours**|\\n|:--:|:--:|:--:|:--:|:--:|:--:|\\n| Error percentage| 2.35%| 7.29%| 6.13% |9.17%|**1.23%**|\\n|Performance Gap| -7.92%| -19.87% |-4.34%|-15.05%|**-0.83%**|\\n|||\\n\\nWhen the temporal distribution of nodes changes, these models cannot accurately represent those nodes, and the errors also flood to the entire graph through the message passing mechanism, thereby degrading the performance of the entire graph. \\n\\n---\\n\\n### **Limitation.3**\\n\\nThe weak inductive learning capability of node-to-node messaging limits the model's ability to accurately describe untrained nodes [1]. However, in OOD (Out-Of-Distribution) scenarios, new nodes frequently emerge. In Table 3 of our paper, we compared STOP with other models, and it clearly shows that our model has better inductive capability, with improvements of up to 15.07%.\", \"ref\": \"[1] Hamilton W, Ying Z, et al. Inductive representation learning on large graphs[J]. 
Advances in neural information processing systems, 2017.\\n\\n---\\n### **Node-to-node interaction vs. Ours**\\n\\nWe used two backbones: STGCN and STAEformer. The first utilizes graph convolution as node-to-node interaction, while the latter uses the self-attention mechanism for node-to-node interaction. We removed their node-to-node interaction layers and named these variants '-graph'. Additionally, we replaced their node-to-node interaction with our spatial interaction mechanism, denoting these variants '+ Ours'. We use the SD and KnowAir datasets with the OOD settings in our paper, and the performance results are shown in the following table:\\n\\n| | | SD| | | | KnowAir | |\\n|:----:|:--:|:--:|:--:|--|-----|:--:|:--:|\\n| Model| MAE| RMSE | MAPE | |MAE| RMSE| MAPE |\\n| STGCN| 25.72 | 40.03| 18.21 | |29.49 | 40.93| 63.85 |\\n| STGCN-graph| 25.45 | 39.62| 17.98 | |26.18 | 38.03| 55.75 |\\n| STGCN+Ours| **24.87** | **38.98**| **17.65** | |**25.44** | **37.42** | **52.80** |\\n|| | | | ||| |\\n| STAEformer | 26.20 | 41.18 | 18.39 | |27.25 | 38.93 | 56.48 |\\n| STAEformer-graph | 25.80 | 40.84 | 17.45 | |25.82 | 37.28 | 55.65 |\\n| STAEformer+Ours | **24.65** | **38.46** | **17.20** | |**25.46** | **37.25** | **55.04** |\\n||||\\n\\nWe can observe that after removing the node-to-node interaction mechanism, these variants surprisingly show better generalization performance. **This demonstrates the limited (or even counterproductive) effect of node-to-node mechanisms.** Meanwhile, our proposed spatial interaction module brings performance improvements, demonstrating that our proposed module is more effective than node-to-node interaction.\\n\\nSummary. Our method effectively addresses these three limitations of traditional node-to-node messaging mechanisms and achieves better robustness for the spatiotemporal OOD problem.\"}", "{\"title\": \"W3. 
The reason to choose MLP\", \"comment\": \"Sequential models like LSTM or GRU lack the ability to model spatial features and are unsuitable for handling graph-structured data, while popular node-interaction structures like GCN and Transformer have limitations in OOD scenarios. We therefore chose the MLP: this model is lightweight, which **not only provides high computational efficiency but also helps prevent overfitting to the training data, thereby achieving higher generalization performance**. Moreover, thanks to our careful utilization of the MLP, we have enabled it to achieve sufficient spatiotemporal modeling capability.\\n\\nSpecifically, in the third paragraph of the introduction, we explain that node-to-node messaging mechanisms, as core elements of popular GCN/Transformer structures, struggle to effectively handle spatiotemporal changes in OOD scenarios. This is because the graph knowledge learned through this propagation mechanism is coupled with the training graph topology. When the test environment undergoes changes in graph topology, alterations in feature aggregation paths hinder the model's ability to effectively represent the graph.\\n\\nTo further clarify these concerns, we design two variants of the temporal module in STOP, replacing the MLP with a TCN or an LSTM for temporal modeling along the time dimension. 
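For intuition only, here is a minimal sketch of applying a weight-shared MLP along the time axis of each node's series; the layer sizes and all names are our own illustrative assumptions, not the exact STOP temporal module:

```python
import numpy as np

def temporal_mlp(x: np.ndarray, w1, b1, w2, b2) -> np.ndarray:
    """Two-layer MLP applied along the time axis only, with weights shared
    across nodes and channels. x: (num_nodes, channels, T_in) -> (num_nodes,
    channels, T_out). No node-to-node aggregation is involved, so adding or
    removing nodes leaves the parameters untouched (the layer stays inductive)."""
    h = np.maximum(x @ w1 + b1, 0.0)   # ReLU over the time-mixing projection
    return h @ w2 + b2

rng = np.random.default_rng(0)
T_in, hidden, T_out = 12, 32, 12
w1, b1 = 0.1 * rng.normal(size=(T_in, hidden)), np.zeros(hidden)
w2, b2 = 0.1 * rng.normal(size=(hidden, T_out)), np.zeros(T_out)
x = rng.normal(size=(550, 1, T_in))    # e.g. 550 sensors, one channel
y = temporal_mlp(x, w1, b1, w2, b2)    # shape (550, 1, 12)
```

Because the layer never mixes across nodes, it avoids the topology coupling discussed above; this is one reason a time-axis MLP can generalize under structural shifts.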
Experimental results show that these variants exhibit poor generalization performance, and our competitive performance is attributed to our innovative application of the MLP, whose temporal modeling capability surpasses that of these sequential models.\\n\\n| Variant | | **SD** | | | **KnowAir** | |\\n|-------------------|-------|-------|-------|---------|-------|-------|\\n| | MAE | RMSE | MAPE | MAE | RMSE | MAPE |\\n| MLP \u2192 TCN | 24.23 | 37.99 | 17.66 | 25.21 | 37.06 | 52.65 |\\n| MLP \u2192 LSTM | 24.50 | 38.51 | 16.97 | 25.46 | 38.65 | 52.88 |\\n| Ours | **23.79** | **37.93** | **16.23** | **24.77** | **36.77** | **51.01** |\\n||||||\"}", "{\"title\": \"Clarification on W1\", \"comment\": \"Dear Reviewer BojP,\\n\\nThank you very much for your valuable comments, which are crucial to the improvement of our paper. We will clarify your concerns point by point.\\n\\n\\n > **W1. (a) Another paragraph for discussion**\\n\\nBased on your suggestions, we added a discussion paragraph in Appendix D to further emphasize our motivation and highlight the impact of our method on OOD spatiotemporal learning. For your convenience, the added paragraph is as follows:\\n\\n- The effectiveness of traditional spatiotemporal prediction models is typically demonstrated only in test environments very similar to the training environment. While some research on spatiotemporal OOD challenges has recognized the problems caused by distribution changes due to spatiotemporal variations and proposed various strategies, both traditional models' and OOD learning models' reliance on node-to-node global interaction mechanisms limits their generalization performance in the face of spatiotemporal changes, as these interaction mechanisms are coupled with the training graph structure and flood representation errors. To address this inherent limitation, we introduce an innovative spatio-temporal interaction mechanism to replace traditional node-to-node approaches. 
This new mechanism incorporates CPU units that can sense contextual features from nodes, helping maintain high generalization in unknown environments. Additionally, we designed graph perturbation mechanisms to further enhance robustness. Our approach has been validated across eight OOD datasets, showing a +17.01% performance improvement.\\n\\n- More importantly, our research findings provide valuable insights for future OOD researchers: (1) The core message passing mechanisms in GCN and Transformer are limited in OOD scenarios, suggesting the need to explore alternatives beyond traditional GCN/Transformer sequential model architectures; (2) Existing models follow a stacked architecture, which lacks flexibility for OOD scenarios; (3) Lightweight but powerful architectures (such as MLPs) may be more suitable for OOD learning, as complex GCN or Transformer advanced backbones might overfit to the training environment and compromise their generalization ability. \\n\\n> **W1 (b). Figure 1 Layout**\\n\\nThank you for your suggestion. In the revised version, we modified the layout of Figure 1, placing it at the connection point between the two paragraphs to avoid breaks in the reading flow.\"}", "{\"summary\": \"This paper presents the STOP model, designed to address OOD shifts in spatiotemporal prediction tasks. STOP employs an MLP channel-mixing backbone with robust interaction mechanisms, including C2S messaging and graph perturbation, to enhance resilience to spatiotemporal shifts. The C2S messaging mechanism utilizes CPUs for feature interactions, reducing dependence on global node-to-node messaging, while GPUs simulate spatiotemporal shifts to create diverse training environments. A customized DRO is also integrated to improve generalization.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This paper addresses the important and meaningful problem of spatiotemporal shifts. 
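To make the contrast with node-to-node messaging concrete, here is a minimal numpy sketch of nodes interacting only through a small set of context units; the attention form and every name below are our own assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def context_unit_interaction(h: np.ndarray, units: np.ndarray) -> np.ndarray:
    """Nodes exchange information only through K learnable context units,
    never pairwise. h: (N, d) node features; units: (K, d) context units.
    Cost is O(N*K) instead of O(N^2), and no adjacency matrix is needed."""
    gather = softmax(units @ h.T, axis=-1)    # (K, N): units attend over nodes
    ctx = gather @ h                          # (K, d): per-unit context summary
    scatter = softmax(h @ ctx.T, axis=-1)     # (N, K): nodes read the units back
    return h + scatter @ ctx                  # residual update, shape (N, d)

rng = np.random.default_rng(0)
units = rng.normal(size=(8, 64))              # K = 8 context units
out = context_unit_interaction(rng.normal(size=(550, 64)), units)   # (550, 64)
```

Because the exchange depends on no graph topology and its parameters are independent of N, the same layer applies unchanged when edges shift or new nodes appear.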
By introducing a lightweight MLP layer to model spatial and temporal dynamics, it effectively mitigates out-of-distribution (OOD) challenges while maintaining efficiency.\", \"weaknesses\": \"See questions part for more details.\", \"questions\": \"1. Section 5.4 only examines the hyperparameters \\\\( K \\\\) and \\\\( M \\\\), but since this paper focuses on model design, a more thorough exploration of hyperparameter choices (e.g., those discussed in Section 5.1) would be beneficial. Specifically, assessing the ease of finding optimal hyperparameters and their generalizability across downstream tasks is necessary. I feel the current discussion on these points is insufficient. Additionally, was the choice of \\\\( M \\\\) analyzed in Section 5.4 without being mentioned in Section 5.1?\\n\\n2. The STOP model incorporates various component designs, including CPU and GPU interactions. While ablation studies that remove modules individually are common for validating component effectiveness, they often focus solely on the independent impact of specific components and may overlook interactions between modules. I suggest designing more comprehensive experiments, such as dual (or multiple) module ablations, to observe potential synergies or interdependencies among components.\\n\\n3. This paper chooses an MLP as the backbone, differing fundamentally from the graph convolution architectures used in prior work. Aside from being more lightweight, what clear advantages does the MLP offer for addressing these types of problems?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"W1. Abbreviations\", \"comment\": \"Dear Reviewer LDYw,\\n\\nThank you very much for your valuable comments, which are essential for improving our paper. We would like to address your concerns point by point.\\n\\n> **Abbreviations**\\n\\n**(1). 
CPUs \\u2192 ConAU.** This refers to Context Perception (Aware) Units, to avoid any potential confusion, we will adopt the term ConPU (**Con**text **A**ware **U**nits) instead. \\n\\n**(2). GPUs \\u2192 GenPU.** GPU originally stood for Generalized Perturbation Units, and we will revise this to GenPU (**Gen**eralized **P**erturbation **U**nits). \\n\\n(3). Furthermore, except for widely recognized abbreviations (such as OOD, MLP, or STGNN), we use full terms rather than abbreviations for other terms to avoid potential confusion.\\n\\n\\nConsidering that these abbreviations are closely related to the contributions of our method, we may not be able to implement these changes immediately in the revised version to avoid confusion among reviewers who may not synchronize with the abbreviation changes in a timely manner. We assure you that we will make the necessary modifications after the discussion phase to prevent any overlap with commonly understood terms. Thank you once again for your valuable suggestions.\"}", "{\"comment\": \"Dear Reviewer LDYw,\\n\\nWe sincerely appreciate your acknowledgment and encouraging feedback. Interacting with you has been both enjoyable and invaluable, significantly enhancing the quality of our paper. Thank you once again for your time, effort, and insightful comments.\\n\\nWarm regards,\\n\\nThe authors.\"}", "{\"title\": \"W2 (2). Theoretical analysis and explanation.\", \"comment\": \"We theoretically analyzed STOP's generalization performance. Since STOP's optimization objective belongs to the distributionally robust optimization class, which exhibits good generalization properties. Note that distributionally robust optimization class is a general term for optimization objectives that satisfy specific conditions - our contribution lies in how to implement optimization strategies that meet these conditions in the spatiotemporal OOD problem. 
First, we will introduce what constitutes a distributionally robust optimization class and the necessary conditions for membership, then analyze its beneficial properties, and finally extend these concepts to STOP.\\n\\n> **What is the distributionally robust optimization class?**\\n\\nDistributionally robust optimization [1] refers to a class of optimization objectives that aims to optimize for the worst-case scenario within a certain range of all possible data distributions, and the mathematical formula is expressed as:\\n$$\\n\\arg\\min_{f}\\sup_{e\\in\\mathcal{E}}\\lbrace\\mathbb{E}_{(\\mathbf{X},\\mathbf{Y})\\sim p(\\mathcal{X},\\mathcal{Y}|e)}\\[\\mathcal{L}(f(\\mathbf{X}),\\mathbf{Y})\\]:\\mathcal{D}\\left(e,e^{*}\\right)\\leq\\rho\\rbrace\\n$$\\n\\nwhere $f$ is the function being optimized, usually a deep neural network with learnable parameters. $\\mathcal{D}\\left(\\cdot,\\cdot\\right)$ is the distribution distance metric, which is used to calculate the distance between distributions. $e^*$ is the training environment covering multiple data distributions. $\\rho$ is a hyperparameter.\\n\\n**Mark**. 
This equation tells us that if an optimization objective satisfies: (1) it optimizes over diverse training environments, (2) these environments are constrained, and (3) it emphasizes the challenging environments during learning, then it belongs to the distributionally robust optimization class and possesses the following beneficial properties.\\n\\n> **Properties of distributionally robust optimization**\\n\\nCompared to the IID-only condition of Empirical Risk Minimisation, distributionally robust optimization explores a certain range of challenging training data distributions. Mathematically, distributionally robust optimization is equivalent to adding variance regularization to the standard Empirical Risk Minimisation [1],\\n\\n$$\\n\\begin{align}\\n&\\sup_{e\\in\\mathcal{E}}\\lbrace\\mathbb{E}_{(\\mathbf{X},\\mathbf{Y})\\sim p(\\mathcal{X},\\mathcal{Y}|e)}\\[\\mathcal{L}(f(\\mathbf{X}),\\mathbf{Y})\\]:\\mathcal{D}\\left(e,e^{*}\\right)\\leq\\rho\\rbrace \\\\\\\\\\n\\geq~&\\mathbb{E}_{(\\mathbf{X},\\mathbf{Y})\\sim p(\\mathcal{X},\\mathcal{Y}|e^{*})}\\[\\mathcal{L}(f(\\mathbf{X}),\\mathbf{Y})\\] + \\sqrt{2\\rho\\,\\text{Var}_{(\\mathbf{X},\\mathbf{Y})\\sim p(\\mathcal{X},\\mathcal{Y}|e^{*})}\\[\\mathcal{L}(f(\\mathbf{X}),\\mathbf{Y})\\]}.\\n\\end{align}\\n$$\\n\\n\\nCompared to standard empirical loss functions, variance regularization has stricter bounds. Therefore, the distributionally robust optimization class mathematically provides more rigorous constraints than using empirical loss functions alone in OOD environments, preventing the model from over-relying on training data [1]. 
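As a toy illustration (our own construction, not the paper's training code) of the worst-case optimization this class implies: sample several constrained perturbation environments, evaluate the loss in each, and keep the hardest one.

```python
import numpy as np

def sample_mask(n_nodes: int, s: int, rng) -> np.ndarray:
    """Binary perturbation mask with exactly s perturbed nodes (||g||_0 = s),
    i.e. one constrained environment."""
    g = np.zeros(n_nodes)
    g[rng.choice(n_nodes, size=s, replace=False)] = 1.0
    return g

def worst_case_loss(loss_fn, n_envs: int, n_nodes: int, s: int, rng) -> float:
    """Approximate the sup in the objective over a sampled set of constrained
    environments by keeping the largest (hardest) loss."""
    return max(loss_fn(sample_mask(n_nodes, s, rng)) for _ in range(n_envs))

# Toy loss where a perturbed node contributes double error.
rng = np.random.default_rng(0)
per_node_err = rng.uniform(size=100)
loss = lambda g: float(np.mean(per_node_err * (1.0 + g)))
hard = worst_case_loss(loss, n_envs=8, n_nodes=100, s=20, rng=rng)
```

Optimizing against `hard` rather than the average loss is what yields the variance-regularization effect discussed above.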
This enables the model to flexibly adapt to different environments, improving its generalization performance in unknown environments.\\n\\n> **STOP as a member of the distributionally robust optimization class**\\n\\nWe will demonstrate that the optimization objective of STOP belongs to the distributionally robust optimization class, inheriting its good properties. The optimization objective of STOP is as follows: \\n$$\\n\\min_f\\sup_{\\boldsymbol{g}\\in\\mathbb{R}^N}\\mathbb{E}_{(\\mathbf{X},\\mathbf{Y})\\sim(\\mathcal{X},\\mathcal{Y}|e^*)}\\left[\\mathcal{L}\\left(f\\left(\\mathbf{X}\\right),\\mathbf{Y};\\boldsymbol{g}\\right)\\right],\\quad\\mathrm{s.t.} ||\\widetilde{\\boldsymbol{g}}||_0=s\\in\\left(0,N\\right).\\n$$\\nIn fact, this goal is aligned with the necessary conditions of the distributionally robust optimization class explained in the Mark above.\\n\\n- **Diverse training environments**. STOP creates a diverse training environment by adding perturbations through a graph perturbation mechanism.\\n\\n- **Applying constraints to the generation of environments**. Our perturbation process follows polynomial distribution sampling, and we strictly control the perturbation ratio, which imposes constraints on the generated environments.\\n\\n- **Exploring challenging environments**. We select the environment with the largest gradients during training for optimization, encouraging the model to be exposed to challenging environments.\\n\\nIn summary, our optimization strategy belongs to distributionally robust optimization, and therefore inherits its good generalization properties. That is, our optimization strategy can theoretically achieve better model generalization performance.\", \"ref\": \"[1] John Duchi and Hongseok Namkoong. Variance-based regularization with convex objectives. Journal of Machine Learning Research, 20(68):1\u201355, 2019.\"}", "{\"title\": \"W2 (2). 
Theoretical explanation\", \"comment\": \"We theoretically analyzed STOP's generalization performance. Since STOP's optimization objective belongs to the distributionally robust optimization class, which exhibits good generalization properties. Note that distributionally robust optimization class is a general term for optimization objectives that satisfy specific conditions - our contribution lies in how to implement optimization strategies that meet these conditions in the spatiotemporal OOD problem. First, we will introduce what constitutes a distributionally robust optimization class and the necessary conditions for membership, then analyze its beneficial properties, and finally extend these concepts to STOP.\\n\\n> **What is distributionally robust optimization class?**\\n\\nDistributionally robust optimization [1] refers to a class of optimization objectives that aims to optimize for the worst-case scenario within a certain range of all possible data distributions, and the mathematical formula is expressed as:\\n$$\\n\\\\arg\\\\min_{f}\\\\sup_{e\\\\in\\\\mathcal{E}}\\\\lbrace\\\\mathbb{E}_{(\\\\mathbf{X},\\\\mathbf{Y})\\\\sim p(\\\\mathcal{X},\\\\mathcal{Y}|e)}\\\\[\\\\mathcal{L}(f(\\\\mathbf{X}),\\\\mathbf{Y})\\\\]:\\\\mathcal{D}\\\\left(e,e^{*}\\\\right)\\\\leq\\\\rho\\\\rbrace\\n$$\\n\\nwhere $f$ is the function we optimized, usually a deep neural network with learnable parameters. $\\\\mathcal{D}\\\\left(\\\\cdot,\\\\cdot\\\\right)$ is the distribution distance metric, which is used to calculate the distance between distributions. $e^*$ is the training environment covering multiple data distributions. $\\\\rho$ is a hyperparamer.\\n\\n**Mark**. 
This equation tells us that if an optimization objective satisfies: (1) it optimizes over diverse training environments, (2) these environments are constrained, and (3) it emphasizes the challenging environments during learning, then it belongs to the distributionally robust optimization class and possesses the following beneficial properties.\\n\\n> **Properties of distributionally robust optimization**\\n\\nCompared to the IID-only condition of Empirical Risk Minimisation, distributionally robust optimization explores a certain range of challenging training data distributions. Mathematically, distributionally robust optimization is equivalent to adding variance regularization to the standard Empirical Risk Minimisation [1],\\n\\n$$\\n\\begin{align}\\n&\\sup_{e\\in\\mathcal{E}}\\lbrace\\mathbb{E}_{(\\mathbf{X},\\mathbf{Y})\\sim p(\\mathcal{X},\\mathcal{Y}|e)}\\[\\mathcal{L}(f(\\mathbf{X}),\\mathbf{Y})\\]:\\mathcal{D}\\left(e,e^{*}\\right)\\leq\\rho\\rbrace \\\\\\\\\\n\\geq~&\\mathbb{E}_{(\\mathbf{X},\\mathbf{Y})\\sim p(\\mathcal{X},\\mathcal{Y}|e^{*})}\\[\\mathcal{L}(f(\\mathbf{X}),\\mathbf{Y})\\] + \\sqrt{2\\rho\\,\\text{Var}_{(\\mathbf{X},\\mathbf{Y})\\sim p(\\mathcal{X},\\mathcal{Y}|e^{*})}\\[\\mathcal{L}(f(\\mathbf{X}),\\mathbf{Y})\\]}.\\n\\end{align}\\n$$\\n\\n\\nCompared to standard empirical loss functions, variance regularization has stricter bounds. Therefore, the distributionally robust optimization class mathematically provides more rigorous constraints than using empirical loss functions alone in OOD environments, preventing the model from over-relying on training data [1]. 
This enables the model to flexibly adapt to different environments, improving its generalization performance in unknown environments.\\n\\n> **STOP as a member of the distributionally robust optimization class**\\n\\nWe will demonstrate that the optimization objective of STOP belongs to the distributionally robust optimization class, inheriting its good properties. The optimization objective of STOP is as follows: \\n$$\\n\\min_f\\sup_{\\boldsymbol{g}\\in\\mathbb{R}^N}\\mathbb{E}_{(\\mathbf{X},\\mathbf{Y})\\sim(\\mathcal{X},\\mathcal{Y}|e^*)}\\left[\\mathcal{L}\\left(f\\left(\\mathbf{X}\\right),\\mathbf{Y};\\boldsymbol{g}\\right)\\right],\\quad\\mathrm{s.t.} ||\\widetilde{\\boldsymbol{g}}||_0=s\\in\\left(0,N\\right).\\n$$\\nIn fact, this goal is aligned with the necessary conditions of the distributionally robust optimization class explained in the Mark above.\\n\\n- **Diverse training environments**. STOP creates a diverse training environment by adding perturbations through a graph perturbation mechanism.\\n\\n- **Applying constraints to the generation of environments**. Our perturbation process follows polynomial distribution sampling, and we strictly control the perturbation ratio, which imposes constraints on the generated environments.\\n\\n- **Exploring challenging environments**. We select the environment with the largest gradients during training for optimization, encouraging the model to be exposed to challenging environments.\\n\\nIn summary, our optimization strategy belongs to distributionally robust optimization, and therefore inherits its good generalization properties. That is, our optimization strategy can theoretically achieve better model generalization performance.\", \"ref\": \"[1] John Duchi and Hongseok Namkoong. Variance-based regularization with convex objectives. 
Jour-\\nnal of Machine Learning Research, 20(68):1\\u201355, 2019.\"}", "{\"title\": \"Weakness clarification\", \"comment\": \"Dear Reviewer 4myW,\\n\\nWe greatly appreciate your professional and valuable feedback, which is crucial for improving the quality of our paper. We would like to address your concerns point by point. First, please allow us to provide an initial clarification regarding your summary of weaknesses.\\n\\n> **Although this method avoids the problem of structural distribution drift by discarding the original structural information and modeling the global correlation between features, it wastes data information to a certain extent.** \\n\\nIndeed, we don't use graph structural information in a fine-grained way; unlike traditional models, we don't perform thorough interaction through node-to-node messaging along the graph structural topology. However, we don't consider this a waste. On the contrary, it is precisely such careful use of graph structural information that causes the insufficient generalization capability of these models in structural shifts.\\n\\nTo illustrate this point, we conducted an experiment using two OOD dataset used in the paper: SD and KnowAir. We removed the graph convolution operations in STGCN and self-attention operations in STAEformer to eliminate the precise use of graph structural information, naming these variants STGCN-graph and STAEformer-graph. We replace their original components with our proposed interaction mechanism, and define the variables as STGCN+Ours and STAEformer+Ours. 
We report the performance of both variants and the original models in OOD scenarios in the following table:\\n\\n| | | SD| | | | KnowAir | |\\n|:----:|:--:|:--:|:--:|--|-----|:--:|:--:|\\n| Model| MAE| RMSE | MAPE | |MAE| RMSE| MAPE |\\n| STGCN| 25.72 | 40.03| 18.21 | |29.49 | 40.93| 63.85 |\\n| STGCN-graph| 25.45 | 39.62| 17.98 | |26.18 | 38.03| 55.75 |\\n| STGCN+Ours| **24.87** | **38.98**| **17.65** | |**25.44** | **37.42** | **52.80** |\\n|| | | | ||| |\\n| STAEformer | 26.20 | 41.18 | 18.39 | |27.25 | 38.93 | 56.48 |\\n| STAEformer-graph | 25.80 | 40.84 | 17.45 | |25.82 | 37.28 | 55.65 |\\n| STAEformer+Ours | **24.65** | **38.46** | **17.20** | |**25.46** | **37.25** | **55.04** |\\n||||\\n\\nThe above results demonstrate that their meticulous utilization of graph structural information actually reduced generalization performance. Therefore, we abandoned the detailed utilization of graph information and proposed a more robust spatial interaction mechanism. As the table shows, our interaction component is more effective.\"}", "{\"comment\": \"Thanks to the authors for their informative response. It is interesting to find that discarding the original structure is beneficial. After reading the other reviewers' comments, I will maintain my original positive score.\"}", "{\"title\": \"W2 (1). Original innovation and theoretical justification.\", \"comment\": \"Our contribution lies in developing a new spatial interaction mechanism equipped with a novel graph perturbation mechanism and a spatiotemporal DRO optimization strategy, addressing a limitation that has not yet been discussed: the node-to-node interaction mechanism that existing spatiotemporal models rely on. **This mechanism is not an engineering patchwork; instead, each component is configured with high synergy and purposeful motivation in addressing OOD challenges**. Below, we explain the design logic of each component.\\n\\n(1) Unexplored motivation. 
We argue that the node-to-node interaction mechanism used in existing spatiotemporal prediction models is the potential reason for their poor robustness to OOD distributions, as it is sensitive to spatiotemporal changes. This limitation has not been explored.\\n\\n(2) Novel core methodology. We attempt to design a robust spatial interaction mechanism that incorporates context-aware units for node interaction, rather than through direct node-to-node interaction, thereby enhancing resilience to spatiotemporal shifts. These context-aware units can sense high-level contextual features from nodes.\\n\\n(3) Effective enhancement strategies. We recognize that diverse training environments enable models to extract robust contextual features that can generalize to unknown scenarios; however, data-level perturbations are computationally expensive. Therefore, we propose a novel graph perturbation mechanism that perturbs the aforementioned spatial interaction process during training, effectively modeling diverse environments.\\n\\n(4) Customized optimization strategy. Recognizing that excessive training environments can complicate model learning, we introduce a specialized optimization strategy that carefully exposes the model to challenging environments to enhance robustness.\\n\\nIn summary, we systematically build a comprehensive OOD learning framework, progressing from fundamental observations to sophisticated technical innovations.\\n\\nTo avoid any further confusion, we will refine the method overview in the Introduction to emphasize our key contributions clearly.\"}", "{\"summary\": \"In this paper, the authors focus on the problem of spatial-temporal out-of-distribution learning. Specifically, they model temporal and spatial information separately and combine them with a DRO loss for prediction. At the same time, corresponding modules are designed to capture temporal and spatial correlations. 
For the structural drift problem, this method attempts to model the global correlation between node features. Empirically, experiments have been conducted on a large number of datasets to verify their effectiveness.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The research question is critical\\n2. The technical description is clear\\n3. Comparative experiments with SOTA methods were conducted on multiple datasets\", \"weaknesses\": \"In general, from my point of view, this is good work in terms of technology. However, the connection between the problem scenario and the technology used is weak, and there is a lack of certain verification and explanation. Although this method avoids the problem of structural distribution drift by discarding the original structural information and modeling the global correlation between features, it wastes data information to a certain extent. At the same time, this paper lacks ablation experiments on DRO. Specifically:\\n\\n1) The author mentioned that the reason why STGNN performs poorly under distribution drift is: global node-to-node messaging for spatiotemporal interaction. Is this verified? Can the attention propagation method mentioned in this paper be avoided?\\n\\n2) In the field of spatial-temporal graph learning, many operations model temporal information and spatial information separately. At the same time, why can this operation alleviate the error accumulation mentioned? Further explanation is needed here, or corresponding references are provided.\\n\\n3) Some nouns are not named properly. For example, CPUs and GPUs are proper nouns. Also, Client-Server (C2S), is there any scenario involving federated learning here?\\n\\n4) The description of related work is too simple and unsystematic.\\n\\n5) In the methods section, where is the low-rank reflected? Is it the low-rank of the similarity matrix? What is its advantage in this scenario? 
This needs explanation, and the intuition of using this technique is not clear.\\n\\n6) There are no ablation experiments for the DRO learning strategy.\", \"questions\": \"Please see the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"W2. Multiple module ablation experiment.\", \"comment\": \"We conduct thorough ablation experiments to evaluate the effectiveness of each component. The variants we created are shown below.\\n|||\\n|:--------:|:---------:|\\n| **Variant** | **Definition** |\\n| w/o decom | Remove the decoupling mechanism |\\n| w/o prompt | Remove the temporal prompt learning |\\n| w/o (decom & prompt) | Remove the decoupling mechanism and temporal prompt learning |\\n|w/o $\\\\mathbf{Y}_t$ | Remove the temporal prediction component|\\n|w/o $\\\\mathbf{Y}_s$| Remove the spatiotemporal prediction component|\\n| w/o DRO | Remove spatiotemporal DRO |\\n| w/o (GPU&DRO) | Remove spatiotemporal DRO and GPU |\\n| w/o (GPU&DRO)+RandomDrop | Remove spatiotemporal DRO and GPU and randomly mask 20% training nodes to simulate temporal and spatial shifts |\\n| w/o LA | Use Naive self-attention mechanism to replace Low-rank attention |\\n| w/o LA+GPU | Add GPU term with the variant w/o LA |\\n| w/o LA +GPU +DRO | Add GPUs and spatiotemporal DRO with the variant w/o LA |\\n| w/o LA+RandomDrop | Randomly mask 20% training nodes and then train variant w/o LA |\\n| w/o CPU | Completely remove the spatial interaction mechanism |\\n||||||||\\n\\nThe experiments on two datasets are shown below. Regarding the C\\\\&S messaging mechanism, the \\\"w/o CPU\\\" variant, which removes the spatial interaction module, resulted in a significant increase in error, indicating that the spatial interaction is still necessary in OOD scenarios. 
The \\\"w/o LA\\\" variant, which removes the low-rank attention mechanism in the C\\\\&S spatial interaction module, performed poorly in prediction, as the traditional node-to-node messaging mechanism is less robust to spatio-temporal shifts. The \\\"w/o LA+DRO\\\" variant performed better than the \\\"w/o LA+RandomDrop\\\" variant, demonstrating that the proposed graph perturbation mechanism is more effective than directly perturbing the dataset to generate diverse training environments in helping the model extract robust representations.\\n\\nThe \\\"w/o DRO\\\" variant exhibited a larger prediction error, suggesting that the inability to effectively optimize the deployed GPU mask matrix increased the complexity of the model's learning process. The \\\"w/o (GPU\\\\&DRO)\\\" variant also showed a considerable increase in error, further highlighting the crucial importance of the proposed graph perturbation mechanism in enhancing the model's robustness, as it allows the model to learn resilient representations from the perturbed environments.\\n\\nThese ablation studies can demonstrate the positive impact of each designed component on enhancing the overall performance of the model in out-of-distribution scenarios.\\n\\n| | | | | | | |\\n|:------------------------------:|:------:|:------:|:------:|:-------:|:-----:|:-----:|\\n| | | **SD** | | | **KnowAir** | |\\n| Variant | MAE | RMSE | MAPE | MAE | RMSE | MAPE |\\n| w/o decom | 24.09 | 38.49 | 17.53 | 25.10 | 37.10 | 54.16 |\\n| w/o prompt | 24.67 | 39.83 | 18.20 | 25.27 | 36.78 | 51.42 |\\n| w/o decom & prompt | 25.23 | 40.46 | 19.01 | 25.83 | 37.25 | 54.33 |\\n| w/o $\\\\mathbf{Y}_t$ | 23.87 | 38.02 | 16.86 | 25.70 | 36.99 | 53.10 |\\n| w/o $\\\\mathbf{Y}_s$| 26.25|41.25|18.76|27.04|39.21|63.68|\\n|||||||\\n|w/o CPU |26.06 |41.47 |17.56|26.88| 38.22| 58.23|\\n| w/o LA | 26.14 | 41.86 | 18.26 | 25.62 | 37.10 | 53.12 |\\n| w/o LA+GPU | 26.29 | 42.15 | 18.71 | 25.61 | 36.86 | 55.81 |\\n| w/o LA +GPU +DRO | 26.11 | 
41.73 | 17.58 | 25.10 | 36.91 | 54.73 |\\n| w/o LA+RandomDrop | 27.41 | 43.11 | 18.32 | 25.77 | 37.16 | 59.09 |\\n|||||||\\n| w/o DRO | 24.08 | 38.17 | 17.06 | 24.93 | 36.92 | 54.86 |\\n| w/o (GPU&DRO) | 24.52 | 38.65 | 18.13 | 25.26 | 36.98 | 55.12 |\\n| w/o (GPU&DRO)+RandomDrop| 24.77 | 38.90 | 18.48 | 25.45 | 36.87 | 55.90 |\\n|||||||\\n| Ours | **23.79** | **37.94** | **16.24** | **24.78** | **36.77** | **51.02** |\\n|||||||\\n\\nWe have added this ablation experiment to Section C.9 of the Appendix.\"}", "{\"title\": \"W2. Error accumulation\", \"comment\": \"In the introduction of this paper, we emphasize that **error accumulation stems from the stacked structure of existing spatiotemporal learning models**. These models stack multiple spatiotemporal modules sequentially, with each module depending on the output of the previous one. After spatiotemporal learning, these models produce only one representation, denoted as the prediction representation, which is finally input to the decoder for prediction. The weakness of this architecture is that errors caused by spatiotemporal shift accumulate and lead to large deviations in label representations, resulting in suboptimal accuracy.\\n\\n**Although the temporal and spatial layers in each spatiotemporal module model temporal or spatial information separately, this cannot fundamentally address the issue**. Next, we illustrate the error accumulation due to structural and temporal shifts.\\n\\n### **1. Error accumulation due to structural shift**\\n\\nFirst, we create two datasets using the SD dataset with GWNet and D2STGNN. For the first dataset, we select the test data of 550 nodes and input this data into the backbones; we then extract their output representations from each spatial or temporal layer. For the second dataset, we remove 55 (10%) of these 550 nodes, add 55 (10%) new nodes, and feed the new data into the models again. 
Finally, we calculate the representation error percentage of each temporal or spatial layer, which is shown below:\\n\\n| $~~~$Layer | 1 | $~~~$2| $~~~~$3| $~~~~$4| $~~~~$5 | $~~~~$6 | $~~~~$7 | Prediction representation |\\n| :--: | :--: | :--: | :--: | :---: | :---: | :---: | :---: | :------------------------------: |\\n| GWNet | 0 | 2.77% | 8.61% | 13.28% | 17.14% | 20.88% | 23.52% | 27.20% |\\n|D2STGNN| 0| 8.43%| 15.90%|19.03%|\\t 28.10%| - | - | -|\\n|||\\n\\nThe first layer is the temporal layer (TCN in GWNet and Transformer in D2STGNN), which models temporal information for each node; thus, its error is 0. Subsequently, these models use a node-to-node mechanism for node interaction, leading to representation errors due to structural shifts, and these error-containing representations are then fed into the next temporal module, affecting the learning of that layer. \\n\\n### **2. Error accumulation due to temporal shift**\\n\\nWe further randomly selected 30% of the nodes from the first dataset above and added random noise to their data to simulate temporal shifts of nodes. The representation deviation at each layer is shown below:\\n\\n| $~~~$Layer | $~~~$1 | $~~~$2 | $~~~$3 | $~~~$4 | $~~~$5 | $~~~$6 | $~~~$7| Prediction representation |\\n|:---------:|:-----:|:-----:|:-----:|:------:|:------:|:------:|:------:|:------:|\\n| GWNet | 9.85% | 11.87% | 14.40% | 16.01% | 17.44% | 18.62% | 19.26% | 23.43% |\\n|D2STGNN| 23.39%| 25.45%| 26.84%|29.88%|\\t 34.00%| - | - | -|\\n|||\\n\\nChanges in the data distribution of nodes lead to incorrect representations of these nodes by the model, and the message passing mechanism propagates these errors to other nodes.\\n\\n**Summary**. We can see that errors caused by either structural shifts or temporal shifts accumulate into significant errors in the prediction representation, ultimately leading to decreased prediction performance.\\n\\n### **3. 
STOP for error accumulation**\\n\\nIn response, **we simplify this stacked structure by using only one temporal layer and one spatial layer for spatiotemporal learning. These two layers each generate a separate prediction representation component**. The final prediction of our STOP is jointly determined by these two prediction components. Next, we verify the roles of these two prediction components under spatiotemporal shifts. \\n\\nUsing the SD and KnowAir datasets as examples and following the paper's settings, we created one dataset with only temporal shift and another with only structural shift. We use the Shapley Value to analyze the contributions of the two prediction components in different OOD scenarios. The Shapley Value is a common indicator used to measure individual contributions in collaboration. We denote $\\mathbf{Y}_t$ as the temporal prediction component and $\\mathbf{Y}_s$ as the spatial prediction component. The results are shown in the following table:\\n\\n| Component | SD | | KnowAir | |\\n|:-----------:|:-------:|:-------:|:---------:|:-------:|\\n| | SOOD | TOOD | SOOD | TOOD |\\n| $\\mathbf{Y}_t$ | 64.04 | 41.95 | 72.62 | 36.23 |\\n| $\\mathbf{Y}_s$ | 35.96 | 58.05 | 27.38 | 63.77 |\\n|||\\n\\nThe results reveal that in the SOOD dataset with structural distribution shift, the temporal prediction of STOP experiences less interference and its prediction contribution increases. Conversely, in TOOD scenarios characterized by temporal distribution shift, STOP relies more on the spatial prediction component to make predictions. This separate prediction architecture allows STOP to maintain robustness across various OOD scenarios, as demonstrated in Tables 2 and 4 of the paper.\"}", "{\"comment\": \"Dear Reviewer 4myW,\\n\\nThank you very much for your professional suggestions. Your valuable and detailed review comments have significantly improved the quality of our paper. 
We sincerely thank you again for your time and effort.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"W6. Spatiotemporal DRO ablation\", \"comment\": \"Sorry for the confusion. Since spatiotemporal DRO is a customized optimization strategy for generalized perturbation units (hereafter GPU), our initially created variant 'w/o GPU' in the ablation experiments removed both DRO and the generalized perturbation units together, and we overlooked conducting separate ablation experiments for DRO. To address your concern, we conducted ablation experiments across all datasets to evaluate spatiotemporal DRO's effectiveness, with the average MAE over 12 time steps shown in the following table.\\n\\n\\n| $~$Variant | $~~$SD | $~$GBA | $~$GLA | $~~$CA | PEMS3-Stream | KnowAir |\\n|:-------:|:-----:|:-----:|:-----:|:-----:|:------------:|:-------:|\\n| w/o DRO | 24.08 | 25.94 | 26.86 | 24.10 | 12.71 | 24.93 |\\n| w/o GPU | 24.52 | 26.10 | 27.24 | 24.39 | 12.99 | 25.26 |\\n| Ours | **23.79** | **25.09** | **26.53** | **23.84** | **12.54** | **24.78** |\\n||||||\\n\\nWe find that the prediction performance of w/o DRO is inferior because removing the proposed spatiotemporal DRO optimization strategy increases the model's learning complexity due to the multiple training environments created through the GPU's series of perturbations. Spatiotemporal DRO, on the other hand, selects the most challenging one of these environments for training, thereby guiding the model to extract more robust knowledge. We have incorporated this discussion into the manuscript.\"}", "{\"title\": \"W5. Low-rank\", \"comment\": \"Sorry for the confusion. Based on the design inspiration of our spatial interaction mechanism, the attention matrix calculated by the corresponding mathematical formula is low-rank, as its rank is significantly lower than its maximum possible rank; hence we name it low-rank attention. 
Specifically, our attention matrix can be calculated as follows:\\n\\n$$\\nS=\\operatorname{softmax}(\\alpha QK^\\top)\\operatorname{softmax}(\\alpha KQ^\\top)\\n$$\\n\\nwhere $\\alpha$ is a scaling factor equal to $1 / \\sqrt{d_{h}}$. $Q\\in R^{N\\times d_h}$, $K\\in R^{K_n\\times d_h}$, and $V\\in R^{N\\times d_h}$ are the query, key, and value matrices, respectively; $N$ is the number of nodes, and $K_n$ is the number of context perception units. Let $S_d=softmax(\\alpha QK^\\top)$ and $S_a=softmax(\\alpha KQ^\\top)$. The final attention matrix $S\\in R^{N\\times N}$ is a low-rank matrix because its rank satisfies:\\n$$\\n\\operatorname{rank}\\left(\\mathbf{S}\\right)=\\operatorname{rank}\\left(\\mathbf{S}_d\\times\\mathbf{S}_a\\right)\\leq\\min\\left(\\operatorname{rank}\\left(\\mathbf{S}_d\\right), \\operatorname{rank}\\left(\\mathbf{S}_a\\right)\\right)\\leq K_n\\ll N\\n$$\\nThe final inequality follows from the fact that the rank of a matrix is no more than the minimum of its numbers of rows and columns: $\\mathbf{S}_d$ has only $K_n$ columns, so its rank is at most $K_n$. The rank of $\\mathbf{S}$, which is at most $K_n$, is much lower than its size $N$, i.e., its number of rows and columns; hence the computed attention score matrix of our method is a low-rank matrix. \\n\\nThe benefit of the low-rank attention mechanism we use is that it is more efficient than the naive Transformer. The complexity of the traditional Transformer for spatial interaction is $\\mathcal{O}\\left(N^{2}d_h\\right)$. In our method, the complexity of calculating both $S_d$ and $S_a$ is $\\mathcal{O}\\left(K_n Nd_h\\right)$. In this paper, $K_n\\ll N$; thus, our low-rank attention has lower complexity. We thoroughly evaluated $K_n$ in Appendix C.10; it is generally set to 1% of $N$. 
We will emphasize this part in Appendix E.1 of the manuscript.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Dear Reviewer f4as,\\n\\nWe sincerely appreciate your detailed feedback and encouraging comments. We are delighted that our revisions have addressed your concerns satisfactorily. Your thoughtful insights have been invaluable in enhancing the quality of our paper. Thank you once again for your time and review.\\n\\nWarm regards,\\n\\nThe authors.\"}", "{\"comment\": \"Thank you for your clarification and thoughtful responses. They have addressed my concerns, and I will maintain my current score.\"}", "{\"comment\": \"Dear Reviewer BojP,\\n\\nWe sincerely thank you for your recognition and constructive feedback. Our interaction with you has been both enjoyable and invaluable, greatly enhancing the quality of our paper. Thank you once again for your time, effort, and insightful comments.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"title\": \"W1. Detailed analysis of M and K\", \"comment\": \"Dear Reviewer LDYw,\\n\\nThank you very much for your valuable comments, which are essential for improving our paper. We would like to address your concerns point by point.\\n\\nWe conduct experiments on six datasets to analyze the sensitivity of the two hyperparameters. The spatial scales for training in these six datasets range from 141 to 6615 nodes. We report the average performance over 12 time steps below.\\n\\n> **The number of CPUs $K$**\\n\\nThe CPUs are coarsening units used for perceiving contextual features through interactions with nodes. Consequently, the number of CPUs $K$ is closely related to the spatial scale. Based on our experimental results, we find that setting $K$ to approximately 1% of the spatial scale is an effective choice. 
A larger number of CPUs can hinder the model's ability to focus on capturing generalizable contextual features.\\n\\n| | | | K | | | | |\\n|:------------:|:------:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|\\n| Dataset (The number of training nodes) | 2 | 4 | 8 | 16 | 32 | 64 | 128 |\\n| SD (550) | 24.40 | 23.88 | **23.79** | 24.03 | 24.59 | 24.65 | 24.66 |\\n| | | | | | | | |\\n| Dataset (The number of training nodes) | 8 | 16 | **24** | 32 | 48 | 64 | 128 |\\n| GBA (1809) | 25.55 | 24.65 | **24.09** | 24.72 | 25.17 | 24.62 | 25.60 |\\n| | | | | | | | |\\n| Dataset (The number of training nodes) | 16 | 24 | **32** | 48 | 64 | 128 | 256 |\\n| GLA (2949) | 27.05 | 27.28 | **26.53** | 26.98 | 27.68 | 27.65 | 27.46 |\\n| | | | | | | | |\\n| Dataset (The number of training nodes) | 16 | 32 | **64** | 128 | 256 | 512 | 1024 |\\n| CA (6615) | 25.52 | 25.05 | **23.84** | 23.96 | 24.65 | 24.46 | 25.23 |\\n| | | | | | | | |\\n| Dataset (The number of training nodes) | 2 | 4 | **8** | 16 | 32 | 64 | 128 |\\n| PEMS3-Stream (655) | 12.52 | 12.99 | **12.54** | 12.96 | 13.02 | 13.25 | 13.16 |\\n| | | | | | | | |\\n| Dataset (The number of training nodes) | 2 | **4** | 8 | 16 | 32 | 64 | 128 |\\n| KnowAir (141) | 25.01 | **24.78** | 25.27 | 25.55 | 25.61 | 25.48 | 27.68 |\\n|||||\\n\\n> **The number of GPUs $M$.** \\n\\nThe hyperparameter $M$ represents the number of GPUs, which are used to modulate the interaction process between nodes and CPUs. Each GPU corresponds to a different training environment. The MAE is shown in the table below. We observe that the number of GPUs $M$ is universally effective when set to between 2 and 4. When $M$ is set to a larger value, the overly complex training environments can disrupt learning stability. Conversely, if there are too few GPUs, the limited training environments may not provide sufficient diversity for the model to extract invariant knowledge. Interestingly, this hyperparameter is insensitive to spatial scale. 
Moreover, $M$ is not highly correlated with the time span of the data. In the experiments, the long-range SD, GBA, GLA, and CA datasets contain a full year of training data, while TrafficStream is a short-range dataset containing one month of data for training. \\n\\n| GPU (M) | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |\\n|:------------:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|\\n| SD | 24.02 | 23.90 | **23.79** | 23.80 | 23.87 | 23.89 | 24.31 | 24.79 |\\n| GBA | 24.55 | 24.82 | **24.09** | 25.72 | 26.17 | 26.60 | 27.32 | 28.01 |\\n| GLA | 26.77 | 26.98 | **26.53** | 26.68 | 26.65 | 28.46 | 28.78 | 30.25 |\\n| CA | 24.00 | 23.90 | **23.84** | 23.84 | 23.87 | 24.85 | 25.64 | 26.30 |\\n| PEMS3-Stream | 12.64 | **12.54** | 12.55 | 13.63 | 14.45 | 14.64 | 14.90 | 15.21 |\\n| KnowAir | 25.21 | 25.27 | 25.24 | **24.78** | 24.81 | 24.92 | 25.69 | 25.97 |\\n|||||\\n\\n\\nSummary. Based on the above analysis, we recommend setting the initial value of $K$ to 1\\% of the number of training nodes and the initial value of $M$ between 2 and 4 for hyperparameter tuning in OOD scenarios.\\n\\nThis discussion is supplemented in Appendix Section C.10. We have also supplemented the missing report on $M$ in Section 5.1. Thank you for your kind reminder about our oversight.\"}"
] }
85VWxAwsaF
Amortized Posterior Sampling with Diffusion Prior Distillation
[ "Abbas Mammadov", "Hyungjin Chung", "Jong Chul Ye" ]
We propose Amortized Posterior Sampling (APS), a novel variational inference approach for efficient posterior sampling in inverse problems. Our method trains a conditional flow model to minimize the divergence between the variational distribution and the posterior distribution implicitly defined by the diffusion model. This results in a powerful, amortized sampler capable of generating diverse posterior samples with a single neural function evaluation, generalizing across various measurements. Unlike existing methods, our approach is unsupervised, requires no paired training data, and is applicable to both Euclidean and non-Euclidean domains. We demonstrate its effectiveness on a range of tasks, including image restoration, manifold signal reconstruction, and climate data imputation. APS significantly outperforms existing approaches in computational efficiency while maintaining competitive reconstruction quality, enabling real-time, high-quality solutions to inverse problems across diverse domains.
[ "Inverse Problems", "Diffusion Models", "Variational Inference" ]
https://openreview.net/pdf?id=85VWxAwsaF
https://openreview.net/forum?id=85VWxAwsaF
ICLR.cc/2025/Conference
2025
{ "note_id": [ "pVly19520K", "SRZk3MyXfx", "Nkk7zdA7Eq", "MJ6FkOocG1", "LETjgnfkgW" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1730134014035, 1730446231142, 1730408532415, 1730558107937, 1731653295494 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4562/Reviewer_oR42" ], [ "ICLR.cc/2025/Conference/Submission4562/Reviewer_KaqV" ], [ "ICLR.cc/2025/Conference/Submission4562/Reviewer_KEbV" ], [ "ICLR.cc/2025/Conference/Submission4562/Reviewer_wiGW" ], [ "ICLR.cc/2025/Conference/Submission4562/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper presents APS, an efficient variational inference-based posterior sampling method that combines conditional normalizing flows with diffusion models to achieve one-step posterior samples. APS is unsupervised, does not require paired training data, and is capable of tackling inverse problems in both Euclidean and non-Euclidean spaces. Experimentally, APS is evaluated on image denoising, inpainting, and data imputation tasks, demonstrating speed improvements while maintaining competitive accuracy.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The method allows one-step inference for posterior sampling, offering substantial time savings over traditional DIS methods\\n\\nAPS is tested on both Euclidean and non-Euclidean data, showcasing its potential in various fields.\\n\\nThe integration of conditional NFs with diffusion priors, particularly in unsupervised settings, is promising.\", \"weaknesses\": \"Limited to specific models, such as MCG, DPS, and Noise2Score, but does not benchmark against other efficient variational methods recently introduced for inverse problems. Also benchmarking against similar methods would help demonstrate the behavior of this approach to others: DDNM, Pi-GDM, FPS, FPS-SMC, ect. 
For example, it\\u2019s unclear how the model can fail and what the failure cases may look like.\", \"questions\": \"Have the authors considered testing APS on broader inverse problem domains, beyond image and environmental data?\\n\\nCould the authors clarify the performance of APS in terms of generalizability to entirely unseen problem classes?\\n\\nGiven APS\\u2019s unsupervised nature, how does it perform on ill-posed inverse problems where the prior distribution diverges significantly from the true posterior?\\n\\nIs the observed background noise in reconstructions specific to the choice of vectorized input representation, or might this indicate a broader limitation of APS?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces Amortized Posterior Sampling (APS), a variational inference method designed for efficient posterior sampling in inverse problems. APS trains a conditional flow model to align the variational distribution with the posterior distribution defined by a diffusion model, enabling diverse posterior samples with a single neural function evaluation. The authors demonstrate APS\\u2019s effectiveness across tasks like image restoration, manifold signal reconstruction, and climate data imputation, showing it significantly improves computational efficiency while maintaining competitive reconstruction quality for real-time applications across various fields.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"Clarity: The paper is clearly written and well-structured, making the methodology and key contributions easy to understand.\", \"Significance: Previous approaches to solving inverse problems often require multiple neural network evaluations, leading to high computational costs. 
This paper\\u2019s proposed method addresses this limitation by using an amortized variational inference framework, enabling efficient, single-step sampling. This improvement in computational efficiency represents a meaningful step forward for practical applications of inverse problem-solving.\"], \"weaknesses\": \"- Limited novelty: The primary contribution of the paper is the use of amortization and diffusion model distillation to enable efficient, single-step sampling. However, this concept has already been explored in recent work [1] with a very similar approach, also leveraging **unsupervised diffusion prior distillation** with **amortized variational inference** for inverse problems. This challenges the paper's claim of novelty (i.e., \\\"the first diffusion prior distillation\\\" in line 101 or \\\"the first proof of concept\\\" in line 515). This highly relevant work is neither discussed in the paper nor compared experimentally. Thorough discussion and performance comparison with this prior work to highlight meaningful distinctions would strengthen the methodological contribution.\\n\\n- Several existing frameworks, such as RED-Diff, also utilize variational inference objectives for inverse problem-solving. The primary distinction between RED-Diff and APS seems to be the replacement of the posterior distribution with a neural network conditioned on y; the diffusion prior loss remains similar. While the amortization of neural networks does improve efficiency, both methods are based on similar principles. Thus, an experimental comparison between RED-Diff and APS could highlight potential trade-offs in accuracy and generalization, providing a clearer picture of APS\\u2019s advantages.\\n\\n- Limited performance and baselines: The paper lacks experimental comparisons with key baseline methods [1-5] that are also designed to solve inverse problems. 
For instance, while RED-Diff [2] is mentioned in Table 1 as a related framework, there is no direct comparison of its performance with APS. This gap in baseline evaluations weakens the empirical foundation of the paper, making it challenging to assess the benefits of APS. Additionally, the experimental results of APS do not seem strong in table 2 and qualitatively in Figure 6; artifacts and noise are still noticeable, even with limited baselines.\\n\\n[1] Diffusion Prior-Based Amortized Variational Inference for Noisy Inverse Problems, ECCV24 \\\\\\n[2] A variational perspective on solving inverse problems with diffusion models, ICLR23 \\\\\\n[3] Pseudoinverse-guided diffusion models for inverse problems, ICLR23 \\\\\\n[4] Zero-shot image restoration using denoising diffusion null-space model, ICLR23 \\\\\\n[5] Denoising diffusion models for plug-and-play image restoration, CVPRW23\", \"questions\": [\"Please clarify the points raised in the weaknesses section.\", \"Is the neural network for the variational distribution optimized separately for each task, or can a single network generalize across multiple inverse problem tasks?\", \"The ability of APS to generalize to out-of-distribution datasets (e.g., applying a CelebA-trained model to the FFHQ dataset) is an appealing property. Can the authors compare these results to relevant baselines? Recent work, such as [1], has demonstrated similar generalization capabilities, and a direct comparison would provide useful insights.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces Amortized Posterior Sampling (APS), a novel unsupervised variational inference approach for efficient posterior sampling in inverse problems using a conditional flow model. APS enables fast, diverse posterior sampling with a single neural evaluation and adapts to both Euclidean and non-Euclidean data. 
It achieves superior computational efficiency across tasks like image restoration and climate data imputation, providing real-time, high-quality solutions to various inverse problems.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper is well-written and easy to follow. It employs a conditional normalizing flow model to achieve one-step posterior inference. Experiments on both Euclidean and non-Euclidean datasets across various tasks\\u2014including inpainting, denoising, Gaussian deblurring, and super-resolution\\u2014demonstrate the proposed distillation method's fast speed and competitive performance.\", \"weaknesses\": \"1. Though the proposed method enables one-step inference, training such a model using equation (11) in the main text is time-consuming. Additionally, it requires training on test images, whereas previous methods like DPS perform zero-shot inference on test images and achieve strong performance. Given the strong performance in the field of diffusion-based inverse problems, I find APS's performance underwhelming. Furthermore, including more challenging tasks on higher-resolution images would be beneficial.\\n\\n2. Although the paper highlights its novelty in variational inference, it lacks comparisons with relevant baselines. Methods like RED-DIFF and the score prior have not been included, likely because they are not distillation methods. More distillation-based approaches for posterior sampling should be considered.\\n\\n3. The method appears to combine the reconstruction step and the normalizing-flow step in the paper, score prior, into a single step, which limits its novelty.\", \"questions\": \"Please see the weakness for some of my concerns.\\n\\n1. Can you clarify how many test images you use to obtain the metrics like PSNR and SSIM?\\n\\n2. The use of varying metrics is a bit confusing; sometimes you report MSE, and other times FID or SSIM. 
A unified framework of metrics would be helpful.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes to accelerate inverse problem-solving with diffusion priors. This is achieved by distilling the unconditional diffusion prior for a given set of inverse problems into a conditional normalizing flow model $q_{\\\\phi}(\\\\boldsymbol{x}|\\\\boldsymbol{y})$ that can (approximately) sample from the posterior distribution $p(\\\\boldsymbol{x}|\\\\boldsymbol{y})$ given any input $\\\\boldsymbol{y}$ with a single NFE. Training is performed with a principled variational objective that is mathematically sound, and experiments are performed on multiple data domains (euclidean/non-euclidean) showing $\\\\approx$comparable results with significantly faster inference time.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is very well written and mathematically sound.\", \"The results cover diverse settings and show clear favorable runtime for APS.\", \"Related work is addressed well and the paper\\u2019s contribution is put in proper context.\"], \"weaknesses\": [\"The method requires training at test time (e.g. on the test set). This stands in contradiction to the speed claim of the method being a fast sampler. The question then becomes: How large should the test set be to compensate for this prolonged training?\", \"The method is claimed to produce high-quality and diverse posterior samples. Nonetheless, the resulting samples look rather noisy (e.g. CelebA denoising in Fig. 1, row 1), and they seem to have little to no diversity.\", \"The datasets used in the experiments are rather unorthodox and make it hard to properly appreciate the performance of the method. For example, 32x32 CelebA is tiny \\\\- it is hard to notice any meaningful differences at this resolution in the resulting samples. 
Similarly, the non-euclidean datasets seem to be relatively simple and quite constrained.\", \"The main reason for my negative score is the relatively limited experiments and my current lack of understanding of the test scenario where APS could be advantageous. Nonetheless, the paper is well structured and the overall idea is interesting. I'm willing to raise my score if I'm provided convincing answers to these two critical issues.\"], \"a_few_caught_minor_typos\": [\"L093 \\\\- **one** the Euclidean manifold \\\\-\\\\> **on** the Euclidean manifold\", \"L356 \\\\- we set the batch size to **the** 64 \\\\-\\\\> we set the batch size to 64\", \"L364 \\\\- **convergency** \\\\-\\\\> **convergence**\", \"L377 \\\\- \\u201cdue to dimensionality\\u201d?\", \"L467 \\\\- leveraging **the** Tweedie\\u2019s formula \\\\-\\\\> leveraging Tweedie\\u2019s formula\", \"L523 \\\\- unlike recent DIS methods require \\\\-\\\\> unlike recent DIS methods **which** require\"], \"questions\": [\"The method seems to require a rather large test set to learn a proper approximation of the conditional normalizing flow. How does the size of the test set affect the performance of APS?\", \"Do you think it is mainly the architecture of $G\\\\_{\\\\\\\\phi}(z,y)$ that resulted in the reduced sample quality or is there a more fundamental reason related perhaps to the ELBO on $\\\\\\\\log p\\\\_{\\\\\\\\theta}(\\\\\\\\hat{x}\\\\_0)$?\", \"Did you benchmark the diversity of the samples from APS? For example, against the baseline Score-Prior by Feng et al. (2023)? Or even against \\u201cstandard\\u201d zero-shot diffusion posterior samplers such as DPS/DDNM?\", \"Does APS lose the great advantage of zero-shot DIS methods being adaptable to any operator at test time without additional training?\", \"Is there an inherent limitation with respect to the resolution of images that APS can handle? If so, it is recommended to mention this explicitly in your discussion. 
If not, then perhaps APS can be applied to larger images (e.g. 128x128) for better benchmarking?\", \"The distortion (e.g. PSNR/SSIM) of APS can be improved by sampling N=128 posterior samples and averaging them to approximate the posterior mean. However, shouldn\\u2019t a fair comparison do the same for the baselines?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}" ] }
85Ik12q2hP
Do Think Tags Really Help LLMs Plan? A Critical Evaluation of ReAct-Style Prompting
[ "Mudit Verma", "Siddhant Bhambri", "Subbarao Kambhampati" ]
The reasoning abilities of Large Language Models (LLMs), which are critically tested in sequential decision-making problems, remain a topic of debate. ReAct, a recently popular method, claims to enhance LLM reasoning abilities by $``\textit{interleaving reasoning trace with action execution}"$ while directly prompting them in text-based planning domains such as AlfWorld and WebShop. However, given the different components of ReAct-style prompting, it remains unclear what the source of improvement in LLM performance is. In this paper, we critically examine the claims of ReAct-style prompting for sequential decision-making problems. By introducing systematic variations to the input prompt, we perform a sensitivity analysis along the original claims of ReAct. Contrary to these claims and common use-cases that utilize ReAct-style prompting, we find that the performance is minimally influenced by the interleaved reasoning trace or by the content of these generated reasoning traces. Instead, the performance of LLMs is primarily driven by the unreasonably high degree of similarity between input example tasks and queries, implicitly forcing the prompt designer to provide instance-specific examples, which significantly increases the cognitive burden on the human. Our empirical results, on the same suite of domains as ReAct, show that the perceived reasoning abilities of LLMs stem from exemplar-query similarity and approximate retrieval rather than any inherent reasoning abilities.
[ "Large Language Models", "ReAct", "Sequential Decision-Making" ]
https://openreview.net/pdf?id=85Ik12q2hP
https://openreview.net/forum?id=85Ik12q2hP
ICLR.cc/2025/Conference
2025
{ "note_id": [ "z8YJuVXDdX", "yOot0B7cqZ", "oV0EFmnFrH", "jkez3dMMug", "ZsPjSGbq9y", "WRSF7e55id", "Oztsqb1LW7", "Im5d8fGxNw", "GsakNXXEzm", "FqySmD8tMd", "C67GEsdgvn", "BpDQh709Bd", "AJv10cnW0D", "82BoERS1do", "1nQNi8nm8g" ], "note_type": [ "official_comment", "official_review", "official_review", "comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732431156771, 1729866743277, 1729883498607, 1733386174976, 1730694839109, 1732163260122, 1732163095183, 1732554320573, 1732163239569, 1732464887116, 1732163042361, 1730723106113, 1732163456332, 1732640440427, 1732163332761 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12770/Reviewer_GqdU" ], [ "ICLR.cc/2025/Conference/Submission12770/Reviewer_5H7H" ], [ "ICLR.cc/2025/Conference/Submission12770/Reviewer_7upD" ], [ "ICLR.cc/2025/Conference/Submission12770/Authors" ], [ "ICLR.cc/2025/Conference/Submission12770/Reviewer_C2ur" ], [ "ICLR.cc/2025/Conference/Submission12770/Authors" ], [ "ICLR.cc/2025/Conference/Submission12770/Authors" ], [ "ICLR.cc/2025/Conference/Submission12770/Reviewer_7upD" ], [ "ICLR.cc/2025/Conference/Submission12770/Authors" ], [ "ICLR.cc/2025/Conference/Submission12770/Reviewer_5H7H" ], [ "ICLR.cc/2025/Conference/Submission12770/Authors" ], [ "ICLR.cc/2025/Conference/Submission12770/Reviewer_GqdU" ], [ "ICLR.cc/2025/Conference/Submission12770/Authors" ], [ "ICLR.cc/2025/Conference/Submission12770/Reviewer_C2ur" ], [ "ICLR.cc/2025/Conference/Submission12770/Authors" ] ], "structured_content_str": [ "{\"title\": \"Thank you for the rebuttal\", \"comment\": \"Thank you for the rebuttal. However, I didn't find the findings in the paper especially interesting or enlightening. 
Thus I will maintain my score.\"}", "{\"summary\": \"This paper examines the impact of different components of ReAct-style prompting for planning problems. The authors propose several kinds of alterations to the ReAct prompts, and find that 1) the LLM's performance is minimally influenced by the interleaved reasoning trace or the content of the reasoning trace in the prompt; 2) the performance heavily depends on a high degree of similarity between the demonstrations and queried problems. These findings indicate that LLM's reasoning/planning abilities are more fragile than normally perceived.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"This work studies a wide range of settings to understand how different components of the ReAct prompts affect the model's performance. The results are insightful for researchers and practitioners to better interpret what ReAct prompts are doing and the current LLMs' reasoning/planning performance.\", \"weaknesses\": [\"**Incomplete analysis for supporting the claims**. The paper does not include enough details/analyses about what the LLMs' actual generations are like and their comparison with the (altered) demonstrations. Are LLMs following the changed demonstrations when addressing new problems, or are they still doing things in the original ReAct style? For example, is it possible that the LLMs are still doing interleaved reasoning and acting, even though the demonstrations are changed to have all thought steps together in the beginning? For cases where the performance drops a lot (e.g., the Domain, Instance variations), are these new errors caused by the model's decreased reasoning abilities, or simple mistakes around the surface-form symbols? Relatedly, the authors often make claims that are somewhat ambiguous on the target. 
For example, in lines 416-418: \\\"...the performance of LLMs either improved or remained consistent when provided with weaker or irrelevant guidance information. This refutes ReAct\\u2019s claim that task-specific reasoning trace is the source of LLM agent performance\\\". Is this \\\"task-specific reasoning trace\\\" the ones in demonstrations or those generated by the model? The results only show that LLMs don't need such reasoning traces in the demonstrations, but the LLMs could still generate good traces during inference.\", \"**Results are overall not surprising**. It is well known that LLMs are usually not \\\"learning\\\" how to perform the task from the demonstration examples, rather, the prompt mostly provides the overall format and some key anchors such as label/action space related to the test problems to shape the model generation [1, 2]. There are prior works showing that one doesn't even need to provide these explicit demonstrations for the model to work, e.g., just saying \\\"think step by step\\\" could elicit CoT behaviors of LLMs. It is also well-known that providing examples that are more similar to the queried problems brings better performance, and many prior efforts on demonstration selection are exactly about closing the gap between the demonstrations and queries, e.g., [3, 4]. LLMs, or ML models more broadly, generally suffer from distributional shifts which is one of the open research problems. Reporting this in some specific tasks/settings is not very significant in my view.\", \"--\", \"Citations\", \"[1] Min et al. Rethinking the Role of Demonstrations: What Makes In-Context Learning Work? EMNLP-22.\", \"[2] Wang et al. Towards Understanding Chain-of-Thought Prompting: An Empirical Study of What Matters. ACL-23.\", \"[3] Liu et al. What Makes Good In-Context Examples for GPT-3? DeeLIO-22.\", \"[4] Rubin et al. Learning To Retrieve Prompts for In-Context Learning. 
NAACL-22.\"], \"questions\": \"See Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper examines how effective ReAct-style prompting is for improving Large Language Models (LLMs) in decision-making tasks. While ReAct is known for boosting LLM reasoning by combining reasoning and actions, this study questions where these improvements really come from. The authors conduct experiments in planning tasks in AlfWorld and WebShop to analyze the effects of reasoning traces, plan guidance, and how similar examples are to the queries.\", \"contributions\": \"The paper tests ReAct's claims and finds that LLM performance doesn't significantly improve by manipulating reasoning with action execution. The results show that the main reason for LLM performance is the high similarity between example tasks and queries, rather than reasoning abilities from reasoning traces. A detailed sensitivity analysis shows that LLM performance drops when the example-query similarity is reduced, highlighting ReAct's fragility. The study challenges the belief that LLMs develop reasoning abilities through ReAct-style prompting, showing that success is mostly due to pattern matching or retrieving similar examples.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Originality: The paper offers a new look at ReAct-style prompting by questioning if interleaving reasoning with actions truly improves LLM performance. It highlights example-query similarity as the main factor driving success, offering a different perspective.\", \"quality\": \"The paper is well-structured, with clear experiments tested across different LLM models and domains. The use of varied tests and detailed analysis strengthens the reliability of the findings. 
The overall experiment setups are convincing and comprehensive.\", \"clarity\": \"The paper is easy to understand and well-organized. Proposed ideas are reasonable, and tables and figures effectively explain the results, making the findings accessible to a wide audience.\", \"significance\": \"The paper raises important questions about ReAct-style prompting, showing that example-query similarity may play a bigger role than reasoning in LLM performance. This insight could help guide future research and improve LLM prompting methods.\", \"weaknesses\": \"While the paper offers important insights into the limitations of ReAct-style prompting, it doesn\\u2019t fully address whether these findings apply across different scenarios and domains. The study focuses on specific tasks in AlfWorld and WebShop, but it\\u2019s unclear how generalizable the results are to other environments or more complex tasks. For example, would the same reliance on example-query similarity hold in tasks with more diverse or less structured action spaces? The lack of broader applicability raises concerns about the scalability of the conclusions, making it hard to know if the findings can be generalized to all situations where ReAct prompting is used.\\n\\nThe study uses black-box LLMs without fully clarifying what matters more for performance\\u2014the ReAct-style prompting itself or the inherent capabilities of the LLMs. Since LLMs differ in architecture and training, it's difficult to separate the effects of the prompt from the underlying model\\u2019s abilities. This makes it unclear whether the performance issues are caused by the prompting technique or limitations within the models themselves. Ideally, more transparency in how the LLMs are interacting with the prompts and what exactly drives the observed behavior would help strengthen the conclusions.\\n\\nThe paper also lacks a strong technical contribution beyond critiquing ReAct. 
There is no obvious novel method or solution offered to address the identified weaknesses in ReAct-style prompting. While finding flaws in prompting techniques is important, the absence of a proposed solution limits the paper\\u2019s impact. Readers are left with an understanding of what is wrong but without a clear path to improve or resolve these issues. A more balanced approach would include recommendations or a framework for improving prompting techniques, which would provide more concrete value to readers.\", \"questions\": \"Regarding scalability: Could you clarify how your findings apply to more complex environments or tasks beyond AlfWorld and WebShop? For instance, would the same dependency on example-query similarity hold in domains with less structured actions or greater variability?\", \"regarding_justification_of_approaches\": \"In your experiments, how much do you attribute the performance outcomes to the prompting method versus the underlying capabilities of the LLMs themselves?\", \"regarding_workflow_completeness\": \"Did you explore how the size of the context window or the amount of training data provided to the LLMs impacts performance in your scenarios? While the analysis of ReAct\\u2019s limitations is thorough, do you have suggestions or possible remedies for reducing the reliance on example-query similarity?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper explores the impact of different components of ReAct-style prompting on the final outcomes, showing that both interleaved reasoning traces and generated reasoning traces have minimal influence on results. 
In contrast, the similarity between examples has a much more significant effect.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. This paper provides a valuable exploration of how ReAct functions in decision-making tasks.\\n\\n2. I highly appreciate the motivation for existing work, which holds great importance for the research community.\", \"weaknesses\": \"1. The writing in the paper is difficult to follow. Prompts take up too much space in each paragraph, and some words are struck through without clear explanations. The font size in the figures is too small. I am confused about the names \\\"GPT-3.5-instruct\\\" and \\\"GPT-3.5-Turbo\\\" as the authors refer to \\\"gpt-3.5-turbo-0125\\\" and \\\"gpt-3.5-turbo-instruct\\\". I also struggled to understand what \\\"Act\\\" is in Table 1, as it's hard to find the introduction of the baselines. Overall, the paper is not ready for publication.\\n\\n2. The examples provided, such as finding an item, do not seem to have sufficient changes in the environment. The authors suggest that giving LLMs a complete action plan at the outset (e.g., if A happens, do X; if B happens, do Y) is feasible, as shown in Figure 2. I disagree, as many situations could change the environment, making it impossible to provide all potential scenarios upfront.\\n\\n3. Regarding RQ1 and its two variants, I didn't find any intrinsic difference between these variants and ReAct. I feel that the variants are just human-rewritten versions of ReAct. A more reasonable comparison would be to directly present the user\\u2019s query to the model and let it generate a similar plan prompt. The current setup, where the authors design prompts based on potential behaviors, seems unfair.\\n\\n4. The conclusion that the \\\"similarity between the example and the query\\\" is important feels rather obvious. 
Since LLMs rarely encounter agentic structure data in their training, the significance of in-context learning is naturally high. This conclusion lacks sufficient insight for me.\", \"questions\": \"1. In Table 1, the explanation for GPT-3.5 outperforming GPT-4o is \\u201chighlighting the brittleness of claims of ReAct.\\u201d I don\\u2019t quite understand this rationale, as it seems to imply that weaker prompting leads to more significant performance drops in stronger models. Shouldn\\u2019t stronger LLMs be more robust to suboptimal prompting? Could you provide some examples to clarify this counterintuitive result?\\n\\n2. The Webshop dataset shows a low accuracy. I believe analyzing such low accuracy is not that meaningful. Other papers report much higher figures. Can you explain the reason?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal (2/2)\", \"comment\": \"We address the reviewer\\u2019s questions below:\\n\\n$$\\\\textbf{Robustness to suboptimal prompting (Q1):}$$ We thank the reviewer for bringing up this point where newer and larger (in terms of trained parameters) models are expected to be more robust to prompting strategies and that the results obtained on smaller/weaker models should be further improved. We believe that the reviewer refers to line 422-425 in the paper where we mention that GPT-4-0314 (Old), and not GPT-4o, performs the worst in the GPT-X family. To our surprise too, ReAct\\u2019s performance is not consistent with this expectation thereby raising further concerns on its scalability and generalizability. Given that this work has gained a significant number of eyes to build upon, we believe it is an important observation. To recall, ReAct originally showed all results only on the PaLM model (currently decommissioned) and not on any other LLM. 
Hence, in an effort to understand the working of ReAct while giving it all the possible benefits of doubt, we tested with multiple LLM models (including the most recent OpenAI and Claude models accessible at the time of writing this paper).\\n\\n$$\\\\textbf{WebShop results (Q2):}$$ Firstly, we incorporate results and the same sensitivity analysis as we did on AlfWorld, for the sake of completeness to strengthen our argument regarding the study on ReAct for decision-making domains. Since ReAct originally showed results on these two domains and that these two domains have been relatively popular for studying text-based decision-making domains, we wanted to highlight that our results are consistent across both domains. Secondly, the reason behind low accuracies in the WebShop is possibly due to two reasons: A) we do not have access to the exact test set used in the original ReAct paper (which shows 40% performance on WebShop) as the authors did not make that public, we had to randomly sample test problems from the WebShop dataset comprising 12,000 problems (line 452-457 in the paper). B) As we mention above in our response to Q1, ReAct originally showed all results only on the PaLM model (currently decommissioned) and not on any other LLM. Hence, the lack of scalability and robustness in the original approach across different LLM models could have also resulted in this diminished performance.\\n\\n[1] Mohit Shridhar, Xingdi Yuan, Marc-Alexandre C\\u00f4t\\u00e9, Yonatan Bisk, Adam Trischler, and Matthew Hausknecht. Alfworld: Aligning text and embodied environments for interactive learning. arXiv preprint arXiv:2010.03768, 2020.\"}", "{\"title\": \"Rebuttal\", \"comment\": \"We thank the reviewer for their thoughtful comments and feedback, and that they find our study detailed and effective in analyzing ReAct. 
We have tried addressing each of the concerns below:\\n\\n$$\\\\textbf{Citations to ICL literature (W1):}$$ We do cite some of the relevant papers on ICL in our paper in lines 102-104 in Section 2 when talking about the different prompting strategies, and have also included the paper mentioned by the reviewer in lines 131-132. We would be happy to include any other specific references that the reviewer suggests can be helpful for our work.\\n\\n$$\\\\textbf{Uselessness of think tags (W2):}$$ In this work, we primarily intended to focus on the supposed usefulness of think tags, i.e., interleaved reasoning traces in multi-step text-based decision-making problems. Our hypothesis behind the analysis was to give ReAct the benefit of the doubt, assuming that it is the reasoning trace (its content and location in the prompt) that leads to increased LLM performance on decision-making domains such as AlfWorld and WebShop. To recall, we note from Table 3 (RQ3) that changing the examples in the prompt alone drops the performance across multiple LLM models in the AlfWorld task, which is the complete opposite of the case when we modify the location and content of the think tag in Table 1 (RQ1) and Table 2 (RQ2). While our results reiterate other findings regarding the role of examples in the few-shot setting, we wanted to systematically study and show how the reasoning traces, the primary claim behind ReAct and all the follow-up works that build on ReAct (see lines 102-104), are of practically no use and only lead to requiring prompt engineers to include these reasoning traces in the examples.\"}", "{\"comment\": \"I greatly appreciate the authors' detailed and prompt responses. 
However, after thoroughly reviewing their responses and considering the concerns raised by other reviewers, I have decided to maintain my current score.\"}", "{\"title\": \"Rebuttal (1/2)\", \"comment\": \"We thank the reviewer for their thoughtful comments and feedback, and that they find our study effective and the motivation valuable with respect to the ongoing research. We have tried addressing each of the concerns below:\\n\\n$$\\\\textbf{Prompts\\u2019 space usage and terminology (W1):}$$ We noted the reviewer\\u2019s concerns regarding the difficulty in following some parts of the paper, and have tried addressing the suggested changes for improved readability and easier understanding. Our intention behind including the important prompt changes in Section 4.1 was to provide an easy-to-reference example for each prompt variation. Based on the reviewer\\u2019s suggestion, we have updated the respective paragraphs (see blue text in Section 4.1 and Section 4.2) and refer the readers to Figure 2, which includes examples for all the variations in one place. Furthermore, gpt-3.5-turbo/gpt-3.5-turbo-0125 and gpt-3.5-turbo-instruct/gpt-3.5-instruct were used interchangeably in the paper. We included a description of the \\u2018Act\\u2019 baseline (originally used as a baseline in the ReAct paper) in line 401 of our paper.\\n\\n$$\\\\textbf{Clarification on environment dynamics (W2):}$$ We would first like to clarify, based on the original paper on AlfWorld [1], that the environment is not dynamic but partially observable, which means that the agent may only find out whether an object is present in a location after reaching that location. Moreover, as we note from our results in Table 1 (RQ1) and Table 2 (RQ2), the idea of having these reasoning traces for a couple of examples in the prompt pushes the LLM to explore the wrong rooms (as it is merely trying to replicate a similar flow of actions as shown in the example prompt). 
This further emphasizes our argument that these traces are not helpful. Coming back to the reviewer\u2019s claim on the difficulty of providing all the information upfront, we surely would not be able to construct similar prompt variants for environments which are dynamic (for example, a modified AlfWorld). We have further included this as an additional comment in our paper in lines 211-213.\\n\\n$$\\\\textbf{Clarification on RQ1 variants (W3):}$$ The primary motivation behind our work is to point out the reliance on human-written prompts in ReAct. While we agree with the reviewer\\u2019s point that the LLM should generate a prompt given a user query, all the prompts used in ReAct originally require human effort to design prompts with examples that can potentially help the LLM agent to replicate the behavior on a new query problem. Our sensitivity analysis is focused on revealing this exact behavior, such that it can be easily understood which elements in the prompt indeed help with the downstream task.\\n\\n$$\\\\textbf{Conclusion and takeaways (W4):}$$ In this work, we primarily intended to focus on the supposed usefulness of think tags, i.e., interleaved reasoning traces in multi-step text-based decision-making problems. Our hypothesis behind the analysis was to give ReAct the benefit of the doubt, assuming that it is the reasoning trace (its content and location in the prompt) that leads to increased LLM performance on decision-making domains such as AlfWorld and WebShop. To recall, we note from Table 3 (RQ3) that changing the examples in the prompt alone drops the performance across multiple LLM models in the AlfWorld task, which is the complete opposite of the case when we modify the location and content of the think tag in Table 1 (RQ1) and Table 2 (RQ2). 
While our results reiterate other findings regarding the role of examples in the few-shot setting, we wanted to systematically study and show how the reasoning traces, the primary claim behind ReAct and all the follow-up works that build on ReAct (see lines 126-130), are of practically no use and only lead to requiring prompt engineers to include these reasoning traces in the examples.\"}", "{\"comment\": \"Thank you for the response. While it resolves some of my concerns, my main assessment still holds - the findings in this paper are not surprising/enlightening enough from my view given prior studies in similar directions, and hence I will maintain my score.\"}", "{\"title\": \"Rebuttal\", \"comment\": \"We thank all the reviewers for their thoughtful reviews. We are gratified to note that all the reviewers acknowledge the fact that our systematic analysis raises significant questions about the claims in the ReAct paper. Some of the reviewers expressed reservations about the fact that we only point out the flaws in the claims made in the ReAct paper, and don\\u2019t ourselves provide solutions to rectify those flaws. We offer two rebuttals on this: first, ReAct is not just any paper, but a rather influential paper (currently 1739 citations) whose claims have largely been accepted at face value. There are very recent papers [e.g., 1-6] that repeat ReAct claims and claim to further extend them. So pointing out inherent flaws in ReAct is, we believe, an important addition to the science of LLM prompting. Second, the kind of claims ReAct makes are such that there is no actual way to make them true, as has perhaps been shown by recent work evaluating the planning (in)capabilities of LLMs [7, 8]. We hope the reviewers take this view into consideration in evaluating the strength of our contribution.\\n\\n\\n[1] Yao Yao, Zuchao Li, and Hai Zhao. Beyond chain-of-thought, effective graph-of-thought reasoning in large language models. 
arXiv preprint arXiv:2305.16582, 2023.\\n\\n[2] Noah Shinn, Beck Labash, and Ashwin Gopinath. Reflexion: an autonomous agent with dynamic memory and self-reflection. arXiv preprint arXiv:2303.11366, 2023.\\n\\n[3] Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Ehsan Azarnasab, Faisal Ahmed, Zicheng Liu, Ce Liu, Michael Zeng, and Lijuan Wang. Mm-react: Prompting chatgpt for multimodal reasoning and action. arXiv preprint arXiv:2303.11381, 2023.\\n\\n[4] Lluis Castrejon, Thomas Mensink, Howard Zhou, Vittorio Ferrari, Andre Araujo, and Jasper Uijlings. Hammr: Hierarchical multimodal react agents for generic vqa. arXiv preprint arXiv:2404.05465, 2024.\\n\\n[5] Can Cui, Yunsheng Ma, Xu Cao, Wenqian Ye, and Ziran Wang. Receive, reason, and react: Drive as you say, with large language models in autonomous vehicles. IEEE Intelligent Transportation Systems Magazine, 2024.\\n\\n[6] Yunjia Zhang, Jordan Henkel, Avrilia Floratou, Joyce Cahoon, Shaleen Deep, and Jignesh M Patel. Reactable: Enhancing react for table question answering. arXiv preprint arXiv:2310.00815, 2023.\\n\\n[7] Karthik Valmeekam, Matthew Marquez, Sarath Sreedharan, and Subbarao Kambhampati. On the planning abilities of large language models - a critical investigation. Advances in Neural Information Processing Systems, 36, 2024.\\n\\n[8] Kaya Stechly, Karthik Valmeekam, and Subbarao Kambhampati. Chain of thoughtlessness: An analysis of cot in planning. arXiv preprint arXiv:2405.04776, 2024.\"}", "{\"summary\": \"This paper studies what makes ReAct-style prompting work. 
It studies three factors: (1) where the guidance is provided, (2) the different types and structure of this guidance, and (3) varying the resemblance of the example prompt to the queried problem.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"It conducts a very detailed sensitivity study into the effectiveness of ReAct prompting.\", \"weaknesses\": \"Prompt engineering, and which parts of a prompt matter for the final result, has been studied for a long time. Why and what types of intermediate thinking chains/in-context learning can work (such as \\\"Rethinking the Role of Demonstrations\\\"), and on which models they work, have been studied by many papers. However, no such papers are cited or discussed.\\n\\nMany of the observational conclusions, such as the similarity to in-context examples and the uselessness of the thinking trace, are also discussed by various papers, which renders this paper's conclusions not so exciting.\", \"questions\": \"N/A\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal\", \"comment\": \"We thank the reviewer for their thoughtful comments and feedback, and that they find our study effective, well-structured, and the motivation valuable with respect to future research in LLM prompting methods. We have tried addressing each of the concerns below:\\n\\n$$\\\\textbf{Experiment analysis (W1a):}$$ We thank the reviewer for this insightful question. In all our experiment settings, we note that LLMs replicate the exact steps as shown for the examples in the prompts. Hence, they do NOT output what ReAct claims as think tags if those tags are not present in the original prompt. 
We agree that this needs to be clarified clearly in the paper\\u2019s results, and thus we have updated our Results section at lines 425-427 to further emphasize this point.\\n\\n$$\\\\textbf{Failure analysis (W1b):}$$ We would like to highlight that these failures are due to LLMs being unable to correctly reason, possibly because of the mismatch with the prompt examples as shown by our results in Table 3, and clearly not due to simpler syntactic mistakes. This is true even if one of the two prompt examples is the same as the problem query (see variation descriptions in Section 5.3). \\n\\n$$\\\\textbf{Conclusion and Takeaways (W2):}$$ In this work, we primarily intended to focus on the supposed usefulness of think tags, i.e., interleaved reasoning traces in multi-step text-based decision-making problems. Our hypothesis behind the analysis was to give ReAct the benefit of the doubt, assuming that it is the reasoning trace (its content and location in the prompt) that leads to increased LLM performance on decision-making domains such as AlfWorld and WebShop. To recall, we note from Table 3 (RQ3) that changing the examples in the prompt drops the performance across multiple LLM models in the AlfWorld task, which is the complete opposite of the case when we modify the location and content of the think tag in Table 1 (RQ1) and Table 2 (RQ2). While our results reiterate other findings regarding the role of examples in the few-shot setting, we wanted to systematically study and show how the reasoning traces, the primary claim behind ReAct and all the follow-up works that build on ReAct [1-6], are of practically no use and only lead to requiring prompt engineers to include these reasoning traces in the examples. 
\\n\\nTo conclude, we believe that it will be helpful for future works to take these results into account, particularly when designing prompts for text-based decision-making problems, and benefit from avoiding putting any efforts into constructing reasoning traces but rather select the right examples for subsequent problems. We have included this point in our Conclusion section in lines 536-539.\\n\\n[1]Yao Yao, Zuchao Li, and Hai Zhao. Beyond chain-of-thought, effective graph-of-thought reasoning in large language models. arXiv preprint arXiv:2305.16582, 2023.\\n\\n[2]Noah Shinn, Beck Labash, and Ashwin Gopinath. Reflexion: an autonomous agent with dynamic memory and self-reflection. arXiv preprint arXiv:2303.11366, 2023.\\n\\n[3] Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Ehsan Azarnasab, Faisal Ahmed, Zicheng Liu, Ce Liu, Michael Zeng, and Lijuan Wang. Mm-react: Prompting chatgpt for multimodal reasoning and action. arXiv preprint arXiv:2303.11381, 2023.\\n\\n[4] Lluis Castrejon, Thomas Mensink, Howard Zhou, Vittorio Ferrari, Andre Araujo, and Jasper Uijlings. Hammr: Hierarchical multimodal react agents for generic vqa. arXiv preprint arXiv:2404.05465, 2024.\\n\\n[5] Can Cui, Yunsheng Ma, Xu Cao, Wenqian Ye, and Ziran Wang. Receive, reason, and react: Drive as you say, with large language models in autonomous vehicles. IEEE Intelligent Transportation Systems Magazine, 2024.\\n\\n[6] Yunjia Zhang, Jordan Henkel, Avrilia Floratou, Joyce Cahoon, Shaleen Deep, and Jignesh M Patel. Reactable: Enhancing react for table question answering. arXiv preprint arXiv:2310.00815, 2023.\"}", "{\"comment\": \"Thank you for the author's response. It addressed some of my concerns. However:\\n\\n1. The authors still did not provide a reasonable explanation for Q1. We observed that GPT-4 is worse than GPT-3, which is interesting and my question is about the reason behind this observation. 
However, the reply is only about the \\\"importance\\\" rather than the \\\"reason\\\".\\n\\n2. Regarding Q2, if the results from the original paper cannot be reproduced, alternative reasonable approaches should be taken, e.g., contacting the authors, rather than presenting results with significant discrepancies. This makes it difficult for me to determine whether the differences arise from errors in your own implementation. Furthermore, I do not think replacing PALM with GPT would result in such a large performance gap.\\n\\n3. The importance of \\\"in-context example similarity\\\" has already been extensively discussed and studied in the community. I recommend that the authors focus on analyzing why the \\\"interleaved reasoning trace\\\" is not effective in future versions.\\n\\nTherefore, I will maintain my current score.\"}", "{\"title\": \"Rebuttal\", \"comment\": \"We thank the reviewer for their thoughtful comments and feedback, and are glad that they find our study effective, well-structured, and the motivation valuable with respect to future research in LLM prompting methods. We have tried addressing each of the concerns below:\\n\\n$$\\\\textbf{Generalizability of results to other tasks (W1/Q1):}$$ We would like to first highlight that we show experiments on both decision-making benchmarks that were originally used in ReAct. Furthermore, to recall the working of ReAct using the example of the Put task (e.g., put some bottle on toilet) in the AlfWorld domain, the LLM needs to be provided the exact same action space via the examples to solve a subsequent Put task problem. As we show in our results for changing the examples in the prompt in Table 3, the LLM performance significantly diminishes. Note that there are cases where the examples can be from the Heat task (where the agent has to put some object in the microwave and heat it) that can be expected to work for a Put task problem (where the agent only has to put some object). 
The reason is that the action space for the Heat task already subsumes the action space required for solving the Put task, and yet, the LLM\\u2019s sensitivity is revealed by a significant drop in performance due to lack of generalizability. This further strengthens our argument that the LLM\\u2019s performance has little to no correlation with the interleaved reasoning traces (the example for the Heat task contained the interleaved reasoning trace or the think tag, which is similar to the think tag used for a Put task example).\\n\\n$$\\\\textbf{Approach justification/LLM performance issues (W2/Q2):}$$ We do include LLAMA-3.1-8B results for our RQ1 and RQ2 results on Webshop in Table 2, and observe a similar trend in results as we see in the black-box models from the GPT and Claude family. We agree that while these models may have different architectures and abilities, we tried establishing baseline ReAct results for each of these models and analyzed the gain or drop in performance across our different prompt variations. In this work, we do not aim to benchmark or analyze any single LLM\\u2019s reasoning abilities on decision-making tasks (to understand its limitations, as rightly pointed out by the reviewer), but rather intend to understand the robustness/brittleness of various LLMs with respect to different components in the ReAct-style prompting method for these tasks. Taking the reviewer\\u2019s suggestion, we have further updated our paper in lines 374-377 to reflect this point clearly for the readers.\\n\\n$$\\\\textbf{Takeaways and workflow completeness (W3/Q3):}$$ In this work, we primarily intended to focus on the supposed usefulness of think tags, i.e., interleaved reasoning traces in multi-step text-based decision making problems. 
Our hypothesis behind the analysis was to give ReAct the benefit of the doubt, assuming that it is the reasoning trace (its content and location in the prompt) that leads to increased LLM performance on decision-making domains such as AlfWorld and WebShop. To recall, we note from Table 3 (RQ3) that changing the examples in the prompt drops the performance across multiple LLM models in the AlfWorld task, which is completely opposite to the case when we modify the location and content of the think tag in Table 1 (RQ1) and Table 2 (RQ2). While our results reiterate other findings regarding the role of examples in the few-shot settings, we wanted to systematically study and show how the reasoning traces, which are the primary claim behind ReAct and all the follow-up works that build on ReAct (see lines 126-130), are of practically no use and only lead to requiring prompt engineers to include these reasoning traces in the examples. \\nRegarding the different prompt and context window sizes, we do include the experiments (results shown in Table 3 for the column \\u2018All\\u2019) where we incorporate one example each from all AlfWorld tasks, thereby increasing the prompt length significantly as compared to the other settings. To include the point on the size of the context window clearly in our experiment section, we have now shown the context window sizes for each of the LLMs that we use for our results in lines 366-374.\\n\\nTo conclude, we believe that it will be helpful for practitioners and future works to take these results into account, particularly when designing prompts for text-based decision-making problems, and to benefit from avoiding putting any effort into constructing reasoning traces but rather selecting the right examples for subsequent problems. We have included this point in our Conclusion section in lines 536-539.\"}" ] }
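The rebuttals in the record above repeatedly contrast few-shot examples with and without interleaved reasoning traces ("think" tags). As a purely illustrative aside (the trajectory below is a hypothetical AlfWorld-style example, not one of the paper's actual prompts), the two prompt formats being ablated differ only in the interleaved `think:` lines:

```python
# Illustrative ReAct-style few-shot trajectory for an AlfWorld-like Put task.
# This is a hypothetical example for exposition, NOT one of the paper's prompts.
WITH_THINK = """\
Task: put some bottle on toilet.
> think: I need to find a bottle first; bottles are often in cabinets.
OK.
> go to cabinet 1
You open cabinet 1 and see a bottle 1.
> take bottle 1 from cabinet 1
You pick up bottle 1.
> go to toilet 1
> put bottle 1 in/on toilet 1
You put bottle 1 on toilet 1.
"""

# The ablated variant simply strips the interleaved reasoning trace (the
# "think" steps), keeping the action/observation skeleton intact.
WITHOUT_THINK = "\n".join(
    line for line in WITH_THINK.splitlines()
    if not line.startswith("> think:") and line != "OK."
)

print("think:" in WITHOUT_THINK)  # -> False
```

The study's reported finding is that removing these `think:` lines (while keeping the action/observation examples) barely changes task success, whereas swapping in examples from a different task hurts it substantially.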
85G2t3yklD
Towards Unbiased Learning in Semi-Supervised Semantic Segmentation
[ "Rui Sun", "Huayu Mai", "Wangkai Li", "Tianzhu Zhang" ]
Semi-supervised semantic segmentation aims to learn from a limited amount of labeled data and a large volume of unlabeled data, which has witnessed impressive progress with the recent advancement of deep neural networks. However, existing methods tend to neglect the fact of class imbalance issues, leading to the Matthew effect, that is, the poorly calibrated model’s predictions can be biased towards the majority classes and away from minority classes with fewer samples. In this work, we analyze the Matthew effect present in previous methods that hinder model learning from a discriminative perspective. In light of this background, we integrate generative models into semi-supervised learning, taking advantage of their better class-imbalance tolerance. To this end, we propose DiffMatch to formulate the semi-supervised semantic segmentation task as a conditional discrete data generation problem to alleviate the Matthew effect of discriminative solutions from a generative perspective. Plus, to further reduce the risk of overfitting to the head classes and to increase coverage of the tail class distribution, we mathematically derive a debiased adjustment to adjust the conditional reverse probability towards unbiased predictions during each sampling step. Extensive experimental results across multiple benchmarks, especially in the most limited label scenarios with the most serious class imbalance issues, demonstrate that DiffMatch performs favorably against state-of-the-art methods.
[ "Semi-Supervised Semantic Segmentation" ]
Accept (Poster)
https://openreview.net/pdf?id=85G2t3yklD
https://openreview.net/forum?id=85G2t3yklD
ICLR.cc/2025/Conference
2025
{ "note_id": [ "njZzcjzUJ9", "ZnpO4wufg0", "SR2Yp32Al4", "QYnMdFhyyj", "4SKzsE97T5" ], "note_type": [ "meta_review", "official_review", "official_review", "official_review", "decision" ], "note_created": [ 1734770145478, 1730596640427, 1730559515059, 1729167455285, 1737523710526 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5501/Area_Chair_FC6E" ], [ "ICLR.cc/2025/Conference/Submission5501/Reviewer_iDj4" ], [ "ICLR.cc/2025/Conference/Submission5501/Reviewer_w9jN" ], [ "ICLR.cc/2025/Conference/Submission5501/Reviewer_nyBx" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"metareview\": \"This paper addresses semi-supervised semantic segmentation using diffusion models in a teacher-student SSL framework. The authors claim that generative models are more robust to class imbalance and, hence, developed a novel and interesting discrete data generation pipeline for semi-supervised semantic segmentation, which is novel and interesting. The paper is well-written with sufficient details for reproducing experimental results. The proposed pipeline is easy to implement with minor changes to standard diffusion model training. The paper also gave theoretical justification from an information-theoretical perspective for why DiffMatch improves the quality of pseudo labels, which is appreciated by reviewers.\\nThe downside of the method is increased inference cost compared to discriminative models due to the multi-step denoising process in diffusion models. Also, diffusion models are data-driven and have problems when trained with imbalanced data.\\nThe AC recommends this paper for acceptance primarily because it successfully adapts diffusion models for semi-supervised semantic segmentation and provides sound theoretical justification. 
All three reviewers have positive ratings for this paper.\", \"additional_comments_on_reviewer_discussion\": \"Reviewer nyBx was not sure why a diffusion model was used for this problem, as SOTA models for semantic segmentation are not based on diffusion models.\\nThe authors did not fully address the concern but mentioned that the backbone of diffusion models here is lightweight and similar to DeepLabV3+. Hence, the model architecture is not more complex yet its performance is superior to discriminative models.\"}", "{\"summary\": \"This paper analyzes the issue of class imbalance, which leads to model predictions being biased towards head classes and away from tail classes, summarizing it as the Matthew effect. To address this problem, the authors propose a new method called DiffMatch, which formulates the semi-supervised semantic segmentation task as a conditional discrete data generation problem, alleviating the Matthew effect from a generative perspective. Furthermore, based on the mathematical derivations, the authors introduce a debiased adjustment to adjust the conditional reverse probability, reducing the risk of overfitting to head classes during the generation process. Experimental results demonstrate the effectiveness of their proposed method in the challenging dense pixel-level classification task.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1.\\tThe authors establish a well-thought-out framework to integrate generative models into semi-supervised learning (SSL) for the challenging semi-supervised semantic segmentation task, mitigating the class imbalance issue from an interesting generative perspective, which is quite novel.\\n\\n2.\\tThe mathematical derivation of the debiased adjustment is ingenious and thoughtful. 
The theoretical analysis not only explains the reasons behind the proposed method's effectiveness in addressing the class imbalance issue in generative modeling but also provides valuable insights and guidance for future research in SSL techniques.\\n\\n3.\\tThe paper is presented in a clear and concise manner, effectively conveying the intuitive motivation and ideas behind the proposed method. The authors thoroughly analyze the Matthew effect present in previous methods and leverage mathematical tools to alleviate the issue from a generative perspective.\\n\\n4.\\tThe evaluation in this paper is sufficient and comprehensive. In addition to evaluating the effectiveness of the proposed method on common semi-supervised semantic segmentation benchmarks, the authors extend their analysis to other domains, including remote sensing and medical images.\", \"weaknesses\": \"1.\\tAlthough the authors report the trade-off between performance and computational cost of the proposed method during inference, the computational cost during the *training* stage compared to the baseline should also be reported to demonstrate the method's effectiveness.\\n\\n2.\\tThe specific implementation details of the scheme adopted in this work have not been mentioned. Is it a teacher-student network scheme or another type of scheme?\\n\\n3.\\tIn Appendix C, the factor t is introduced in Equality \\u2462 without sufficient description. Please provide more details on the reasoning behind this step and how t is derived. What is the impact of t on the overall debiased adjustment? Discuss whether removing or simplifying t would impact the method's performance.\\n\\n4.\\tWhile I understand that the ICLR submission deadline is earlier than the release phase for ECCV papers, I encourage the authors to include comparisons with some recent works from ECCV 2024, such as MPMC [a], which could serve as a recent example. 
[a] Beyond Pixels: Semi-Supervised Semantic Segmentation with a Multi-scale Patch-based Multi-Label Classifier. ECCV 2024.\\n\\n5.\\tI encourage the authors to discuss the limitations of the proposed method.\\n\\n6.\\tMinor typos: The \\u201cits\\u201d in \\u201ctaking advantage of its their better class-imbalance tolerance\\u201d in L19 should be deleted.\", \"questions\": \"There are also some minor questions.\\n\\n1.\\tIn the experiments, the authors use a fixed number of diffusion sampling steps, and the results demonstrate that the proposed method achieves competitive performance with reasonable computational cost. However, it is worth considering if a dynamic adjustment of sampling steps could potentially lead to further reductions in computational cost. Please discuss this possibility, as I think it will be interesting and feasible.\\n\\n2.\\tI am curious about the effectiveness of DiffMatch when applied to other related domains, such as semi-supervised classification.\\n\\nOverall, I think now this paper meets the bar for ICLR, and I will consider raising my score if my questions are addressed during the rebuttal.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces a new method called DiffMatch that aims to address the Matthew effect in semi-supervised semantic segmentation. The approach reframes semi-supervised semantic segmentation as a conditional discrete data generation problem, with a debiased adjustment. This adjustment mathematically modifies the conditional reverse probability at each sampling step to produce unbiased results. 
The experimental results show that DiffMatch outperforms current state-of-the-art methods, especially in scenarios with limited labeling and severe class imbalance issues.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"DiffMatch is technically innovative, particularly in terms of incorporating debiased adjustments and conditional discrete data generation problems. It is not only theoretically appealing, but also performs well in experiments. Your analysis and explanation of how DiffMatch performs well in both head and tail classes are clear and convincing, which improves the model's ability to recognize minority classes and has potential applications in a variety of computer vision tasks.\", \"weaknesses\": \"1. MORE VISUALIZATION: I have noticed that Remote Sensing Interpretation and Medical Image segmentation are mentioned in the scalability for other scenarios, I suggest that the authors further provide information on the visualization of the results of the two datasets mentioned below to help readers more intuitively understand DiffMatch's performance.\\n\\n2. SCALABILITY FOR OTHER SCENARIOS: The experimental part is extensive and in-depth, covering the fields of natural images, remote sensing images and medical images. I suggest that the authors further explore how the model performs on different datasets, and whether there are specific categories or scenarios that have a greater impact on model performance. \\n\\n3. Writing quality: The overall writing quality of the paper is good, with clear logic and a well-structured layout. 
However, some sections, such as the discussion of related work, could be further expanded to better position the study within the existing literature.\", \"questions\": \"Please refer to the weakness section for detailed suggestions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents DiffMatch, which aims to use Diffution models to solve the data imbalance in SSS. Specifically, Diffmatch employs conditional discrete data generation to align the predictions with real class distributions. The authors conduct extensive experiments to demonstrate their effectiveness. While the paper tells a good story and offers thorough formulations as evidence, there remain certain concerns that could potentially impact the overall quality.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The motivation is clear and easy to follow.\\n2. The authors provide extensive experimental results to demonstrate the effectivness of their method.\", \"weaknesses\": \"Method:\\n1. Using diffusion models to solve segmentation is not very sound to me, as diffusion models are not yet mainstream in visual perception solutions. Despite mentioning some papers in related works, these approaches do not rank as top performers in segmentation.\\n2. The primary challenge in SSS is noisy pseudo labels. Even though the DiffMatch can align the predictions of unlabeled data, there is no guarantee they are correct.\\n3. I think the learning of unlabeled images is still based on the knowledge learned from labeled images. The process of adding noise and denoise did not bring new information to improve the quality of pseudo labels. \\n4. How to measure the effectiveness of DiffMatch in prediction alignment? Instead of final segmentation results, can we analyze them by distribution visualization or other methods?\\n5. 
Intuitively, the diffusion process will increase computation costs. However, as in Table 5, DiffMatch is more efficient than other methods. In addition, it seems that only a few sampling steps can already achieve good performances. Why? In addition, what are the effects of the new loss functions on efficiency (Secs 3.2 and 3.3)?\", \"experiments\": \"1. The authors claim that DiffMatch can extend to remote sensing and medical. However, DiffMatch is designed for the long-tailed problem with many categories. WHU-CD is a binary change detection dataset. ACDC is for heart segmentation. Both datasets do not have long-tail problems. Why does DiffMatch still outperform other methods by a clear margin? I think if the authors would like to prove the effectiveness of DiffMatch in long-tail, it would be better to select other datasets (e.g., containing different organs and tumors). Otherwise, such statements may be over-claimed.\\n2. Some recent works employ transformers and push the SSS to another level. Can DiffMatch be applied to transformers?\\n3. The authors claimed that \\\"DiffMatch will serve as a solid baseline and facilitate future research\\\", but I don't think Diffusion models would be an extendable design for future research in SSS. \\n4. DiffMatch compares a lot with UniMatch and follows its settings. UniMatch released a strong codebase with checkpoints and training logs for following research. I think this part is important for DiffMatch to be a solid baseline.\\n\\nOverall, I am currently inclined to the negative side. I am afraid that potential readers may consider this paper as a story but not a practical solution. However, I would be very glad to improve my ratings if the authors could solve my concerns.\", \"questions\": \"Please refer to the weakness for the questions.\\n\\nMinor Suggestion (not vital):\\nFigs 2 and 3 are not very illustrative and too small. Could they be enlarged, considering their significance in explaining the method? 
And the caption can be more illustrative since there are numerous mathematical symbols in the figures. It was only after delving into Section 3 that I grasped the workings of DiffMatch. The current presentation may be challenging to follow.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}" ] }
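The DiffMatch abstract above mentions a mathematically derived debiased adjustment applied to the conditional reverse probability at each sampling step. That exact derivation is not reproduced in this record, so the sketch below instead illustrates the general idea with the standard prior-division form of debiasing (logit adjustment) and made-up class priors; it is not the paper's formula:

```python
# Hypothetical per-class pixel frequencies estimated from (pseudo-)labels,
# head-heavy as in the class-imbalance setting the paper targets.
PRIORS = [0.70, 0.20, 0.07, 0.03]

def debias(probs, tau=1.0):
    """Divide each class probability by its prior (to the power tau) and
    renormalize -- standard logit adjustment, shown only to illustrate how a
    per-step adjustment can steer predictions away from head classes."""
    adjusted = [p / (q ** tau) for p, q in zip(probs, PRIORS)]
    total = sum(adjusted)
    return [a / total for a in adjusted]

# A pixel the biased model weakly assigns to the head class:
probs = [0.40, 0.30, 0.20, 0.10]
adjusted = debias(probs)
# After adjustment the tail classes gain relative mass, flipping the argmax
# from the head class (index 0) to a tail class.
print(probs.index(max(probs)), adjusted.index(max(adjusted)))  # -> 0 3
```

The temperature `tau` (a name introduced here, not from the paper) controls how aggressively mass is shifted toward tail classes.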
85Eej2kUHQ
Certified Defense Against Complex Adversarial Attacks with Dynamic Smoothing
[ "Francesco Quinzan", "Marta Kwiatkowska" ]
Randomized smoothing has emerged as a certified defence mechanism with probabilistic guarantees that works at scale. However, current randomized smoothing methods offer theoretical guarantees that are limited by their reliance on specific noise distributions, and they struggle to handle complex adversarial attacks. In this paper, we propose a novel certification method based on randomized smoothing designed to handle complex adversarial attacks, including combinations of multiple attack types. We call this method Dynamic Smoothing (DSmooth). Our key idea is to incorporate more general distributions for smoothing than isotropic Gaussian noise, for which probabilistic guarantees can be derived in terms of the Mahalanobis distance. These general distributions make the smoothed classifier more robust against a wide range of threats, including localized adversarial attacks and multi-attacks. We validate the performance of our method experimentally on challenging threat models using CIFAR-10 and ImageNet, and demonstrate its superiority over state-of-the-art defenses in terms of certified accuracy. Our results show that the proposed method significantly improves the robustness of machine learning models against complex attacks, advancing their suitability for use in safety-critical applications. Code: [removed for review]
[ "AI safety", "adversarial robustness", "randomized smoothing" ]
https://openreview.net/pdf?id=85Eej2kUHQ
https://openreview.net/forum?id=85Eej2kUHQ
ICLR.cc/2025/Conference
2025
{ "note_id": [ "rz32t2NAKu", "rVjVJpdc1o", "jD3Uz4Vg0w", "eoa1wXGqBc", "IkxgsflfZD", "Es8zL1h3JQ", "DYVyddMKqU" ], "note_type": [ "official_review", "official_review", "official_comment", "comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1730452705318, 1729602069674, 1732296624967, 1732296777972, 1729475226960, 1732296575829, 1732296524245 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10353/Reviewer_fV7Q" ], [ "ICLR.cc/2025/Conference/Submission10353/Reviewer_9udX" ], [ "ICLR.cc/2025/Conference/Submission10353/Authors" ], [ "ICLR.cc/2025/Conference/Submission10353/Authors" ], [ "ICLR.cc/2025/Conference/Submission10353/Reviewer_1983" ], [ "ICLR.cc/2025/Conference/Submission10353/Authors" ], [ "ICLR.cc/2025/Conference/Submission10353/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes *Dynamic Smoothing* (DSMOOTH), an extension of the randomized smoothing framework aimed at enhancing robustness against complex adversarial attacks. Traditional randomized smoothing methods rely on isotropic Gaussian noise, which limits their effectiveness against multi-norm and structured adversarial threats. Authors overcome these limitations by employing a broader range of noise distributions and using Mahalanobis distance to define probabilistic robustness guarantees, making it more adaptable to localized and non-uniform attacks. The authors validate DSMOOTH\\u2019s effectiveness through experiments on CIFAR-10 and IMAGENET, where it demonstrates significantly improved certified accuracy against state-of-the-art baselines in multi-attack scenarios.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The core strength of the paper is its strong theoretical foundation, presenting a novel idea of using Mahalanobis distance to extend the randomized smoothing framework, enabling it to handle complex adversarial attacks. 
It demonstrates originality by expanding randomized smoothing to incorporate a range of noise distributions beyond isotropic Gaussian noise, thereby providing robustness against multi-norm, multi-type adversarial threats, which traditional smoothing methods struggle to defend against. This is a valuable and important direction to explore in the context of randomized smoothing. The experimental results also support the theoretical claims, further adding to the strength of the paper.\", \"weaknesses\": \"While the paper presents promising theoretical advancements, several issues weaken its overall contribution and lead me to lean towards rejection. Although the experimental results support the theoretical claims, the setup itself lacks rigor. For instance, the evaluation is conducted on only 500 images from the CIFAR-10 test set rather than the full test set, potentially skewing the robustness results and limiting the generalizability of the findings. This choice is not well-justified, especially given the scale of CIFAR-10 and the availability of complete test sets. Additionally, the authors choose not to compare against baselines designed for $l_{p}$-norm defenses with $p > 2$, but the reasons for this omission are vague. This exclusion undermines the evaluation since these defenses are common benchmarks in adversarial robustness research. Further, while the paper aims to address robustness in a multi-attack setting, it only tests a single combination of attacks (Square Attack + FGSM). The limitation to just one multi-attack scenario significantly weakens the experimental results, as the approach\\u2019s effectiveness under diverse, real-world multi-attack combinations remains unverified. 
Expanding the experimental setup to include multiple combinations and comprehensive baseline comparisons would strengthen the paper\\u2019s contributions.\\n\\nOne more thing I would like to point out is that the authors discuss the isotropic Gaussian distribution (this has been referred to as isotopic consistently in the paper, which is wrong terminology); something similar has been discussed in a few other works in the context of randomized smoothing [1,2], but similarities and/or dissimilarities with those methods are not discussed in the related works section.\", \"references\": \"[1] Hanbin Hong and Yuan Hong. Certified adversarial robustness via anisotropic randomized smoothing. arXiv preprint arXiv:2207.05327, 2022.\\n[2] Francisco Eiras, Motasem Alfarra, M Pawan Kumar, Philip HS Torr, Puneet K Dokania, Bernard Ghanem, and Adel Bibi. Ancer: Anisotropic certification via sample-wise volume maximization. arXiv preprint arXiv:2107.04570, 2021.\", \"questions\": \"1. Could the authors provide the evaluation results of their method on the entire CIFAR-10 test set? Using the full test set is standard in the field and would provide stronger evidence for the method's generalizability.\\n\\n\\n2. The authors exclude baselines based on $l_{p}$-norm defenses with $p > 2$. Could the authors provide a more detailed justification for this choice? Although this is briefly mentioned in Section 5.1 under **Baselines**, it is not very clear to the reader why this choice was made. Specifically, could the authors expand on the line: *\\u201cWe do not consider randomized smoothing techniques with certification guarantees in terms of $l_{p}$ norms with $p > 2$, since impossibility results are known for increasing $p$.\\u201d*?\\n\\n\\n3. Under the multi-attack setting, only one combination of attacks (Square Attack + FGSM) was evaluated. Testing multiple attack combinations could better demonstrate DSMOOTH's robustness. 
Are there plans to expand the experimental setup to include additional attack combinations?\\n\\n\\n4. Other works have discussed anisotropic approaches in the context of randomized smoothing, such as Hong and Hong (2022) and Eiras et al. (2021). Could the authors elaborate on the similarities or differences between their approach and these methods, particularly in the related works section?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper investigates the effectiveness of dynamic smoothing as a defense mechanism against complex adversarial attacks in machine learning models. The authors assert that existing defense strategies often fall short in providing certified robustness, particularly in the face of sophisticated adversarial inputs. Their key contributions include:\\n1.\\tTheoretical Analysis: The paper establishes a dynamic smoothing framework that adapts the noise level based on input complexity, proving that this approach enhances the model's certified robustness against a variety of adversarial attacks.\\n2.\\tEmpirical Findings: The paper demonstrates through extensive experiments that the proposed dynamic smoothing method significantly improves robustness compared to traditional static smoothing techniques, while maintaining performance on both easy and hard samples.\\n3.\\tProposed Solutions: The paper introduces specific training strategies, including adaptive noise levels and robust certification techniques, which allow for effective defense against complex attacks. 
These solutions not only enhance certified robustness but also improve overall model performance.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"The paper introduces an approach to dynamic smoothing that adapts noise levels based on input complexity, distinguishing it from traditional static methods and offering a new direction in randomized smoothing.\", \"weaknesses\": \"1. The scope of this paper is limited. Its primary contribution, anisotropic Gaussian randomized smoothing, has already been addressed by ANCER [1], which generalized Gaussian randomized smoothing to anisotropic Gaussian distributions and provided a robustness certification. This significantly diminishes the novelty of the current work. Moreover, the paper does not reference this closely related work [1].\\n\\n2. The writing lacks clarity, as exemplified by the inclusion of the phrase \\\"Code: [removed for review]\\\" in the abstract; this should be omitted. The experimental setup is inadequately explained. The paper needs to clarify why the Mahalanobis-norm certified accuracy of DSMOOTH is compared to the L1 and L2-norm certified accuracy of other methods. It should also discuss the rationale for using Mahalanobis Distance and identify practical scenarios where its robustness is applicable, assessing DSMOOTH's performance in those contexts. \\n\\n3. The benefits introduced by the modified noise are not well-presented. Finally, unnecessary text in the summary should be removed.\\n\\n[1] ANCER: Anisotropic Certification via Sample-wise Volume Maximization. PMLR 2022\", \"questions\": \"1. Add a reference to the work [1].\\n2. Clarify why the Mahalanobis-norm certified accuracy of DSMOOTH is compared to the L1 and L2-norm certified accuracy of other methods.\\n3. Discuss the rationale for using Mahalanobis Distance and identify practical scenarios where its robustness is applicable, assessing DSMOOTH's performance in those contexts. 
\\n\\n[1] ANCER: Anisotropic Certification via Sample-wise Volume Maximization. PMLR 2022\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Q: It is currently unclear what quantity is being reported as \\\"Certified Radius\\\" for each defense method ( L1, L2, or Mahalanobis Distance radius). (It would be better to report \\\"Certified Volumes\\\" as in (Pfrommer et al 2023; Tecot 2021))\", \"a\": \"If the given matrix is not positive-definite, one can compute the pseudo-inverse or Moore-Penrose inverse instead.\", \"q\": \"How is $\\\\Sigma^{-1}$ computed with a low-rank approximation of $\\\\Sigma$?\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We would like to thank the reviewers for taking the time to provide insightful feedback. We understand that the current submission has various shortcomings, including lack of novelty, as well as the other issues pointed out by the reviewers. For this reason, we have decided to withdraw this submission.\"}", "{\"summary\": \"This paper proposes a generalization of Randomized-Smoothing-based robustness certification. 
It first claims a very general result (Theorem 4.3), which states that if one applies randomized smoothing (Cohen et al 2019) with the added Gaussian noise transformed by an arbitrary (invertible) function p, then the resulting smoothed classifier will enjoy certified robustness under a metric determined by $p^{-1}.$ Precisely, for a smoothed classifier:\\n\\n$$g(x) := \\\\arg \\\\max_{i \\\\in [C]} P_{\\\\epsilon \\\\sim N(0,\\\\sigma^2 I)} [f(x + p(\\\\epsilon)) = i], $$\\n\\nif \\n\\n$$ P_{\\\\epsilon\\\\sim N(0,\\\\sigma^2 I)} [f(x + p(\\\\epsilon)) = A] \\\\geq p_A \\\\geq p_B \\\\geq \\\\max_{B \\\\neq A} P_{\\\\epsilon \\\\sim N(0,\\\\sigma^2 I)} [f(x + p(\\\\epsilon)) = B]$$\\n\\nThen, for any perturbation $\\\\delta$ such that $|p^{-1}(\\\\delta)|_2 < \\\\sigma/2 * (\\\\Phi^{-1}(p_A) - \\\\Phi^{-1}(p_B))$, we have that $g(x+\\\\delta) = A.$\\n\\nThis general result is then used to prove a more specific smoothing certificate (Lemma 4.5), claiming that if one applies randomized smoothing with anisotropic Gaussian noise under an arbitrary covariance matrix $\\\\Sigma$, then this results in certified robustness under the Mahalanobis metric defined by $\\\\Sigma$:\\n\\n$$\\\\text{Mahalanobis}_\\\\Sigma (\\\\delta) := \\\\sqrt{\\\\delta^T \\\\Sigma^{-1} \\\\delta}$$\\n\\nThis more specific anisotropic smoothing result is proposed as the basis for a practical method, D-Smooth, for certifiably robust classification. In D-Smooth, the smoothing covariance matrix is determined as the empirical covariance of adversarial attack perturbations computed from samples in the dataset. Note that because different attacks will result in different covariance matrices, this allows the smoothing distribution to be tailored to a specific attack or threat model. 
Experiments are conducted on CIFAR-10 and ImageNet, using an attack procedure that combines an L_2 and an L_infinity attack.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The particular idea of smoothing using a covariance matrix derived specifically from the empirical distribution of adversarial attack directions is, to my knowledge, novel, and may be promising.\\n\\nThe presentation of the theoretical results is clear for the most part.\", \"weaknesses\": \"# Theoretical Claims\\n\\nThe more-general theoretical claim of this paper (Theorem 4.3) is incorrect, while the more specialized claim for anisotropic Gaussian smoothing (Lemma 4.5) can be proved directly with a much simpler proof than the one provided, and in fact appears in prior works.\\n\\n- **Theorem 4.3 is incorrect**: one can construct a counterexample in one dimension. In particular, let:\\n\\n$$\\np(x) :=\\n\\\\begin{cases}\\nx,&\\\\text{if }x\\\\geq 0\\\\text{ or }x < -2\\\\\\\\\\\\\\\\\\nx -1,&\\\\text{if } -1 \\\\leq x < 0\\\\\\\\\\\\\\\\\\nx +1,&\\\\text{if } -2 \\\\leq x < -1\\n\\\\end{cases}\\n$$\\nNote that $p$ is deterministic and invertible (in fact, $p^{-1} = p$), and let:\\n\\n$$\\nf(x) :=\\n\\\\begin{cases}\\nA,&\\\\text{for }x\\\\leq -1\\\\text{ or }0 \\\\leq x < 1\\\\\\\\\\\\\\\\\\nB,&\\\\text{elsewhere.}\\\\\\\\\\\\\\\\\\n\\\\end{cases}\\n$$\\n\\n(Note, for later, that:\\n$$\\nf(p(\\\\epsilon)) =\\n\\\\begin{cases}\\nA,&\\\\text{for }\\\\epsilon\\\\leq -2\\\\text{ or }-1 \\\\leq \\\\epsilon < 1\\\\\\\\\\\\\\\\\\nB,&\\\\text{elsewhere.}\\\\\\\\\\\\\\\\\\n\\\\end{cases}\\n$$ and $$ f(p(\\\\epsilon) + 0.5) =\\n\\\\begin{cases}\\nA,&\\\\text{for }\\\\epsilon< -2\\\\text{ or }-1.5 \\\\leq \\\\epsilon \\\\leq -0.5 \\\\text{ or } 0 \\\\leq \\\\epsilon < 0.5 \\\\\\\\\\\\\\\\\\nB,&\\\\text{elsewhere.}\\\\\\\\\\\\\\\\\\n\\\\end{cases}\\n$$),\\nand let $\\\\sigma = 1$.\\n\\nWe consider the point $x = 0$. \\nThen, \\n\\n $p_A = 
P_{\\\\epsilon\\\\sim\\\\mathcal{N}(0,1)} [f(x + p(\\\\epsilon)) = A ] = P_{\\\\epsilon\\\\sim\\\\mathcal{N}(0,1)} [f(p(\\\\epsilon)) = A ]$\\n\\n$ = \\\\int^{-2}_{-\\\\infty} \\\\text{NormPDF}(x) $ \\n\\n$+ \\\\int^1_{-1} \\\\text{NormPDF}(x) \\\\approx 0.7055$\\n\\n and $p_B = 1-p_A$\\n\\n Then, $\\\\sigma/2 \\\\cdot (\\\\Phi^{-1}(p_A) - \\\\Phi^{-1}(p_B)) = \\\\Phi^{-1}(p_A) \\\\approx 0.5403$, so Theorem 4.3 states that for any (1D) displacement vector $\\\\delta$ such that: $|p^{-1}(\\\\delta)|_2 = |p^{-1} (\\\\delta)| \\\\leq 0.54$, we should have that:\\n $g(x + \\\\delta) = A$, \\nwhich means that \\n\\n$ P_{\\\\epsilon\\\\sim\\\\mathcal{N}(0,1)} [f(x + \\\\delta+ p(\\\\epsilon)) = A ]\\\\geq 0.5$\\n\\nHowever, if we plug back in $x = 0$ and use displacement vector $\\\\delta = 0.5$, so $|p^{-1} (\\\\delta)| = |p^{-1} (0.5)| = 0.5 \\\\leq 0.54$, we have:\\n\\n $p_A = P_{\\\\epsilon\\\\sim\\\\mathcal{N}(0,1)} [f(x + \\\\delta + p(\\\\epsilon)) = A ] = P_{\\\\epsilon\\\\sim\\\\mathcal{N}(0,1)} [f(p(\\\\epsilon) + 0.5) = A ]$\\n\\n$ = \\\\int^{-2}_{-\\\\infty} \\\\text{NormPDF}(x) $ \\n\\n$ + \\\\int^{-0.5}_{-1.5} \\\\text{NormPDF}(x) $ \\n\\n$+ \\\\int^{0.5}_{0} \\\\text{NormPDF}(x) \\\\approx 0.4559 < 0.5$\\n\\nWhich is a contradiction.\\n\\n- **The proof provided in this paper of Lemma 4.5 relies on Theorem 4.3 and is therefore invalid. Despite this, Lemma 4.5, which is the result actually used for the proposed D-Smooth algorithm, happens to be true. However, it can be proven correct with just a few lines as a corollary of Cohen et al. 2019's theorem.**\\n\\nIn brief, starting with Cohen et al. 
2019's theorem, with $\\\\sigma = 1$:\\n\\nfor all $f : \\\\mathbb{R}^d \\\\rightarrow Y,\\\\\\\\, x \\\\in \\\\mathbb{R}^d,$ and $\\\\delta$ such that $|\\\\delta|_2 < 1/2 \\\\cdot (\\\\Phi^{-1}(p_A) - \\\\Phi^{-1}(p_B))$: \\n\\n$$P_{\\\\epsilon\\\\sim\\\\mathcal{N}(0,1)} [f(x + \\\\epsilon) = A ] \\\\geq p_A \\\\geq p_B \\\\geq \\\\max_{B\\\\neq A} P_{\\\\epsilon\\\\sim\\\\mathcal{N}(0,1)} [f(x + \\\\epsilon) = B ] $$ implies \\n\\n$$P_{\\\\epsilon\\\\sim\\\\mathcal{N}(0,1)} [f(x + \\\\delta+ \\\\epsilon) = A ] > \\\\max_{B \\\\neq A}P_{\\\\epsilon\\\\sim\\\\mathcal{N}(0,1)} [f(x + \\\\delta+\\\\epsilon) = B ]$$\\n\\nIf we fix the function $f$, and apply the theorem to the function $f'(x) := f(L(x))$, where $LL^T$ is the Cholesky decomposition of the matrix $\\\\Sigma$, then we have, by linearity:\\n\\nfor all $f : \\\\mathbb{R}^d \\\\rightarrow Y,\\\\\\\\, x' \\\\in \\\\mathbb{R}^d,$ and $\\\\delta'$ such that $|\\\\delta'|_2 < 1/2 \\\\cdot (\\\\Phi^{-1}(p_A) - \\\\Phi^{-1}(p_B))$: \\n\\n$$P_{\\\\epsilon\\\\sim\\\\mathcal{N}(0,1)} [f(Lx' + L\\\\epsilon) = A ] \\\\geq p_A \\\\geq p_B \\\\geq \\\\max_{B\\\\neq A} P_{\\\\epsilon\\\\sim\\\\mathcal{N}(0,1)} [f(Lx' + L\\\\epsilon) = B ] $$ implies \\n\\n$$P_{\\\\epsilon\\\\sim\\\\mathcal{N}(0,1)} [f(Lx' + L\\\\delta'+ L\\\\epsilon) = A ] > \\\\max_{B \\\\neq A}P_{\\\\epsilon\\\\sim\\\\mathcal{N}(0,1)} [f(Lx' + L\\\\delta'+L\\\\epsilon) = B ]$$\\n\\nWhich can be written equivalently as \\n\\nfor all $f : \\\\mathbb{R}^d \\\\rightarrow Y,\\\\\\\\, x' \\\\in \\\\mathbb{R}^d,$ and $\\\\delta'$ such that $|\\\\delta'|_2 < 1/2 \\\\cdot (\\\\Phi^{-1}(p_A) - \\\\Phi^{-1}(p_B))$: \\n\\n$$P_{\\\\epsilon\\\\sim\\\\mathcal{N}(0,\\\\Sigma)} [f(Lx' + \\\\epsilon) = A ] \\\\geq p_A \\\\geq p_B \\\\geq \\\\max_{B\\\\neq A} P_{\\\\epsilon\\\\sim\\\\mathcal{N}(0,\\\\Sigma)} [f(Lx' + \\\\epsilon) = B ] $$ implies \\n\\n$$P_{\\\\epsilon\\\\sim\\\\mathcal{N}(0,\\\\Sigma)} [f(Lx' + L\\\\delta'+ \\\\epsilon) = A ] > \\\\max_{B \\\\neq 
A}P_{\\\\epsilon\\\\sim\\\\mathcal{N}(0,\\\\Sigma)} [f(Lx' + L\\\\delta'+\\\\epsilon) = B ]$$\\n\\nThen, for any x, we can take $x' := L^T \\\\Sigma^{-1} x$ , so that $Lx' = x$, and we have:\\n\\nFor all $x \\\\in \\\\mathbb{R}^d,$ and $\\\\delta'$ such that $|\\\\delta'|_2 < 1/2 \\\\cdot (\\\\Phi^{-1}(p_A) - \\\\Phi^{-1}(p_B))$: \\n\\n$$P_{\\\\epsilon\\\\sim\\\\mathcal{N}(0,\\\\Sigma)} [f(x + \\\\epsilon) = A ] \\\\geq p_A \\\\geq p_B \\\\geq \\\\max_{B\\\\neq A} P_{\\\\epsilon\\\\sim\\\\mathcal{N}(0,\\\\Sigma)} [f(x + \\\\epsilon) = B ] $$ implies \\n\\n$$P_{\\\\epsilon\\\\sim\\\\mathcal{N}(0,\\\\Sigma)} [f(x + L\\\\delta'+ \\\\epsilon) = A ] > \\\\max_{B \\\\neq A}P_{\\\\epsilon\\\\sim\\\\mathcal{N}(0,\\\\Sigma)} [f(x + L\\\\delta'+\\\\epsilon) = B ]$$\\n\\n\\nFinally, for any $\\\\delta$ such that $\\\\text{Mahalanobis}_\\\\Sigma (\\\\delta) \\\\leq 1/2 \\\\cdot (\\\\Phi^{-1}(p_A) - \\\\Phi^{-1}(p_B))$, we can take $\\\\delta' := L^T \\\\Sigma^{-1} \\\\delta$ , so that $L \\\\delta' = \\\\delta$; this yields that $|\\\\delta'|_2 < 1/2 \\\\cdot (\\\\Phi^{-1}(p_A) - \\\\Phi^{-1}(p_B))$. Then, we have:\\n\\n\\nFor all $x \\\\in \\\\mathbb{R}^d,$ and $\\\\delta$ such that $\\\\text{Mahalanobis}_\\\\Sigma (\\\\delta) \\\\leq 1/2 \\\\cdot (\\\\Phi^{-1}(p_A) - \\\\Phi^{-1}(p_B))$: \\n\\n$$P_{\\\\epsilon\\\\sim\\\\mathcal{N}(0,\\\\Sigma)} [f(x + \\\\epsilon) = A ] \\\\geq p_A \\\\geq p_B \\\\geq \\\\max_{B\\\\neq A} P_{\\\\epsilon\\\\sim\\\\mathcal{N}(0,\\\\Sigma)} [f(x + \\\\epsilon) = B ] $$ implies \\n\\n$$P_{\\\\epsilon\\\\sim\\\\mathcal{N}(0,\\\\Sigma)} [f(x + \\\\delta+ \\\\epsilon) = A ] > \\\\max_{B \\\\neq A}P_{\\\\epsilon\\\\sim\\\\mathcal{N}(0,\\\\Sigma)} [f(x + \\\\delta+\\\\epsilon) = B ]$$\\nScaling $\\\\Sigma$ by scalar constants yields the lemma in the paper. 
It is important to notice that the linearity of the transformation was used here: this is why we can prove Lemma 4.5, even though the more general Theorem 4.3 is false.\\n\\n(Closely related -- and even identical -- results appear in prior works which were not cited; see below). \\n\\n# Missing Related Work and Novelty\\n\\nThere are several works that have similar contributions to this work, but were not cited. The existence of these prior works reduces the novelty of the work, and relevant citations should be added.\\n\\nAs discussed above, Theorem 4.3 of this work is incorrect, so the first main correct contribution of this work is the smoothing certificate under anisotropic Gaussian noise (Lemma 4.5). A second contribution is the proposed way of determining the covariance matrix Sigma to use for this certificate.\\n\\nConsidering first only the certificate itself (Lemma 4.5), the following similar works exist:\\n\\n- Tecot 2021: Derives the same certificate in the coordinate-axis-aligned (diagonal covariance matrix) case. (However, this is a master's thesis, not a peer-reviewed paper, so it might not count as necessary prior work by ICLR standards)\\n\\n- Pfrommer et al 2023: Derives a related certificate. Specifically, the proposed method projects all samples into a low-dimensional hyperplane before applying smoothing. This is equivalent to using a covariance matrix with eigenvalue 1 in some subset of directions, and eigenvalue \\\"infinity\\\" in all others.\\n\\n- Eiras et al 2022: Corollary 1 of Eiras et al. is logically identical to Lemma 4.5 of the submission. However, in practice, Eiras et al only applies this result to the diagonal covariance matrix case. (Also, Eiras et al. proposes to use a covariance matrix that is _sample-dependent_, using a scheme introduced by Alfarra et al, 2022 for isotropic smoothing with sample-dependent scalar variance. 
S\\u00faken\\u00edk et al, 2022 [Appendix C] argues convincingly that the sample-dependent smoothing scheme proposed by Alfarra et al, 2022 is fundamentally unsound, and this flaw is inherited by Eiras et al 2022, so that their overall results are likely unsound. However, for a given matrix $\\\\Sigma$, the fact remains that Eiras et al. gives an equivalent lemma to this work).\\n\\n- Rumezhak et al 2023: follow-up work to Eiras et al 2022, which uses the result for general covariance matrices (See Equations 2 and 3 of Rumezhak et al 2023). However, this uses the same likely-flawed sample-dependence scheme as Eiras et al 2022.\\n\\nIn terms of methods to compute the covariance matrix, none of the above works propose exactly the method used in this paper (computing the covariance of adversarial perturbations), so it may be novel. However, it should be compared empirically to the methods used in the above:\\n\\n- Pfrommer et al 2023 uses the covariance of _clean_ samples in the dataset to determine the subspace to project into.\\n\\n- Tecot 2021 uses an optimization approach to determine a $\\\\Sigma$ that maximizes the average certificate volume.\\n\\n# Experiments/Results Section\\nThe reported results show _certified_ accuracies of a classifier smoothed with DSmooth on test samples from CIFAR-10 and ImageNet. This is compared to the certified accuracies of classifiers smoothed with prior techniques: Cohen et al. 2019's Gaussian randomized smoothing technique, which is certifiably robust under L2 perturbation, and Teng et al. 2020's Laplace Smoothing technique, which is certifiably robust under L1 perturbation.\\n\\nHowever, I don't understand the experiment that is presented. In Figures 1 and 2, the x axis is just labeled as \\\"certified radius\\\". Is this the L1, L2, or Mahalanobis Distance radius? 
If it is any one of these (let's say, Mahalanobis Distance), how is the Mahalanobis certificate computed for, say, Cohen's method, which only provides L2 certificates? If the \\\"certified radius\\\" is in different metrics for each technique, then this isn't a fair apples-to-apples comparison. A method for comparing certificates under different metrics that has been used in prior work (ex. Pfrommer et al 2023; Tecot 2021) is to compare the _volumes_ of the certified regions: this would give a more fair comparison. Also, the paper mentions adversarially training the classifier. However, it's not clear if the baseline certificates were also for an adversarially trained model; if not, then the comparison is not fair. (See Salman et al 2019 for an adversarial training technique for isotropic randomized smoothing.)\\n\\nAdditionally, for the adversarial training / computation of $\\\\Sigma$, it is stated on lines 363-364: \\\"In this experiment we use Square Attack with maximum perturbation 0.5 and 5000 queries.\\\" This is unclear. If the Square Attack is, as stated, an L_infinity-bounded attack, then \\\"maximum perturbation 0.5\\\" might mean one of two things: either (a) the distortion in each pixel is bounded by 0.5, where each pixel is normalized to [0,1]; or (b) the distortion in each pixel is bounded by 0.5, where each pixel is not normalized, and so lies in the discrete space {0,1,2,...,255}. If (a) is true, then this is a gigantic perturbation, and not an imperceptible adversarial attack.\\n\\nAlso, empirical accuracies under adversarial attack are not reported. In general, I do not think it is necessarily a requirement for certified robustness papers to report empirical accuracies of the classifiers under attack; however, it is important to make the claims made in the paper match the evidence. For instance, I think that the claim on lines 477-478: \\\"this demonstrates that DSMOOTH is effective against complex adversarial attacks as defined in Def. 
3.1, while RANDSMOOTH and LSMOOTH are inadequate for this purpose.\\\" is too strong, based on certification results alone. It is not tested how well any of these models _actually_ perform under attack.\\n\\nAdditionally, the comparison to Laplace noise (Teng et al. 2020) for the L1 metric is outdated: Yang et al 2020 has shown that uniform noise yields better L1 certificates, and Levine & Feizi 2021 propose an alternate \\\"splitting noise\\\" technique that also yields better certificates. Voracek and Hein (2023) make further improvements.\\n\\n# Minor Issues\\n- ICLR call for papers allows unlimited appendices after references in the main PDF submission: I would encourage authors to move appendices here, instead of having them in a separate file.\\n- Line 17: 'then' -> 'than'\\n- Cohen et al (2019) reference is duplicated: Cohen et al (2019a) and Cohen et al (2019b) are the same\\n- Line 52: \\\"defending against complex, high-dimensional adversarial attacks\\\": Aren't almost all (norm-bounded) adversarial attacks high-dimensional? (In the sense that they have the same dimensionality as the sample)\\n- It is generally considered standard to capitalize Equation in, for example, 'equation 1'\\n- The paragraph in lines 112-117 is not quite correct as written. The threat model as written in Equation 1 seems to only refer to _additive_ attacks: specifically, the perturbed sample is x + \\u03b4, where \\u03b4 is constrained to be in the set C(\\u03b4). Note that as written, C(\\u03b4) does not seem to depend on x, so some of the _non-additive_ threat models mentioned in the paragraph (for example, recoloring attacks and physical adversarial attacks) are not encompassed by this. 
This could be fixed by making C(\\u03b4) depend on x, or (preferably) by replacing x + \\u03b4 with some more general 'perturbation function' q(x,\\u03b4).\\n- Line 125: \\\"highly-dimensional\\\" -> \\\"high-dimensional\\\"\\n- Line 136: I wouldn't use $\\\\mathbb{P}$ to both mean Probability of an event, and to refer to a specific distribution.\\n- I'm confused about how the smoothing covariance \\u03a3 is constructed: it depends on the distribution of (optimal) attacks as given in Equation 1, which itself depends on the classifier being attacked. But \\u03a3 is part of the definition of the final smoothed classifier g, so this seems like it could be circular. Unless \\u03a3 is defined in terms of the attacks on the base classifier f? If so, this should be stated explicitly.\\n- Lines 191-197: If $\\\\Sigma$ is not full-rank, won't $\\\\Sigma$-inverse in definition of the Gaussian distribution (and Mahalanobis distance) not be defined? Doesn't this break the assumption in footnote 1?\\n - Lines 391-392: \\\" These are two NVIDIA GPUs with a 64-bit width and clock speed of 33 MHz, and four NVIDIA GPUs of the GV102 model with a 64-bit width, operating at a clock speed of 33 MHz.\\\" -- It seems that all 6 have a clock speed of 33 MHz and 64-bit width. Why is the model number only mentioned for the first two? Also, I would double-check this for accuracy: 33 MHz is an extremely slow clock speed for a GPU, (from a brief search) all GV102 cards have base clock speeds far higher than this.\\n\\n# References\\n\\nTecot, L. M. (2021). Robustness verification with non-uniform randomized smoothing. University of California, Los Angeles.\\n\\nPfrommer, S., Anderson, B., & Sojoudi, S. (2023). Projected Randomized Smoothing for Certified Adversarial Robustness. Transactions on Machine Learning Research.\\n\\nRumezhak, T., Eiras, F. G., Torr, P. H., & Bibi, A. (2023). RANCER: Non-axis aligned anisotropic certification with randomized smoothing. 
IEEE/CVF Winter Conference on Applications of Computer Vision.\\n\\nEiras, F., Alfarra, M., Torr, P., Kumar, M. P., Dokania, P. K., Ghanem, B., & Bibi, A. (2022). ANCER: Anisotropic Certification via Sample-wise Volume Maximization. Transactions on Machine Learning Research.\\n\\nAlfarra, M., Bibi, A., Torr, P. H., & Ghanem, B. (2022). Data dependent randomized smoothing. Uncertainty in Artificial Intelligence.\\n\\nS\\u00faken\\u0131\\u0301k, P., Kuvshinov, A., & G\\u00fcnnemann, S. (2022, June). Intriguing Properties of Input-Dependent Randomized Smoothing. International Conference on Machine Learning.\\n\\nSalman, H., Li, J., Razenshteyn, I., Zhang, P., Zhang, H., Bubeck, S., & Yang, G. (2019). Provably robust deep learning via adversarially trained smoothed classifiers. Advances in neural information processing systems\\n\\nYang, G., Duan, T., Hu, J. E., Salman, H., Razenshteyn, I., & Li, J. (2020). Randomized smoothing of all shapes and sizes. In International Conference on Machine Learning.\\n\\nLevine, A. J., & Feizi, S. (2021). Improved, deterministic smoothing for l_1 certified robustness. International Conference on Machine Learning\\n\\nVor\\u00e1cek, V., & Hein, M. (2023). Improving l1-certified robustness via randomized smoothing by leveraging box constraints. International Conference on Machine Learning.\", \"questions\": [\"See \\\" Experiments/Results Section\\\" above under Weaknesses. There are several aspects of the experiments that could be clarified:\", \"It is currently unclear what quantity is being reported as \\\"Certified Radius\\\" for each defense method ( L1, L2, or Mahalanobis Distance radius). 
(It would be better to report \\\"Certified Volumes\\\" as in (Pfrommer et al 2023; Tecot 2021))\", \"What does \\\"maximum perturbation 0.5\\\" mean for the Square Attack?\", \"Are the attack perturbations $\\\\delta$ used to compute $\\\\Sigma$ based on attacks on the base classifier or the smoothed classifier?\", \"What models were used for the baseline smoothing techniques in the experimental section? Were these models also adversarially trained?\", \"How is $\\\\Sigma^{-1}$ computed with a low-rank approximation of $\\\\Sigma$?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Q: Add reference to the work [1]\", \"a\": \"Thank you for pointing this out. We are happy to improve our submission by adding a clear discussion on this point. Our rationale for using the Mahalanobis distance was aligned with related work, e.g., [1]. Standard p-norm certificates represent a worst-case scenario, since they constrain the certificate to the p-closest adversary. However, the decision boundaries of general classifiers may be complex and nonlinear, and standard $l_p$ norms may be uninformative in terms of the shape of decision boundaries. On the other hand, the Mahalanobis distance is defined relative to the distribution of the adversarial perturbations, using the covariance matrix to adjust for the spread and correlations of the perturbations. With this submission, our goal was to show that the proposed distance allows us to derive certificates that are more informative w.r.t. the decision boundary. We believe that practical scenarios where this framework is applicable are camera-based smart vision systems, where physical adversarial attacks can successfully mislead perception models. We understand, however, that our work at this stage is insufficient to support these claims.\\n\\n[1] ANCER: Anisotropic Certification via Sample-wise Volume Maximization. 
PMLR 2022\", \"q\": \"Discuss the rationale for using Mahalanobis Distance and identify practical scenarios where its robustness is applicable, assessing DSMOOTH's performance in those contexts.\"}", "{\"comment\": \"Q: Could the authors provide the evaluation results of their method on the entire CIFAR-10 test set? Using the full test set is standard in the field and would provide stronger evidence for the method's generalizability.\", \"a\": \"Hong and Hong (2022): The certification guarantees in this work (Hong and Hong (2022), Thm. 4.1) provide guarantees for the case where the smoothing distribution is of the form N(0, Sigma), where Sigma is a diagonal matrix. These guarantees are different from our work, which derives probabilistic guarantees for any positive-definite matrix Sigma. Furthermore, Hong and Hong (2022) differs from our work in the way that the matrix Sigma is chosen. Eiras et al. (2021) differs from our work in the way that the matrix Sigma is obtained. In Eiras et al. (2021), the matrix Sigma is computed by solving an optimization problem as in Eq. (1).\\n\\n[1] Hanbin Hong and Yuan Hong. Certified adversarial robustness via anisotropic randomized smoothing. arXiv preprint arXiv:2207.05327, 2022. \\n[2] Francisco Eiras, Motasem Alfarra, M Pawan Kumar, Philip HS Torr, Puneet K Dokania, Bernard Ghanem, and Adel Bibi. Ancer: Anisotropic certification via sample-wise volume maximization. arXiv preprint arXiv:2107.04570, 2021.\\n[3] Greg Yang, et al. Randomized Smoothing of All Shapes and Sizes, ICML 2020\", \"q\": \"Other works have discussed anisotropic approaches in the context of randomized smoothing, such as Hong and Hong (2022) and Eiras et al. (2021). Could the authors elaborate on the similarities or differences between their approach and these methods, particularly in the related works section?\"}
84pDoCD4lH
Do Vision-Language Models Represent Space and How? Evaluating Spatial Frame of Reference under Ambiguities
[ "Zheyuan Zhang", "Fengyuan Hu", "Jayjun Lee", "Freda Shi", "Parisa Kordjamshidi", "Joyce Chai", "Ziqiao Ma" ]
Spatial expressions in situated communication can be ambiguous, as their meanings vary depending on the frames of reference (FoR) adopted by speakers and listeners. While spatial language understanding and reasoning by vision-language models (VLMs) have gained increasing attention, potential ambiguities in these models are still under-explored. To address this issue, we present the COnsistent Multilingual Frame Of Reference Test (COMFORT), an evaluation protocol to systematically assess the spatial reasoning capabilities of VLMs. We evaluate nine state-of-the-art VLMs using COMFORT. Despite showing some alignment with English conventions in resolving ambiguities, our experiments reveal significant shortcomings of VLMs: notably, the models (1) exhibit poor robustness and consistency, (2) lack the flexibility to accommodate multiple FoRs, and (3) fail to adhere to language-specific or culture-specific conventions in cross-lingual tests, as English tends to dominate other languages. With a growing effort to align vision-language models with human cognitive intuitions, we call for more attention to the ambiguous nature and cross-cultural diversity of spatial reasoning.
[ "vision-language models", "spatial reasoning", "multimodal reasoning" ]
Accept (Oral)
https://openreview.net/pdf?id=84pDoCD4lH
https://openreview.net/forum?id=84pDoCD4lH
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yj7UvSYTfG", "k7zmlkdC33", "jsrhxSdDch", "bRQqBb0rsh", "TCpnAWUyqs", "OrKzwD9pim", "LVCT78NLtI", "L2jWm1JbhU", "I2wR8LXwYL", "GXkU1ISNFr", "DGMoMHpQel", "CRle9WsVtS", "A0matR2IxW", "9KvrGtlYZw", "7gBimuieyN", "4ZO3bPfSC3", "2gPJvm5F5R" ], "note_type": [ "official_comment", "official_comment", "official_comment", "decision", "official_review", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "meta_review" ], "note_created": [ 1732510083243, 1733240029010, 1732511552022, 1737523384064, 1730684646564, 1732509990056, 1730200842808, 1729367340852, 1732509868558, 1732510164599, 1733212256849, 1732701268922, 1732512010952, 1732778675988, 1731054675677, 1729823364750, 1734851455071 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission194/Authors" ], [ "ICLR.cc/2025/Conference/Submission194/Authors" ], [ "ICLR.cc/2025/Conference/Submission194/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission194/Reviewer_4ug1" ], [ "ICLR.cc/2025/Conference/Submission194/Authors" ], [ "ICLR.cc/2025/Conference/Submission194/Reviewer_GnST" ], [ "ICLR.cc/2025/Conference/Submission194/Reviewer_EAzE" ], [ "ICLR.cc/2025/Conference/Submission194/Authors" ], [ "ICLR.cc/2025/Conference/Submission194/Authors" ], [ "ICLR.cc/2025/Conference/Submission194/Reviewer_QwFz" ], [ "ICLR.cc/2025/Conference/Submission194/Reviewer_Jhao" ], [ "ICLR.cc/2025/Conference/Submission194/Authors" ], [ "ICLR.cc/2025/Conference/Submission194/Reviewer_4ug1" ], [ "ICLR.cc/2025/Conference/Submission194/Reviewer_QwFz" ], [ "ICLR.cc/2025/Conference/Submission194/Reviewer_Jhao" ], [ "ICLR.cc/2025/Conference/Submission194/Area_Chair_NfCH" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer GnST\", \"comment\": \"We appreciate your thoughtful feedback and valuable insights, and for 
recognizing our novel approach, thorough and methodical studies, and well-organized and clear writing. Below, we respond to weaknesses, questions, and suggestions.\\n\\n### Response to Weakness #1 (Limited scope of spatial relations)\\nOur paper focuses on the ambiguous nature of spatial language and builds upon linguistic and cognitive science literature on spatial frames of reference. We chose lateral and sagittal directions as they are the most fundamental spatial indicators.\\n> \\u201cThere is a tight connection between the Relative FoR and the Intrinsic FoR: it seems that you cannot have a Relative FoR without an Intrinsic FoR. Like the Intrinsic FoR, the Relative FoR requires \\u2018parsing\\u2019 of objects \\u2013 most importantly, a parsing of the self into front, back, left and right.\\u201d (Majid et al., 2004)\\n\\nAs we mentioned in the Limitations section, spatial relationships like near-far and above-below are left to future work as we need novel metrics to evaluate these terms. \\n\\n### Response to Weakness #2 (Synthetic 3D images may not fully capture real-world complexities)\\nOur synthetically generated images have occlusions and varying camera angles. We refer to Section 3.2 and Figure 3 for examples of the rendered images. Our reasons for using only synthetic images in our dataset are: (1) it is hard to rotate the target object around the reference object with uniform steps in the real world when collecting natural images; and (2) we also want to carefully control other variables (e.g., camera pose) in the experiments for systematic analysis, which are hard to control when capturing natural images. \\n\\nHowever, we added a case study to see if results in synthetic images generalize to real photos. We fixed the camera pose and manually rotated the target object around the reference object using (1) two equally sized red and blue balls; and (2) a laptop and the red ball. We kept everything else identical to the original COMFORT setup. 
We found that our findings still hold for alternative backgrounds. Please refer to Appendix C for more details.\\n\\n### Response to Weakness #3 (A more diverse linguistic and cultural context or human annotations)\\nWhile we have started this study and formulated the problem, more extensive investigation into more languages and cultures will be an interesting direction for future work. We agree that human annotations for the multilingual dataset will improve the quality, but due to the large number of images and languages we evaluated, it requires a collective effort from the community to make this happen: We need native speakers of different languages from different cultural backgrounds to label in total (720 + 57600) * 109 = 6356880 instances. \\n\\nIn fact, our ongoing work aims to address this gap through systematic human studies. Unfortunately, this research falls beyond the scope of an AI conference paper, so we leave it for future exploration. In this work, we hope to provide VLMs\\u2019 preferences and raise concerns that English may dominate the FoR preference conventions of other languages in multilingual VLMs with this example.\\n\\n### Response to Question #1:\\nOur findings suggest that current VLMs are not robust and consistent enough, so we should be cautious when we apply VLMs to autonomous driving. Additionally, spatial reasoning from another person\\u2019s perspective is important in embodied communications, but our experiments show that VLMs cannot flexibly accommodate multiple FoRs. Moreover, the failure to adhere to language-specific or culture-specific conventions in cross-lingual tests makes them unusable for people in a non-English speaking country.\\n\\n### Response to Question #2:\\nAs discussed in Section 5, \\u201ccurrent training recipes for multilingual multimodal language models heavily rely on machine-translated captions,\\u201d which can introduce significant challenges. 
Enhancing spatial reasoning is a critical step toward the broader goal of achieving better multilingual and multicultural alignment in vision-language reasoning, ultimately contributing to the development of fair AI.\\n\\n### Response to Question #3:\\nIn Section 5 Discussions, we mainly discussed that (1) future work is necessary to improve the consistency and robustness of spatial representations in these models; (2) future work should extend current 2D VLMs to the 3D domain by considering camera poses and multiview data (Yang et al., 2024) for training; and (3) to enable similar linguistic transmission in AI models, exposure to naturally generated multilingual image-text data is crucial.\"}", "{\"comment\": \"Thank you for your careful review and thoughtful feedback throughout the process! We\\u2019re glad that the addition of the real image case study in Appendix C resonated with you and helped demonstrate the generalizability of our findings. Your active engagement and constructive discussions have been instrumental in improving this work!\"}", "{\"comment\": \"We sincerely thank reviewer EAzE for their insightful feedback and constructive comments, as well as for recognizing the novelty of our benchmark, the clarity of our writing, the significance of our work in vision-language research and AI alignment, and its implications for developing more globally applicable models. 
**Given the overall positive sentiment of the review, we kindly remind the reviewer that a score of 5 is marginally below the acceptance threshold.** In case there are any misunderstandings or mistakes, we would appreciate it if the reviewer could reconsider the rating.\\n\\nBelow, we address the identified weaknesses, questions, and suggestions.\\n\\n### Response to Weakness #1 (novelty):\\n\\n**Thank you for acknowledging that \\u201cframe of reference is a rather novel take\\u201d!**\\n\\nAs highlighted in the Introduction, frames of reference are crucial for studying spatial cognition across modalities, providing a foundational framework for understanding how spatial relationships are perceived, interpreted, and communicated. Selecting the appropriate FoR is key to resolving ambiguities in spatial language understanding, whether by following explicit instructions through perspective prompts or adhering to language conventions. To the best of our knowledge, existing benchmarks do not account for this aspect, as most assume an egocentric relative FoR with reflected coordinate transformation, aligning with English preferences. However, English speakers can adapt to alternative FoRs when instructed, and speakers of other languages may exhibit vastly different preferences.\\n\\n\\n### Response to Weakness #2 (Multilingual study remains high level)\\nIn this experiment, **we hope to report VLMs\\u2019 preferences and raise the concern that English may dominate the FoR preference conventions of other languages in multilingual VLMs**.\\n\\nThe ground truth human preferences are based on established findings in the linguistics literature, mostly from Levinson, 2003 but also in Majid et al., 2004; O\\u2019Meara & B\\u00e1ez, 2011; and Bender et al., 2020. For example, Dutch, English, Japanese, Tamil, Hausa, Spanish, Norwegian, Chinese, Tzeltal, Tongan, and Farsi prefer Relative FoR; and Jaminjung, Mopan, Totonac prefer Intrinsic FoR. 
In the revision, we explicitly stated this in Section 4.5 and cited the corresponding studies.\\n\\nWhile a good number of languages have well-documented frame-of-reference preferences and an established consensus in cognitive psychology, not all languages have been extensively studied or reached such a consensus. Our ongoing work aims to address this gap through systematic human studies. Unfortunately, this research falls beyond the scope of an AI conference paper, so we leave it for future exploration. \\n\\n### Response to Weakness #3 (When is egocentrism problematic)\\n> \\u201cHowever, it does not deeply investigate whether this bias toward egocentrism is inherently problematic across all contexts or if there are scenarios where this behavior is acceptable or even preferable.\\u201d\\n\\nWhile there is no global ground truth, the language we studied in the main paper demonstrates a known preference, e.g., as the reviewer noted, English prefers an egocentric relative FoR. To this end, better alignment with the language convention and intuition is preferred in communication.\\n\\nBesides, the perspective-taking capability of VLMs (Section 4.3) can function as a benchmark, as the input and ground truth are unambiguous. We encourage future work in VLM training to address this specific challenge.\\n\\nTherefore, a \\\"bias toward egocentrism\\\" is acceptable when communicating with English speakers by default or when explicitly instructed to do so. However, it is not acceptable when communicating with speakers of languages that prefer other conventions or when models are instructed to adopt an alternative perspective, such as the addressee's viewpoint. **Alignment with human cognition and cultural conventions is preferred.**\\n\\n### Response to Weakness #4 (Practical relevance)\\n\\nOur findings provide valuable insights for VLM developers. 
As we mentioned in Section 5, \\u201ccurrent training recipes for multilingual multimodal language models heavily rely on machine-translated captions (Chen et al., 2023c; Geigle et al., 2024); however, this practice can be problematic\\u2026To enable similar linguistic transmission in AI models, exposure to naturally generated multilingual image-text data is crucial (Romero et al., 2024).\\u201d \\n\\nAlso, VLMs have many applications in embodied AI and robotics, and spatial reasoning is critical for making decisions in the 3D world. Moreover, embodied communication requires flexibility in accommodating different FoRs. For example, if you want a robot to pick up the cup on your left using VLMs, the robot has to reason from your perspective to select the cup for manipulation.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Oral)\"}", "{\"summary\": \"**Main contributions**:\", \"the_paper_has_two_main_contributions\": \"1. It presents COMFORT, a framework for evaluating VLMs' understanding of spatial frame of reference expressions, which relate two objects to one another (e.g. \\\"Is the basketball *to the right of* the car?\\\"). Expressions are evaluated using a rendered image containing the two objects, which is used to query the VLM (as a binary Yes/No question) for whether or not the relation holds for the image. \\n\\n2. It uses COMFORT to show that a set of contemporary VLMs have distinct preferences for frame of reference parameters. Specifically, the paper presents evidence for the preference of English language frame of reference parameters in VLMs. \\n\\n**Framework Structure**:\", \"expressions_in_comfort_vary_across_two_main_conditions\": \"1. Expressions which include two objects with a semantic \\\"front\\\" (e.g. such as the \\\"front\\\" of a person being the side with the person's face). In this scenario the spatial expression must be reasoned about in terms of an anchor frame of reference (i.e. 
either object A's, object B's, or the camera's frame). The selection of the frame of reference will determine which direction in the image corresponds to \\\"left of\\\", \\\"back of\\\" etc. \\n\\n2. Expressions which include objects without a semantic \\\"front\\\". In COMFORT, this is instantiated by scenarios with two balls. In such scenarios, a coordinate transformation convention must be assumed for determining what is \\\"right\\\", \\\"left\\\", \\\"back\\\", etc. Different languages have different standard conventions, and so this condition is set up to test this. \\n\\nPreferences are measured by evaluating the [Yes/No] probability for a given spatial expression in relation to a ground truth *region of acceptability*, a continuous region of rotation of the second object around the center object (see Figure 1c for reference) in which a spatial expression may hold true. This region of acceptability allows not only for accuracy to be measured but also for the dynamics of the probability to be measured as functions of distance from the center of the region of acceptability -- intuitively, the score of \\\"right of\\\" should decrease smoothly as the object is rotated from 0 to 90 degrees from the center of the region. Specifically, the paper presents measures that probe the robustness of model scores (e.g. evaluating std across object variations unrelated to space like color or shape) as well as the consistency (are scores symmetric at both equivalent angles from the acceptability region's center? Do they change smoothly as angle is varied?). \\n\\n**Results**\\nThe paper uses the region of acceptability based error measures to show that VLMs generally skew towards preferring egocentric frames of reference (condition 1) and the reflected coordinate transformation convention. 
Additionally, the paper also shows that these preferences cannot be reliably changed when the prompt explicitly defines the frame of reference to be used, or when a language (other than English) with different known preferences is used to prompt the model.\\n\\n**Recommendation** \\nThis is a solid paper which I believe presents a unique contribution that stands to better capacitate the community in vetting the spatial understanding of VLMs. I have noted a few suggestions below under Weaknesses to improve the presentation of the paper. With that said, I think it is already at a good quality and I recommend acceptance.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"**Strong Backing in Theory**: The paper's framing and setup in sections 1 through 3 (which motivates the problem statement and framework design choices) is a joy to read. Motivating statements and scoping within existing work is very clear and compelling.\\n\\n**Conclusions Well Supported**: The experimental setup is quite thorough and shows clear trends which the authors use to justify their claims. A majority of models are clearly shown to have FoR preferences based on the metrics *cos* and *hem* error functions defined, with the BLIP family of VLMs being a notable exception. I particularly appreciated that the paper additionally presents a compelling explanation for the BLIP discrepancy by showing that these same models score poorly on an evaluation of object hallucinations. \\n\\n**Novelty**: In my understanding I believe that the presented framework is novel based on its continuous definition based on regions of acceptability. This seems to me to be an important distinction namely because these spatial relationships are known to lie within continuums and are not discrete. 
\\n\\n**Clarity**: The paper's writing is generally clear, and I found the organizational structure helpful for understanding the concepts introduced.\", \"weaknesses\": \"**Cross-lingual Experiment Presentation**: I think some further refinement/elaboration of the cross lingual experiments (Section 4.5) could help strengthen the paper. Although the paper does state upfront that it spends most of the content focused around English experiments, the treatment of the section still felt a little too short for the emphasis that is placed in the introduction for this being a core piece of the framework (e.g. the M in COMFORT stands for Multilingual). Specifically I think a couple of details could be expanded upon.\\n\\nFirst, the results in Figure 8 are a little ambiguously presented. Is the world map mapping the preferences of GPT-4o, or is it mapping ground truth human preferences? I believe the former, but the wording in both the figure and the paper makes it sound like these are ground truth preferences. \\nSecond, for ground truth preferences, I'm wondering how these are obtained? -- I'm assuming this is pulled from the linguistics literature, but it is not explicitly stated and appears like a key detail to me. I would appreciate it if the authors could clarify both of these points. \\n\\nLastly, although the results presented clearly show a preference for the same reference frames as English, I think they could be strengthened by also showing a notion of what the expected preferences would be if the VLM were to be adhering to the conventions of the language being used. 
For example, I think this could be done by (a) adding a second map showing a visualization of ground truth human preferences, or (b) adding the bold/underline convention from the table in Figure 8 to Table 10.\", \"questions\": [\"**Suggestions**:\", \"L212 \\\"the query is appended after four different perspective prompts...\\\": I'm assuming this meant to say \\\"after *one of* four different...\\\"?\", \"I understand the authors may have been space limited (no pun intended), but I think adding a Conclusion section to reiterate contributions and takeaways (further than the Discussion section already does) could make the paper stronger. There's a large amount of content, so spoon-feeding the final takeaways to readers could aid clarity.\", \"This is minor, but relatedly I think the paper could be a little clearer about what the intended usage of the framework is for readers. Are readers supposed to come away with an understanding that this will be a benchmark they can evaluate their own models on? I believe yes, but the wording of the paper at the beginning and end doesn't make this explicit. In the introduction, COMFORT is introduced as a \\\"framework\\\" and not a \\\"benchmark\\\", was this intentional? Secondly, the paper closes with only a discussion over the empirical findings in Section 5, without any explicit reiteration that it has introduced COMFORT. This omission at the end made it feel like the only point of the paper was to present a study of VLM spatial understanding -- this would be fine of course, but if I'm understanding correctly I think the paper would additionally want to explicitly market itself as offering COMFORT as a testbed for VLMs. 
It's subtle, but I think being explicit about this, especially at the end of the paper, could really help drive home the point the authors want to communicate.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer 4ug1\", \"comment\": \"We are grateful for your thorough evaluation and positive feedback on our motivations from linguistics and cognitive science, well-supported conclusions and novel continuous definitions based on regions of acceptability, and clear writing structure. Below, we respond to weaknesses, questions, and suggestions. The main weakness identified by the reviewer is the refinement/elaboration of the cross-lingual experiments, which we detail below:\\n\\n### Response to Weakness 1 (the results in Figure 8 are a little ambiguously presented):\\nWe confirm that Figure 8 reflects GPT-4o's preferences based on the cosine parsing error, weighted by the speaking population of the top three languages in each region. To eliminate ambiguity, we revised the figure caption and the associated text in Section 4.5 to explicitly state this.\\n\\n\\n### Response to Weakness 2 (ground truth preferences and conventions):\\nGround truth human preferences are based on established findings in the linguistics literature, mostly from Levinson, 2003 but also in Majid et al., 2004; O\\u2019Meara & B\\u00e1ez, 2011; and Bender et al., 2020. For example, Dutch, English, Japanese, Tamil, Hausa, Spanish, Norwegian, Chinese, Tzeltal, Tongan, and Farsi prefer Relative FoR; and Jaminjung, Mopan, Totonac prefer Intrinsic FoR. In the revision, we explicitly stated this in Section 4.5 and cited the corresponding studies.\\n\\nWhile a good number of languages have well-documented frame-of-reference preferences and an established consensus in cognitive psychology, not all languages have been extensively studied or reached such a consensus. 
Our ongoing work aims to address this gap through systematic human studies. Unfortunately, this research falls beyond the scope of an AI conference paper, so we leave it for future exploration. In this work, we hope to provide VLM\\u2019s preference and raise concerns that English may dominate the FoR preference conventions of other languages in multilingual VLMs with this example.\\n\\n### Response to Suggestions (Intended usage of the framework)\\nWe intentionally use the term \\u201cframework\\u201d to emphasize that this dataset and its associated evaluation metrics are designed to assess cognitive similarity and alignment with human spatial cognition. While each studied language demonstrates a preference, we do not position this as a leaderboard-driven benchmark. However, the perspective-taking capability of VLMs (Section 4.3) can function as a benchmark, as the input and ground truth are unambiguous. We encourage future work in VLM training to address this specific challenge.\"}", "{\"summary\": \"This paper explores the spatial reasoning capabilities of vision-language models (VLMs) and their handling of spatial ambiguities. It introduces the COnsistent Multilingual Frame Of Reference Test (COMFORT) to systematically evaluate these models. The study reveals that while VLMs show some alignment with English conventions in resolving spatial ambiguities, they exhibit significant shortcomings: poor robustness and consistency, lack of flexibility to accommodate multiple frames of reference (FoRs), and failure to adhere to language-specific or culture-specific conventions in cross-lingual tests. 
The paper emphasizes the need for more attention to the ambiguous nature and cross-cultural diversity of spatial reasoning in VLMs.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This paper introduces the innovative COnsistent Multilingual Frame Of Reference Test (COMFORT), which systematically evaluates vision-language models (VLMs) across multiple languages and cultural contexts. This novel approach highlights the importance of considering linguistic and cultural variations in model performance. The research is thorough and methodical, assessing nine state-of-the-art VLMs with detailed analyses of their robustness, consistency, and flexibility. The paper is well-organized and clearly written, making complex concepts accessible. Its findings underscore the need for more robust and flexible models, providing a valuable framework for future research in vision-language modeling and spatial reasoning.\", \"weaknesses\": \"The scope of spatial relations is limited, focusing mainly on basic relations like front-back and left-right, while neglecting others such as near-far and above-below. The evaluation relies on synthetic 3D images, which may not fully capture real-world complexities, and lacks consideration for occlusion and varying camera angles. Additionally, the analysis could benefit from a more diverse linguistic and cultural context, as well as human annotations for the multilingual dataset. Addressing these weaknesses by expanding the scope of spatial relations, incorporating real-world scenarios, and improving the quality of the dataset with human annotations would provide a more comprehensive evaluation of VLMs' spatial reasoning capabilities.\", \"questions\": \"1. How do the findings impact the development of embodied AI systems and other applications like autonomous driving, robotics, or augmented reality?\\n\\n2. 
How could improved spatial reasoning in vision-language models (VLMs) enhance performance in multilingual and multicultural contexts?\\n\\n3. What are the key areas of future research to advance the spatial reasoning capabilities of VLMs, and what specific challenges or limitations need to be addressed?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": [\"Investigates how vision-language models (VLMs) represent and reason about spatial relations, particularly when spatial language is ambiguous due to differing frames of reference (FoRs). It focuses on evaluating the robustness, consistency, and flexibility of VLMs in handling such ambiguities.\", \"The authors introduce a new evaluation protocol called the COnsistent Multilingual Frame Of Reference Test (COMFORT), which systematically assesses VLMs' spatial reasoning capabilities. The evaluation includes tasks with synthetic 3D images and text descriptions of spatial relations, with tests conducted in 109 languages across 170 regions worldwide.\", \"The study reveals that while VLMs show some alignment with English conventions for resolving spatial ambiguities, they struggle with: (A) Consistency and robustness in spatial reasoning, (B) Flexibility in adopting multiple FoRs, (C) Cross-linguistic and cross-cultural understanding of spatial relations, with English conventions often dominating over those of other languages.\"], \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Unlike existing benchmarks, which typically focus on spatial language understanding within a single frame of reference or specific cultural context, this work broadens the scope by emphasizing ambiguity in spatial reasoning due to different Framework of References and cross-linguistic/cross-cultural diversity.\", \"The paper is well-structured, with clear explanations of key concepts such as frames of 
reference and spatial reasoning.\", \"More importantly,\", \"The paper\\u2019s contributions are highly significant in the context of both vision-language research and broader discussions on AI alignment with human cognition. By focusing on spatial reasoning, a core component of human cognition, this work addresses a critical gap in the evaluation of VLMs.\", \"The cross-lingual evaluation across diverse cultural backgrounds provides valuable insights into the limitations of current VLMs in handling non-English spatial conventions. This has implications for the development of more globally applicable models, as the dominance of English-centric FoRs could hinder the usability of VLMs in non-English contexts.\"], \"weaknesses\": [\"While the introduction of the COMFORT benchmark is a novel contribution, the idea of testing spatial reasoning in vision-language models is not entirely new. Several previous benchmarks, such as those mentioned in the paper, have evaluated spatial reasoning capabilities in VLMs. The novelty of this paper lies primarily in its focus on the frame of reference ambiguities, but the paper does not sufficiently clarify how this focus significantly advances beyond the existing benchmarks. But I agree that the frame of reference is a rather novel take on the same topic.\", \"Although the paper provides valuable insights into how English conventions dominate spatial reasoning in multilingual models, the discussion around cultural and linguistic biases remains relatively high-level.\", \"The paper identifies that most models tend to default to egocentric relative frames of reference, which aligns with English spatial conventions. 
However, it does not deeply investigate whether this bias toward egocentrism is inherently problematic across all contexts or if there are scenarios where this behavior is acceptable or even preferable.\", \"(not required) It would be beneficial to include a dedicated section discussing the practical relevance of the findings in LLM applications. Is there an area that the findings of this paper can seriously affect?\"], \"questions\": \"NA\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer QwFz\", \"comment\": \"Thank you for your thoughtful and constructive feedback and for recognizing our novel approach to probing spatial perception, demonstration of anglewise judgments, and extensive experiments to support the conclusions and discussions. We address your questions below and have updated our paper according to your suggestions.\\n\\n### Response to Weakness #1 (synthetic data)\\nThanks for your suggestion. We use synthetic data for the following reasons: \\n1. It\\u2019s hard to rotate the target object around the reference object in the real world with uniform steps for collecting natural images.\\n2. We also want to carefully control other variables (e.g., camera pose) in the experiments for systematic analysis, which are hard to maintain in capturing natural images. \\n\\nHowever, we added a case study to see if results on synthetic images generalize to real photos. We fixed the camera pose and manually rotated the target object around the reference object using (1) two equally sized red and blue balls; and (2) a laptop and the red ball. We kept everything else identical to the original COMFORT setup. We found that our findings still hold for real images. Please refer to Appendix C for more details.\\n\\n### Response to Weakness #2 (limited diversity of backgrounds)\\nThanks for your suggestion. 
We added a new dataset for a case study with a brown background, while keeping everything else identical to the original COMFORT-BALL setup.\\nWe found that our findings still hold for alternative backgrounds. Please refer to Appendix C for more details.\\n\\n### Response to Weakness #3 (Limited sections on multilingual experiments)\\nThanks for your suggestion. Our multilingual experiments focus on investigating the preferred FoR in different languages. Since we built on the experimental setup already laid out for the English language in prior sections, we focused only on the experiments for multiple languages, which may explain why this section appears more concise. Table 10 in the appendix mainly serves as a full record for readers who are interested in looking at specific languages, but we plotted the preferred FoR on the world map in the main paper.\"}", "{\"title\": \"Response to Reviewer Jhao\", \"comment\": \"We sincerely thank the reviewer for your thoughtful and constructive feedback, and for recognizing our paper as systematic and rigorous, with reliable testing methods and extensive multilingual analysis. Below, we respond to weaknesses.\\n\\n### Clarification\\n> \\u201cIn other words, the evaluation metrics proposed in the paper are not absolute criteria, like egocentric is not naturally better than addressee-centered, but rather serve as perspectives for observing how VLMs understand spatial information.\\u201d\\n\\nThis is true, as we emphasize that this dataset and its associated evaluation metrics are designed to assess cognitive similarity and alignment with human spatial cognition. \\n\\nHowever, we also emphasize that:\\n- **While there is no global ground truth, the language we studied in the main paper demonstrates a known preference**, e.g., English prefers an egocentric relative FoR. 
To this end, **better alignment with the language convention and intuition is preferred in communication**.\\n- The perspective-taking capability of VLMs (Section 4.3) can function as a benchmark, as the input and ground truth are unambiguous. We encourage future work in VLM training to address this specific challenge.\\n\\n### Response to Weakness (Insufficient psychological and cognitive science related work)\\nWe kindly refer to Section 2.1 for how humans resolve ambiguities based on linguistic and cultural backgrounds, and we use human preferences as the criteria for evaluation. For example, English prefers the egocentric relative FoR with reflected coordinate transformation, so when we are evaluating English, we use the reflected coordinate transformation as the ground truth.\\n\\nThese ground truth human preferences are based on established findings in the linguistics literature, mostly from Levinson, 2003 but also in Majid et al., 2004; O\\u2019Meara & B\\u00e1ez, 2011; and Bender et al., 2020. For example, Dutch, English, Japanese, Tamil, Hausa, Spanish, Norwegian, Chinese, Tzeltal, Tongan, and Farsi prefer Relative FoR; and Jaminjung, Mopan, Totonac prefer Intrinsic FoR. In the revision, we explicitly stated this in Section 4.5 and cited the corresponding studies.\\n\\nWhile a good number of languages have well-documented frame-of-reference preferences and an established consensus in cognitive psychology, not all languages have been extensively studied or reached such a consensus. Our ongoing work aims to address this gap through systematic human studies. Unfortunately, this research falls beyond the scope of an AI conference paper, so we leave it for future exploration. 
In this work, we hope to report VLMs\\u2019 preferences and, with this example, raise the concern that English may dominate the FoR preference conventions of other languages in multilingual VLMs.\"}", "{\"title\": \"Awesome job with the extra experiment!\", \"comment\": \"I really appreciate the addition of the real image case study (even though it's very small) in Appendix C. This really drives home that this finding generalizes, even though I agree that all your explanations for why synthetic data only is used are valid. Ditto for the limited diversity of backgrounds. I will raise my score as I have no other complaints.\"}", "{\"title\": \"Thanks for authors' rebuttal\", \"comment\": \"I read the authors' rebuttal as soon as it was posted. After thoroughly reviewing the rebuttal, considering the comments from other reviewers, and revisiting the paper, I have decided to maintain my original positive score.\\n\\nThank you!\"}", "{\"title\": \"General Response to Reviewers and ACs\", \"comment\": [\"We thank all the reviewers and ACs for their time and effort in reviewing the paper and providing valuable and constructive feedback. We appreciate that reviewers have recognized the following contributions of our paper:\", \"### Contributions\", \"**A novel, systematic, rigorous benchmark to probe and test spatial reasoning**\", \"Synthetic pipeline is deeply utilized and angle-wise evaluation is compelling and surprising. *(QwFz)*\", \"Continuous definition based on regions of acceptability is an important distinction. *(4ug1)*\", \"This novel approach highlights the importance of considering linguistic and cultural variations in model performance. *(GnST)*\", \"The authors create challenging but realistic queries using synthetic 3D images and corresponding textual descriptions and propose systematic metrics. *(Jhao)*\", \"**Experiments are comprehensive with detailed and extensive analysis to support conclusions**\", \"The extra effort of testing. 
Conclusions and discussion are thoughtful and well-supported. *(QwFz)*\", \"The experimental setup is quite thorough and shows clear trends, which the authors use to justify their claims. *(4ug1)*\", \"The research is thorough and methodical, assessing nine state-of-the-art VLMs with detailed analyses of their robustness, consistency, and flexibility. *(GnST)*\", \"The authors tested different VLMs and analyzed how they internally understand spatial representations. The testing methods are reliable. *(Jhao)*\", \"**Well-motivated work with valuable insights into AI alignment**\", \"Motivating statements and scoping within existing work are very clear and compelling. *(4ug1)*\", \"The authors extended the test cases to cover 109 languages across 170 regions, further testing how VLMs respond to ambiguities in spatial relationship descriptions under different languages. *(Jhao)*\", \"This work broadens the scope by emphasizing ambiguity in spatial reasoning due to different frameworks of reference and cross-linguistic/cross-cultural diversity. The paper\\u2019s contributions are highly significant in the context of both vision-language research and broader discussions on AI alignment with human cognition. The cross-lingual evaluation across diverse cultural backgrounds provides valuable insights into the limitations of current VLMs in handling non-English spatial conventions. *(EAzE)*\", \"**Clear and well-structured writing**\", \"The paper's writing is generally clear, and I found the organizational structure helpful for understanding the concepts introduced. *(4ug1)*\", \"The paper is well-organized and clearly written, making complex concepts accessible. *(GnST)*\", \"The paper is well-structured, with clear explanations of key concepts such as frames of reference and spatial reasoning. *(EAzE)*\", \"### Revisions Made\", \"To address reviewers\\u2019 suggestions, we made the following revisions (marked in blue in the PDF):\", \"1. 
**New experiments for case studies**\", \"**2 new case studies of real images** *(COMFORT-BALL, COMFORT-CAR)* for testing if the results generalize to real photos:\", \"We fixed the camera pose and manually rotated the target object around the reference object using:\", \"1. Two equally sized red and blue balls.\", \"2. A laptop and the red ball.\", \"We found that our findings still hold for real images.\", \"**1 new case study (COMFORT-BALL)** for testing if the results generalize to synthetic images with a brown background:\", \"We added a new dataset for a case study with a brown background while keeping everything else identical to the original COMFORT-BALL setup.\", \"We found that our findings still hold for alternative backgrounds.\"]}", "{\"comment\": \"Thanks to the authors for their thorough response to my questions and concerns. I maintain my original assessment and score. I believe that this is a solid paper and will be of interest to the community -- I recommend acceptance to the conference.\"}", "{\"summary\": \"The authors assess how VLMs represent space through the lens of \\u201csituated communication,\\u201d framing the meaning of spatial relations like \\u201cto the right of\\u201d under transformations, in different frames of reference, and others (situated communication).\\n\\nThey introduce the COMFORT dataset, tasks, and metrics to analyze spatial reasoning under these classes of transformations, and see which frame-of-reference preferences LMs have, and test whether these preferences are language-agnostic by extending their evaluation to multilingual settings. \\n\\nThe images are rendered to enable simple dynamic generation of test samples. Their language queries that test the relations likewise have a simple structure and are programmatically produced, enabling translation into other languages. 
Testing the kinds of relations is simple using this methodology and allows them to test for continua such as how a model rates the \\u201cin front of\\u201d relation by angle between the two in the plane relative to the camera.\\n\\n**Edit**: The authors did a great job adding some small case studies that **show** how their results generalize to other backgrounds and real images (although these are still pretty close to in-distribution to the test images). The use of a laptop rather than a ball in the real images in particular is a strong change. I have raised all the component scores to 4. **Although I would prefer an option to give a 9/10, since it isn't available I will raise from 8 to 10.**\\n\\nThese tests are then translated into metrics using the established concept of \\u201cacceptability regions\\u201d, which then allows them to score for correctness. Swapping object types, colors, etc in the same scene is easy using the synthetic pipeline and gives an evaluable sense of robustness.\\n\\nThey test a broad set of VLMs, including multilingual ones in 109 translated languages. Some models better represent egocentric vs object centric relations.\\n\\nOverall, they find that the VLMs do have some acceptable egocentric understanding capabilities but they have severe robustness issues. They find that English notions of frame of reference are held across most test languages, not reflecting the diversity of notions.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"Deep utilization of synthetic pipeline to really probe spatial perception from several angles.\\n\\nDemonstrations of anglewise judgements are compelling and surprising! 
(eg., figure 7)\\n\\nGreat to see the extra effort of testing \\n\\nConclusions and discussion are thoughtful and well supported.\", \"weaknesses\": \"~Sole reliance on synthetic data.~ A brief ablation over natural images would be nice to see to give a sense of how well these findings generalize. They added the case study, the results generalize (though the case study is still close to ID the synth data)!\\n\\n~Limited diversity of backgrounds. This is another form of variation that may have important implications for generalization.~ They added the case study, the findings generalize!\\n\\nThe multilingual experiments, while welcome, have limited time to breathe in the overall paper with just a few paragraphs and tables stuck in appendices. However, I think this is a great problem to have as the paper is densely packed with quality experimentation.\", \"questions\": \"Small grammar errors, eg 476: \\u201cDoes multilingual VLMs..\\u201d -> \\u201cDo multilingual\\u2026\\u201d Maybe do a brief grammarly check?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"10\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper discusses the performance of VLMs in handling ambiguity in spatial expressions. The authors constructed a new benchmark -COMFORT to systematically test the spatial reasoning capabilities of VLMs, and conducted analysis on COMFORT.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This paper delves into \\\"Do VLM represent space and how\\\". Specifically, it considers the concept of Frames of Reference (FoR) and constructs a framework for understanding how VLMs perceive spatial information, taking into account the ambiguities that FoR might bring. The COMFORT framework is systematic and comprehensive, and its task formulation is rigorous. 
The authors create challenging but realistic queries using synthetic 3D images and corresponding textual descriptions, and propose systematic metrics. From the perspective of benchmark construction, this is a systematic and rigorous work.\\n\\n2. The authors tested different VLMs and analyzed how they internally understand spatial representations. The testing methods are reliable.\\n\\n3. The authors extended the test cases to cover 109 languages across 170 regions, further testing how VLMs respond to ambiguities in spatial relationship descriptions under different languages.\", \"weaknesses\": \"As the author claimed, \\\"spatial expressions in situated communication can be ambiguous.\\\" Therefore, considering that humans also exhibit different understandings when dealing with similar issues of ambiguity, even if VLMs demonstrate different interpretations, it does not necessarily imply a clear superiority or inferiority. In other words, the evaluation metrics proposed in the paper are not absolute criteria, like egocentric is not naturally better than addressee-centered, but rather serve as perspectives for observing how VLMs understand spatial information.\\n\\nFrom this standpoint, I believe the author should include more analysis of how humans understand ambiguities in spatial relations (relevant information can be found in psychological and cognitive science research). 
This would help us better understand the differences between VLMs\\u2019 understanding of these ambiguities and the human perspective, and ultimately facilitate a better alignment of spatial reasoning in vision-language models with human cognition.\\n\\nHope to see further discussion on these points.\", \"questions\": \"Please refer to the comments in the Weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper proposes a novel evaluation protocol and benchmark, COnsistent Multilingual Frame Of Reference Test (COMFORT), to examine the spatial reasoning capabilities of vision and language models and evaluates 9 state-of-the-art models using this framework and demonstrate their significant shortcomings. This is an interesting framework and accompanying analysis of models and reviewers suggest that this paper will be a good contribution to the conference.\", \"additional_comments_on_reviewer_discussion\": \"The rebuttal period resulted in good outcomes and authors have improved their framework based on the reviewer comments and suggestions by adding new case studies.\"}" ] }
84WmbzikPP
Stiefel Flow Matching for Moment-Constrained Structure Elucidation
[ "Austin Henry Cheng", "Alston Lo", "Kin Long Kelvin Lee", "Santiago Miret", "Alan Aspuru-Guzik" ]
Molecular structure elucidation is a fundamental step in understanding chemical phenomena, with applications in identifying molecules in natural products, lab syntheses, forensic samples, and the interstellar medium. We consider the task of predicting a molecule's all-atom 3D structure given only its molecular formula and moments of inertia, motivated by the ability of rotational spectroscopy to measure these moments. While existing generative models can conditionally sample 3D structures with approximately correct moments, this soft conditioning fails to leverage the many digits of precision afforded by experimental rotational spectroscopy. To address this, we first show that the space of $n$-atom point clouds with a fixed set of moments of inertia is embedded in the Stiefel manifold $\mathrm{St}(n, 4)$. We then propose Stiefel Flow Matching as a generative model for elucidating 3D structure under exact moment constraints. Additionally, we learn simpler and shorter flows by finding approximate solutions for equivariant optimal transport on the Stiefel manifold. Empirically, enforcing exact moment constraints allows Stiefel Flow Matching to achieve higher success rates and faster sampling than Euclidean diffusion models, even on high-dimensional manifolds corresponding to large molecules in the GEOM dataset.
[ "3D molecular generative models", "flow matching", "Stiefel manifold", "structure elucidation" ]
Accept (Poster)
https://openreview.net/pdf?id=84WmbzikPP
https://openreview.net/forum?id=84WmbzikPP
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xqmGyMOJi6", "wXb8T4cxPw", "vsUYAOqSvW", "rkKBqWVHeA", "qlee9fwxNK", "jxkKq5EM25", "gz4oghggyt", "ceBMBT4phl", "b5Cfpdj6CW", "XB1JejBQr0", "StH3dZO99y", "QGW9umE52c", "ITsOyLNNEI", "ARLyvdwQXH", "AA7IzC06WM", "5D4Op9prsQ", "3pHQ8lsSzg" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "decision", "official_comment", "official_comment", "meta_review", "official_comment" ], "note_created": [ 1732277753042, 1732277920170, 1732290113989, 1732278055143, 1730687524816, 1732435764776, 1732277952873, 1732277995690, 1730146939100, 1729536402013, 1730703798140, 1732495009204, 1737524258767, 1732278297769, 1732278215411, 1735338149355, 1732278041271 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13419/Authors" ], [ "ICLR.cc/2025/Conference/Submission13419/Authors" ], [ "ICLR.cc/2025/Conference/Submission13419/Reviewer_oa92" ], [ "ICLR.cc/2025/Conference/Submission13419/Authors" ], [ "ICLR.cc/2025/Conference/Submission13419/Reviewer_f4fB" ], [ "ICLR.cc/2025/Conference/Submission13419/Reviewer_Locv" ], [ "ICLR.cc/2025/Conference/Submission13419/Authors" ], [ "ICLR.cc/2025/Conference/Submission13419/Authors" ], [ "ICLR.cc/2025/Conference/Submission13419/Reviewer_oa92" ], [ "ICLR.cc/2025/Conference/Submission13419/Reviewer_519h" ], [ "ICLR.cc/2025/Conference/Submission13419/Reviewer_Locv" ], [ "ICLR.cc/2025/Conference/Submission13419/Reviewer_519h" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission13419/Authors" ], [ "ICLR.cc/2025/Conference/Submission13419/Authors" ], [ "ICLR.cc/2025/Conference/Submission13419/Area_Chair_QJD6" ], [ "ICLR.cc/2025/Conference/Submission13419/Authors" ] ], "structured_content_str": [ "{\"comment\": \"We thank all reviewers for their thoughtful and constructive feedback. 
We are glad that Reviewers *Locv*, *oa92*, and *519h* recognize our application of the Stiefel manifold to molecular structure elucidation to be \\u201coriginal\\u201d and \\u201cthe very first work\\u201d, and that they find our mathematical presentation of the Stiefel manifold and Stiefel geodesics to be \\u201cthorough\\u201d, \\u201ceasy to follow\\u201d, and \\u201cconsistent and well-written\\u201d. Reviewers *Locv*, *f4fB*, and *519h* also recognize that our contribution empirically improves both the accuracy and efficiency of moment-constrained structure elucidation, with practical \\u201capplications in chemistry, pharmacology, and materials science\\u201d.\\n\\n**Empirical utility.** We first make a point that, despite tackling a heavily underconstrained problem, our results show that it is actually possible to take molecular formula and 3 moments of inertia, and then generate the all-atom 3D structure at 0.25 \\u00c5 resolution, for:\\n- 27.4% of the test set of QM9 (3580/13033), when combining KREED-XL, Stiefel FM, and Stiefel FM-OT\\n- 7.9% of the test set of GEOM (2297/29203), when combining KREED-XL, Stiefel FM (filter), and Stiefel FM-OT (filter)\\n\\nWe now respond to common comments.\\n\\n**Limited baselines.** Reviewers *Locv*, *f4fB*, and *519h* commented that Stiefel FM was compared to limited baselines. We would like to clarify a few items:\\n1. KREED-XL is a reflection-equivariant diffusion model that is a specialization of approaches like E(3)-equivariant diffusion [1], with differences being that KREED-XL does not change atom types, that it relaxes equivariance from E(3) to reflections, and has a more expressive architecture.\\n2. KREED-XL is different from KREED (Cheng et al.) because KREED assumes access to unsigned substitution coordinates as extra input, and they have different architectures.\\n3. 
KREED-XL-DPS is a significant baseline because Diffusion Posterior Sampling explicitly incorporates the closed-form formula of the moments in an extra guidance term while sampling the diffusion model.\\n4. We add an additional baseline, KREED-XL-proj, which simply takes all samples of KREED-XL and projects them onto the feasible manifold. The projection satisfies moment constraints exactly and does not change success rate, though it slightly reduces stability, since it has no effect on correct structures while distorting incorrect structures.\\n\\n[1] Hoogeboom, E., Satorras, V. G., Vignac, C., & Welling, M. (2022, June). Equivariant diffusion for molecule generation in 3d. In International conference on machine learning (pp. 8867-8887). PMLR.\\n\\n**Underfitting GEOM.** Reviewers *Locv* and *519h* commented that Stiefel FM on GEOM struggles with generating valid and stable structures. We point out that these validity and stability metrics are *averaged over all generated samples*, so validity and stability appear worse because when Stiefel FM fails, it tends to output a very bad structure. But, when Stiefel FM *does* generate the correct structure, it is valid and stable, which makes validity and stability a useful filtering mechanism for Stiefel FM, but not for KREED-XL. We note the greater efficiency (in terms of success rate per NFE) of Stiefel FM-OT for elucidating structure *despite* underfitting the dataset.\"}", "{\"comment\": \"Thank you for your thoughtful review of the paper. We appreciate your support of the work for its originality, readable presentation, and practical utility. We address your comments below:\\n\\n**Underfitting GEOM, concrete directions for improvement.** The difficulty of Stiefel FM may be attributed to pathologies in Riemannian flow matching with the geodesic distance on compact manifolds:\\n1. The velocity field under the geodesic distance suffers from discontinuity at the cut locus, as argued in [1].\\n2. 
The time-dependent probability density suffers from a shrinking support, which has been discussed for cases like the simplex [2] or $SO(3)$ (Appendix G.2 of [3]).\\n\\nOther works empirically report degraded results of Riemannian flow matching versus Riemannian diffusion on the torus [1,4]. These pathologies motivate the development of other probability paths for Stiefel flow matching, such as Stiefel diffusion, or flows which asymptotically land on the Stiefel manifold [6], both of which are discussed for future work.\\n\\nAt sample time, some potential improvements for flow matching are to use corrector sampling [5], or to enhance the flow with a jump process [3]. At training time, we have tried other flow matching techniques, such as a logitnormal timestep schedule [7] and stochastic flow matching [8], but with limited improvement. Minibatch optimal transport [9] is technically challenging to apply here because each example has a variable number of atoms.\\n\\n**Connections with discrete flow matching.** In our setting, the only degrees of freedom are the continuous atom positions, which does not include the discrete atomic identities because we assume a fixed molecular formula. In future work where molecular formula is unknown, the jump processes of generator matching [3] could enable a multimodal flow which simultaneously varies continuous atom positions and discrete atom types. This process would jump between Stiefel manifolds corresponding to different molecular formulae. Importantly, jump processes would not require initial knowledge of the number of atoms.\\n\\n**Heuristic alignment and validation.** We discuss each method of alignment and validation.\\n- The 3D alignment used in evaluating RMSD does not use a heuristic: it finds the node-permutation and reflection which yields the smallest least-squares Euclidean deviation. 
This is done by solving 8 linear assignment problems, one for each reflection, and taking the minimum.\\n- The alignment of noise and data samples for optimal transport minimizes Stiefel distance using greedy randomized search. We cannot rely on a linear assignment solver since we must minimize the nonlinear Stiefel distance. We suspect that solving this assignment problem is more difficult than the quadratic assignment problem, which is NP-hard already, and leave full validation of this hypothesis to future work. We have observed that decreasing the number of iterations of the heuristic can yield suboptimal paths, but can still provide useful learning signal. This is in line with other works where optimal transport approximated using mini-batches can still provide valuable learning signal [9].\\n- The validity metric uses rdkit's `rdDetermineConnectivity`, which assigns bonds based on a lookup of bond lengths and valency considerations.\\n\\n**Limited baselines.** See global response.\\n\\n**Longer than 9 pages.** The maximum page length is 10 according to the [call for papers](https://iclr.cc/Conferences/2025/CallForPapers):\\n> the main text must be between 6 and 10 pages (inclusive). ... We encourage authors to be crisp in their writing by submitting papers with 9 pages of main text. We recommend that authors only use the longer page limit in order to include larger and more detailed figures. However, authors are free to use the pages as they wish, as long as they obey the page limits.\"}", "{\"title\": \"Thank you for your response\", \"comment\": \"I would like to thank the authors for their efforts and responses.\\n\\nI understand that the train/test split is applied, but my question was on OOD data. I suggest the authors to include the discussion of the OOD evaluation in the revised manuscript. If the method can generalize well due to the nature of the dataset, then I recommend highlighting this point. \\n\\nThe authors do address most of my concerns. 
As such, I will raise my score.\"}", "{\"comment\": \"**$\\\\mathrm{St}(n, 3)$ vs $\\\\mathrm{St}(n, 4)$.** The true degrees of freedom lie in $\\\\mathrm{St}(n-1, 3)$, with a dimension of $3n-9$, accounting for $3n$ coordinates and removing 3 translational, 3 rotational, and 3 moment degrees of freedom. $\\\\mathrm{St}(n-1, 3)$ is contained within $\\\\mathrm{St}(n, 3)$, which in turn is contained within $\\\\mathrm{St}(n, 4)$. The reason we consistently use $\\\\mathrm{St}(n, 4)$ in the main text is to be specific about \\\"which\\\" $\\\\mathrm{St}(n, 3)$ we are referring to, as it must maintain orthogonality to the mass vector. We only refer to $\\\\mathrm{St}(n, 3)$ two times in the main text: (1) when we refer to the first 3 columns of $\\\\boldsymbol{U}$, which does lie in $\\\\mathrm{St}(n, 3)$; and (2) to say that the logarithm of $\\\\mathcal{M} \\\\subseteq \\\\mathrm{St}(n, 4)$ can be computed using the logarithm of $\\\\mathrm{St}(n, 3)$. Theorem 4, which proves this, is postponed to the Appendix because its consequences are just that the logarithm can be computed slightly faster.\\n\\n**Sampling procedure is not clear.** We include the ODE to be solved by sampling in Section 3 of the main text, and we include an algorithm box in the appendix.\\n\\n**Timestep embedding.** We clarify that the appendix refers to *embedding* the timestep $t$ for the neural network using sinusoidal features with a *wavelength range* from 0.001 to 1. Anyways, we empirically do sample $U(0,1)$ without the endpoints to avoid potential numerical issues, so we update this in the text.\\n\\n**Require citation: Low-energy conformer has high population.** In rotational spectrometers, the molecular sample is usually cooled before measurement, which makes it a reasonable assumption that the lowest-energy conformer has the highest population. 
We have added a citation for this.\\n\\n**More parameters.** Our model is trained with more parameters because of the expected increased difficulty of learning a velocity on the Stiefel manifold, rather than in Euclidean space. We ensure fairness by training a reflection-equivariant diffusion model, KREED-XL, with identical architecture to the flow model. Under an equal architecture, Stiefel FM obtains a higher success rate per number of function evaluations, which demonstrates that the improvement results from the new generative approach.\\n\\n**Ill-sentence.** Thank you for catching this.\"}", "{\"summary\": \"The paper proposes a generetive model approach for generating 3D molecular structures conditioned on the molecular formula and moments of inertia. The generative method is an extension of Riemannian flow matching with equivariant optimal transport on the Stiefel manifold.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper shows the effectiveness of the proposed approach through evaluations over two molecule benchmarks: QM9 and GEOM. The method achieved lower RMSD\\u00a0for the generated molecules with a lower number of function evaluations (NFE) compared to the Euclidean diffusion model.\", \"weaknesses\": \"Limited model comparisons: The paper only compares its results against one method in the literature, where it might consider other works on Riemannian generative models.\", \"questions\": \"Does the Stiefel manifold have any constraints? How does it handle the increased complexity of larger molecules and the number of conformers?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I want to thank the authors for their comments. To me, the contribution by the authors seems novel and insightful. 
I will raise my score.\"}", "{\"comment\": \"**Optimal transport appears to hurt success.** It is not necessarily true that optimal transport reduces success rate - in fact it improves success rate on GEOM. What is true for both datasets is that the sample generation paths that land on the true structure often are longer than the sample generation paths of incorrect structures. We show histograms in Appendix Figure 7 to this end. We are unsure of the reason for this correlation, but one explanation is that initial points are sampled anywhere uniformly on the manifold, whereas for success they must travel to the single true structure. In contrast, there are many incorrect structures all over the manifold, which may end up on average closer to random initial points.\\n\\nRegarding adaptive tradeoff, we could not find a meaningful difference that discriminates between success cases of Stiefel FM and Stiefel FM-OT, and we tried number of atoms, number of distinct atom types, total molecular weight. We note it is easy and appropriate to simply train and sample from multiple models, as the real-world scenario assumes large computational resources.\\n\\nFor relaxing constraints, see above comment on flows which land on the Stiefel manifold.\\n\\n**Canonical and alternative metrics.** A one-parameter family of metrics on the Stiefel manifold exists [10], which includes the canonical metric as a special case. We use the canonical metric exclusively because the fast algebraic algorithm to compute the logarithm [11] only supports the canonical metric. Recent algorithms [12] extend the algebraic approach to the family of metrics, but we still find the canonical metric to be the fastest.\\n\\nAll discussion has been included in the revised paper.\\n\\n[1] Lou, A., Xu, M., Farris, A., & Ermon, S. (2023). Scaling Riemannian diffusion models. 
Advances in Neural Information Processing Systems, 36, 80291-80305.\\n\\n[2] Stark, H., Jing, B., Wang, C., Corso, G., Berger, B., Barzilay, R., & Jaakkola, T. (2024). Dirichlet flow matching with applications to dna sequence design. arXiv preprint arXiv:2402.05841.\\n\\n[3] Holderrieth, P., Havasi, M., Yim, J., Shaul, N., Gat, I., Jaakkola, T., ... & Lipman, Y. (2024). Generator Matching: Generative modeling with arbitrary Markov processes. arXiv preprint arXiv:2410.20587.\\n\\n[4] Zhu, Y., Chen, T., Kong, L., Theodorou, E. A., & Tao, M. (2024). Trivialized Momentum Facilitates Diffusion Generative Modeling on Lie Groups. arXiv preprint arXiv:2405.16381.\\n\\n[5] Gat, I., Remez, T., Shaul, N., Kreuk, F., Chen, R. T., Synnaeve, G., ... & Lipman, Y. (2024). Discrete flow matching. arXiv preprint arXiv:2407.15595.\\n\\n[6] Gao, B., Vary, S., Ablin, P., & Absil, P. A. (2022). Optimization flows landing on the Stiefel manifold. IFAC-PapersOnLine, 55(30), 25-30.\\n\\n[7] Esser, P., Kulal, S., Blattmann, A., Entezari, R., M\\u00fcller, J., Saini, H., ... & Rombach, R. (2024, March). Scaling rectified flow transformers for high-resolution image synthesis. In Forty-first International Conference on Machine Learning.\\n\\n[8] Bose, A. J., Akhound-Sadegh, T., Huguet, G., Fatras, K., Rector-Brooks, J., Liu, C. H., ... & Tong, A. (2023). Se (3)-stochastic flow matching for protein backbone generation. arXiv preprint arXiv:2310.02391.\\n\\n[9] Tong, A., Fatras, K., Malkin, N., Huguet, G., Zhang, Y., Rector-Brooks, J., ... & Bengio, Y. (2023). Improving and generalizing flow-based generative models with minibatch optimal transport. arXiv preprint arXiv:2302.00482.\\n\\n[10] H\\u00fcper, K., Markina, I., & Leite, F. S. (2021). A Lagrangian approach to extremal curves on Stiefel manifolds. AIMS.\\n\\n[11] Zimmermann, R., & Hu\\u0308per, K. (2022). Computing the Riemannian logarithm on the Stiefel manifold: Metrics, methods, and performance. 
SIAM Journal on Matrix Analysis and Applications, 43(2), 953-980.\\n\\n[12] Mataigne, S., Zimmermann, R., & Miolane, N. (2024). An efficient algorithm for the Riemannian logarithm on the Stiefel manifold for a family of Riemannian metrics. arXiv preprint arXiv:2403.11730.\"}", "{\"comment\": \"Thank you for your comments on the paper. We appreciate your recognition of the effectiveness of our evaluation for our proposed method and the fact that we achieve better RMSD with a lower number of function evaluations. We address your comments below.\\n\\n**Limited baselines.** See global response.\\n\\n**Stiefel manifold constraints.** The Stiefel manifold *imposes* the constraints of moments of inertia. The only other constraint is that the molecule must be energetically stable, which should be learned by the model.\\n\\n**Large molecules.** The performance of Stiefel flow matching on large molecules is shown by its performance on GEOM. All conformers are used during training, but evaluation considers only the lowest energy conformer. This is because in a rotational spectroscopy measurement, the sample is cooled, and so the lowest-energy conformer is likely to have the highest population, which therefore shows the strongest signal in the spectrometer.\\n\\n[1] Hoogeboom, E., Satorras, V. G., Vignac, C., & Welling, M. (2022, June). Equivariant diffusion for molecule generation in 3d. In International conference on machine learning (pp. 8867-8887). PMLR.\"}", "{\"summary\": \"This paper proposes the use of flow matching generative models to obtain the 3D structure of molecules. This is achieved by explicitly enforcing problem constraints using Stiefel manifold and its properties.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper is generally well-written and easy to follow. 
The math is sound and clearly presented.\", \"The use of Stiefel manifold and its connection to the problem and the constraints in (2).\", \"The use of geodesics in the Stiefel manifold that provides straight lines for navigating the manifold.\"], \"weaknesses\": [\"How generalizable is the proposed method? Can the authors perform OOD evaluation?\", \"Why the need to generate K=10 samples and report only the one with the lowest RMSD? Is this the standard practice? if yes, support is needed. If not, then performing this indicates instability which requires further investigation.\", \"The evaluation metric for the proposed method seems to depends on RMSD thresholds defined by the authors. Have these thresholds been employed by previous methods that utilize neural networks for predicting the 3D structure of molecules? If yes, then citations are needed. The authors in Cheng et al. (2024) used correctness and reported the exact RMSD values.\", \"Why not reporting the RMSD directly? For example, when predicting a protein 3D structure (e.g., in AlphaFold), the RMSD is reported directly after alignment.\"], \"questions\": [\"### Major Comments/Questions:\", \"In lines 175 to 177, how practical is the assumption $n\\\\geq 5$? How many examples with $n<5$ are removed from the datasets?\", \"To compute the loss in (8), the authors listed 4 requirements. All are well explained other than the second. Further clarification is needed here.\", \"Many details of the paper is put in the Appendix. This is disrupting the flow of the paper. I suggest that the authors to place an algorithmic procedure for the sampling part.\", \"### Minor Comments/Questions:\", \"Based on the definition of $\\\\mathbf{m}$, shouldn't the total number of masses $M$ be equal to $n$?\", \"Vector $\\\\mathbf{a}$ is not being used in the problem definition other than specifying the number of atoms.\", \"The use of $st(n,3)$ instead of $st(n,4)$ in the main body is not well justified. 
Why Theorem 4 is put in the Appendix?\", \"The sampling procedure is not as clearly presented as the training of $v_\\theta$. Mathematical procedure of how the pre-trained $v_\\theta$ is employed to perform sampling is needed.\", \"In practice, the authors use t=0.001 to 1, but the uniform distribution in (8) uses U[0,1]. Shouldn't this be U(0,1]?\", \"The last sentence in the Datasets paragraph in Section 4 requires support by a citation.\", \"Why the FM model has more parameters than the diffusion model in Cheng et al. (2024)?\", \"Ill-sentence in lines 445 to 446.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work proposed a novel flow-based generative framework to address the generation task of molecules with moment constraints. The authors noted the connection between the molecules with fixed moments and the Stiefel manifold and applied flow matching on such a manifold to ensure exact moment constraints. Empirical experiments demonstrated better generation results that satisfy these moment constraints.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"The manifold structure of n-body particles with fixed moments was demonstrated to be a Stiefel manifold in the paper, which, to the best of my knowledge, is the very first work.\", \"Equipped with such a Riemannian structure, the proposed framework can effectively navigate through the Stiefel manifold with the moment constraints exactly satisfied. Empirical experiments also demonstrated zero errors against the moment constraints.\", \"A thorough mathematical introduction to the Stiefel manifold was provided in the Appendix, together with efficient algorithms to calculate the exponential and logarithm maps on the Stiefel manifold, which makes practical training feasible.\"], \"weaknesses\": \"1. 
The idea of the moment-constrained generation is not well-motivated in the paper.\\n - **Molecules are not rigid bodies**. The dynamics of a molecule (e.g., stretching, twisting of bonds) make it possible for it to have various conformations that vary slightly by their principal moment of inertia. In other words, the moment of inertia measured by rotational spectroscopy is an ensemble property that reflects the average (or effective) moment of inertia of a molecule. In this sense, it seems more reasonable to allow for some violation of the moment constraints.\\n - **Practical applications are missing**. It remains unclear what the practical applications of generating molecules with specific moments are. As an example, generating molecules with a specific HOMO-LUMO gap can be significant to the discovery of photoelectric molecules. The moment of inertia, however, does not hold a specific connection to real-world applications.\\n\\n2. **The number of baselines compared in the paper was small**. Indeed, only KREED was compared. There are other baselines available:\\n - The conditional generative model in [1] where molecular properties are fed as additional information during both training and sampling for the conditional generation of molecules with desired properties. As the generative model implicitly learns the connection between the property and the molecule, it does not guarantee the exact satisfaction of constraints, but in practice, it does provide fairly competitive generation results. \\n - As it is always possible to project the generation to the corresponding Stiefel manifold according to Appendix B.6, another natural baseline would be directly projecting the unconstrained generation onto the Stiefel manifold, which also ensures the exact moment constraints.\\n\\n3. **The experimental results were not convincing enough** to demonstrate the superior performance of the proposed method. 
Although the moment constraint errors were indeed zero, the validity and stability scores were worse. The RMSD percentage also outperformed the baseline by only a small margin on the GEOM dataset.\\n\\n4. **The filtering procedure on the GEOM evaluation may give the proposed model inappropriate advantages**, as it is essentially picking from a larger number of generations with a high likelihood of finding better samples. A similar procedure should be applied to the KREED baseline for a fair comparison.\\n\\n[1] Hoogeboom, Emiel, et al. \\\"Equivariant diffusion for molecule generation in 3d.\\\" International conference on machine learning. PMLR, 2022.\", \"questions\": \"1. What are the practical applications of moment-constrained molecule generation? Why is it important to enforce the exact moment constraints? See Weakness 1.\\n2. What is the performance of additional baselines? See Weakness 2.\\n3. Can you follow the same postprocessing procedure and compare the proposed model with the KREED-filter baseline performance? See Weakness 3 & 4.\\n4. What is the empirical training and sampling time of the proposed method compared to a normal unconstrained Euclidean flow matching model? I would expect extra time as the exponential map and logarithm map on the Stiefel manifold are far more complex to compute. Nonetheless, it is always beneficial to know the empirical time.\\n5. The logarithm map was calculated with 20 iterations. Does it provide a good enough approximation? Can you provide an analysis of the logarithm errors with respect to the number of iterations to better demonstrate your choice of 20 iterations (and 1 iteration for OT)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper addresses the molecular structure elucidation problem using a generative approach. 
This task involves inferring a molecule's 3D structure from its molecular formula and moments of inertia. Traditional generative models conditioned on moments of inertia fail to leverage the precision offered by rotational spectroscopy, which measures moments with high accuracy. The paper introduces Stiefel Flow Matching, a generative model that operates on the Stiefel manifold, where the set of point clouds with fixed moments of inertia is embedded. Empirical results demonstrate higher accuracy and efficiency in generating 3D molecular structures compared to Euclidean diffusion models.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": [\"As far as I can tell, the approach used in the paper of embedding the molecular structure elucidation task in the Stiefel manifold is original. This approach leverages manifold geometry to respect exact moment constraints, improving over traditional Euclidean diffusion models.\", \"In general, the paper is written and formatted very well. I appreciate the consistent and well-written formalisms and figures, making the otherwise quite abstract paper well-readable. Moreover, the formalisms support the methodology of the work well.\", \"The results show some performance improvements, particularly in maintaining moment constraints and reducing sampling cost.\", \"This work has potential applications in chemistry, pharmacology, and materials science, where precise molecular structures are essential. By leveraging constraints from rotational spectroscopy, this model could significantly improve the accuracy of 3D structure predictions for unknown molecules.\"], \"weaknesses\": [\"Most obviously, for larger datasets like GEOM, the model struggles with validity and stability of generated structures, indicating possible underfitting issues. 
I think some more concrete suggestions on how to improve these issues would be greatly beneficial.\", \"Some connections to recent approaches to molecular generation with discrete flow matching are missing in the paper. Even though these problems consider a slightly different task as they operate on discrete domains, some intuition of why the authors' approach does not involve discrete dynamics/generation at all would be useful, as it intuitively would be beneficial for this task.\", \"As far as I understand it, some of the alignment and validation steps rely on heuristic checks. Some reflection on how task-specific these are, or some intuition behind the thereby introduced bias would be useful.\", \"While the authors provide some baseline comparisons, they are somewhat limited and some discussion on how Stiefel Flow Matching specifically outperforms or complements existing diffusion models on continuous or Riemannian spaces could benefit the paper. Moreover, adding more experimental results would aid in this too.\", \"The paper is longer than 9 pages.\"], \"questions\": [\"The model seems to underfit the GEOM dataset, with low stability and validity rates. Could techniques from recent flow matching techniques or regularization methods improve performance? Alternatively, could a hybrid approach that combines Stiefel Flow Matching and discrete approaches maybe be beneficial here?\", \"While optimal transport reduces trajectory length, it appears to slightly reduce success rates. Would a more flexible trade-off strategy that selectively applies optimal transport based on the target molecule\\u2019s complexity yield better outcomes? Some comparisons here would be nice. On a similar note, the paper embeds the structure space on the Stiefel manifold, which assumes exact moment constraints. 
Have the authors considered using approximate embeddings on simpler manifolds for faster computation at the expense of slight constraint violations?\", \"Do I understand correctly that the canonical metric on the Stiefel manifold is used? Given recent developments in Riemannian FM, did the authors explore alternative Riemannian metrics?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Decision to raise my scores\", \"comment\": \"I thank the authors for their detailed explanation regarding my concerns on the practical application of moment-constrained generation in the chemistry domain. With the additional baseline results and further ablation studies on the iterative algorithm for the exponential and logarithm maps, I believe the authors have fully addressed my concerns. Therefore, I have decided to raise my score from 5 to 8 and champion this paper for its rigorous mathematical background and good experimental results.\"}
This is in contrast to KREED-XL, which can readily generate valid structures, but which may not be the correct structure.\\n\\nLastly, Stiefel FM-OT (filter) still has a lower computational cost than KREED-XL: we report empirical timing of sampling below. Though KREED-XL could be adapted to Euclidean flow matching to reduce NFE cost, it is not clear that Euclidean flow matching would maintain its current success rate, since stochastic sampling empirically leads to higher quality samples [1], and this structure elucidation task particularly calls for low-temperature sampling. It is not clear how to fairly perform a filtering procedure for KREED-XL due to its much larger computational cost: KREED-XL requires 71.3 seconds per example, whereas Stiefel FM-OT (filter) requires 45.0 seconds per example.\\n\\n[1] Karras, T., Aittala, M., Aila, T., & Laine, S. (2022). Elucidating the design space of diffusion-based generative models. Advances in neural information processing systems, 35, 26565-26577.\\n\\n**Training and sampling overhead of Stiefel FM.** We summarize wall-clock timing on QM9 and GEOM below:\\n\\nThe cost of Euclidean flow matching can be estimated by dividing the sampling cost of KREED-XL by 5, though success rate may not remain the same, for the reasons mentioned above.\\n\\n| Method | Dataset | Training (min / epoch) | Training (it / s) | Sampling (seconds / K=10 samples) |\\n| --- | --- | --- | --- | --- |\\n|KREED-XL | QM9 | 1.02 | 6.7 | 13.9 |\\n|Stiefel FM | QM9 | 1.32 | 5.1 | 2.9 |\\n|Stiefel FM-OT | QM9 | 3.48 | 1.9 | 2.9 |\\n|KREED-XL | GEOM | 225.6 | 17.0 | 71.3 |\\n|Stiefel FM | GEOM | 224.4 | 17.1 | 15.0 |\\n|Stiefel FM-OT | GEOM | 229.8 | 16.7 | 15.0 |\\n\\nThe CPU bottleneck of logarithm computation and optimal transport disappears for GEOM due to a smaller batch size of 24, compared to a batch size of 256 for QM9.\\n\\n**Logarithm convergence.** Most examples converge before reaching 20 iterations. 
Only 1 iteration is used for OT because it is used as a heuristic for search, and it maintains the same relative ordering. We empirically validate this below.\\n\\nWe sample 100k training examples from QM9 or GEOM to be used as $\\\\boldsymbol{U}_1$ and for each example sample one random point $\\\\boldsymbol{U}_0$. We compute the true logarithm using a large number of iterations and compare it to the 20-iteration truncated logarithm. We record error as the infinity norm of the difference between the true and approximate logarithms. We set the convergence threshold to be 1e-6. \\n- QM9: The 20-iteration logarithm converges 97.9% of the time (median 9 iterations to converge), and the median error in case of nonconvergence is 2.4e-4. The Spearman correlation between the 1-iteration approximate distance and the true distance is $\\\\rho=0.88$.\\n- GEOM: The 20-iteration logarithm converges 99.7% of the time (median 9 iterations to converge), and the median error in case of nonconvergence is 8.8e-5. The Spearman correlation between the 1-iteration approximate distance and the true distance is $\\\\rho=0.82$.\\n\\nThese results also empirically show that the 1-iteration approximate distance is an upper bound on the true distance.\\n\\nPlots and histograms are shown in Appendix E.\"}", "{\"comment\": \"Thank you for your thoughtful comments. We appreciate your recognition of the novelty of our work in using the Stiefel manifold to exactly satisfy moment constraints and in providing a practical training procedure for an effective Stiefel generative model. We address your comments point-by-point below.\\n\\n**Molecules are not rigid bodies.** You are correct that molecules are not perfectly rigid, and that naively converting rotational constants to moments yields *effective moments*, not equilibrium moments. We have added a discussion of vibrational flexibility and rovibrational correction in Appendix A.4. 
In summary, exact moment constraints enable the search for the true equilibrium moments *up to the precision of calculating the rovibrational correction*.\\n\\nExperiments observe properties that have been *vibrationally* averaged, including the rotational constants $A(BC)_0$. Vibrational averaging is distinguished from conformational fluctuations such as torsions, which can be frozen out by cooling molecules to their ground vibrational state. However, even in the ground vibrational state, a molecule is still vibrating due to zero-point energy.\\nIf one were to *a priori* guess the equilibrium moments correctly, one could then verify whether they are indeed correct by generating structures, calculating their rovibrational corrections $\\alpha$, and then *checking* their agreement with the experimental rotational constants, via $A(BC)_0 = A(BC)_e - \\frac{1}{2}\\sum_i \\alpha_i^{A(BC)}$.\\nTherefore, given experimental rotational constants $A(BC)_0$, one can use this verification procedure in a fine-grid search for the true equilibrium rotational constants $A(BC)_e$. This is feasible since $A(BC)_e$ consist of only 3 numbers. Experimental precision in $A(BC)_0$ is maintained up to the precision in computing $\\alpha$.\\n\\nTo verify that a structure is the true structure, we must know that:\\n1. its $A(BC)_e$ and $\\alpha$ match $A(BC)_0$ \\n2. gradient norm is near 0.\\n\\nIn contrast, without the manifold constraint, samples may have incorrect moment constraints, and we must rely on quantum chemistry geometry optimization to land in a potential energy minimum that just happens to have the correct rovibrational-corrected moments.\\n\\n**Practical application.** The practical application is in enabling unknown molecule structure elucidation by rotational spectroscopy, as rotational spectroscopy can measure rotational constants, which are related to the equilibrium moments once you know the rovibrational correction. 
Our reported success rates are necessarily low due to the need to evaluate on many different test set molecules - in practice, a structure elucidation campaign focuses on a handful of molecules and can therefore devote large computational resources (e.g. 100x more samples, exhaustive energy calculations).\\n\\nAn immediate practical implication of our results is that it actually is possible to identify unknown molecules by just their moments of inertia and molecular formula. The experimental implication of solving this problem is that it could dramatically shift the role of rotational spectroscopy from merely confirming structures to elucidating unknown molecules. This would provide a new method for structure elucidation which does not require mixture separation and may be useful in identifying unknown molecules in the interstellar medium.\\n\\n**Number of baselines is small.** See global response. Thank you for the suggestion of KREED-XL-proj.\\n\\n**Experimental results not convincing.** We emphasize that validity and stability are reported as averages over *all generated samples*. Despite poorer averages, Stiefel FM-OT (filter) obtains a greater success rate. The fact that an underfitting model can already provide advantage suggests that the direction of the approach is meaningful, and we discuss potential improvements that overcome the pathologies of Riemannian flow matching in the response to Reviewer *Locv*.\"}", "{\"metareview\": \"The submission studies moment-constrained molecular structure identification, in which we are given the chemical formula of a molecule and the moments of its 3d structure. The goal is to sample from a distribution over structures which are both chemically realistic and agree with the observed moments. In previous work, this problem had been investigated using soft moment constraints. 
Here, the paper shows how to exactly enforce moment constraints, observing that these constraints can be converted into a manifold constraint in which allowable structures reside on a geodesic submanifold of the Stiefel manifold. The paper develops generative models on this Stiefel manifold using flow matching and optimal transport.\\n\\nThe paper provides a novel approach to moment-constrained structure generation. In particular, it provides a novel formulation of this problem as generation on a submanifold of the Stiefel manifold. Experiments demonstrate that the proposed method generates a larger fraction of correct structures compared to existing baselines. There are some limitations to the method\\u2019s performance (this is a highly underdetermined inverse problem; all current methods struggle with the majority of instances). After discussion, reviewers converged to a uniform recommendation to accept, praising the paper\\u2019s novel and rigorous formulation and its experimental results.\", \"additional_comments_on_reviewer_discussion\": [\"The initial evaluation was mixed. On the positive side, reviewers noted that the paper develops a novel connection between generating structures with moment constraints and generative modeling on the Stiefel manifold. Reviewers found the paper to be very clearly written. The main points of discussion include:\", \"Performance on larger datasets such as GEOM [Locv,519h]. As noted by the author response, although the method struggles to generate valid and stable structures for some instances, it exhibits a higher success rate compared to existing methods.\", \"Optimal transport does not necessarily improve performance [Locv]. 
As noted by the authors, this is dataset dependent.\", \"Comparison to other models [Locv,f4fB,519h]: the author response added additional comparisons showing reduced computational cost compared to baselines.\", \"Reviewers found that the author response addressed their concerns, and converged to a uniform recommendation of acceptance.\"]}", "{\"comment\": \"Thank you for your helpful comments. We appreciate you saying that the paper is well written and easy to follow, as well as your recognition of the novelty of our method in applying the Stiefel manifold and its geodesics. We address your comments below.\\n\\n**Generalizability of the method.** Our results are on a *molecule-wise* train/test split, which means that at test time, the model has never seen any of the unknown molecule's conformers. The increased success rate of Stiefel FM indicates that the model generalizes better than unconstrained diffusion models. We note that the GEOM dataset covers a large area of chemical space which also covers the extent of applicability of rotational spectroscopy: medium-size molecules. Out-of-distribution detection is not perceived to be an issue, since any reported successes will be checked thoroughly using quantum chemistry. Does this answer your question?\\n\\n**Generating $K=10$ samples.** We take the minimum RMSD because the important metric is whether the generative model can generate the correct structure at least once. We set $K=10$ based on our computational budget of requiring evaluation on all 29203 test set examples. In actual structure elucidation campaigns, there will only be a handful of molecules of interest, so $K$ should be $>1000$. Correspondingly, significant computational resources will be available so that every generated example can be checked by geometry optimization with quantum chemistry, followed by ensuring that the molecule is stable (gradient norm 0) and has the correct moments.\\n\\nWe emphasize that we are proposing a new problem setting. 
Other 3D structure prediction problems such as docking usually query $K$ samples and then rank samples based on a confidence head to output a single top-1 predicted structure. We do not rank samples because every generated sample can be evaluated with quantum chemistry.\\n\\n**RMSD thresholds.** In our proposed problem setting, the RMSD thresholds are set quite small because the only way to verify that a generated sample is correct is to check that it is stable and has the correct moments. Stability is highly sensitive to tiny changes in conformation. The correctness metric of Cheng et al. is too forgiving, because torsion angles can be rotated, giving the same chemical graph but a different conformer with different moments of inertia.\\n\\n**Report RMSD directly.** We do not report mean/median RMSD because these aggregate metrics do not distinguish the performance of each method. Most examples have high RMSD, owing to the high difficulty of the problem, which washes out the mean or median. We therefore show RMSD histograms for transparency on the choice of threshold.\\n\\n**$n \\\\geq 5$ assumption.** There are 6 examples of molecules with $n < 5$ in QM9, and none in GEOM. There are relatively very few small molecules compared to larger molecules. The vast majority of already-known molecules are also small, because they are easy to study. In contrast, we focus on the setting of elucidating unknown molecules. \\n\\n**Explaining loss computation.** Step (2) is geodesic interpolation, which is given by Eq. (9). We refer to the Appendix for numerical algorithms of the exponential and logarithm.\\n\\n**Appendix interrupts flow, add sampling.** We have done our best to make the main text of the paper self-contained in conveying important details related to method and experimental results. We intentionally leverage the appendix for showing greater details for interested readers. 
We are happy to discuss which specific details you think would be a better fit for the main text. We have also added an algorithm box describing sampling as you suggested.\\n\\n**Masses.** You are correct that there are $n$ masses, one for each atom. $M=\\\\sum_i \\\\boldsymbol{m}_i$ is the total mass of the molecule.\\n\\n**Vector $\\\\boldsymbol{a}$.** This is defined this way just so it is clear that atom types are discrete. There is an implicit mapping between atom types and the mass of the most abundant isotope. $\\\\boldsymbol{a}$ is used in the appendix to specify that input features to the neural network include the embedded discrete atom types.\"}" ] }
83le3arfeA
Balanced Hyperbolic Embeddings Are Natural Out-of-Distribution Detectors
[ "Tejaswi Kasarla", "Max van Spengler", "Pascal Mettes" ]
Out-of-distribution recognition forms an important and well-studied problem in computer vision, with the goal to filter out samples that do not belong to the distribution on which a network has been trained. The conclusion of this paper is simple: a good hierarchical hyperbolic embedding is preferred for discriminating in- and out-of-distribution samples. We introduce Balanced Hyperbolic Learning. We outline a hyperbolic class embedding algorithm that jointly optimizes for hierarchical distortion and balancing between shallow and wide subhierarchies. We can then use the class embeddings as hyperbolic prototypes for classification on in-distribution data. We outline how existing out-of-distribution scoring functions can be generalized to operate with hyperbolic prototypes. Empirical evaluations across 13 datasets and 13 scoring functions show that our hyperbolic embeddings outperform existing out-of-distribution approaches when trained on the same data with the same backbones. We also show that our hyperbolic embeddings outperform other hyperbolic approaches and naturally enable hierarchical out-of-distribution generalization.
[ "Hyperbolic learning", "Out-of-distribution detection" ]
Reject
https://openreview.net/pdf?id=83le3arfeA
https://openreview.net/forum?id=83le3arfeA
ICLR.cc/2025/Conference
2025
{ "note_id": [ "w72bifuFER", "s1KLi38GEo", "ritk90jQPK", "jyZZ9rOwIs", "hfn0WoF3Nh", "dsTcPSjQgH", "cyWjXQzC1V", "aejFXOes52", "ZhOLpSZceW", "QFBRuuNlRw", "QCpqRf90qE", "Le1EuY5DF6", "LPC3TlBLIg", "IjSUEZR9td", "IOxTBvcBTN", "FJZt9gCsI5", "8VDtyOh0WK", "5Ues0a1yr5" ], "note_type": [ "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment" ], "note_created": [ 1732399773904, 1730723809611, 1732400217898, 1737523819312, 1732400168772, 1733222750581, 1732399123689, 1732840345260, 1730697898138, 1734719577056, 1733054433833, 1732399493199, 1732399328032, 1732794923665, 1733071275914, 1730353244876, 1730481562388, 1732840128127 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7135/Authors" ], [ "ICLR.cc/2025/Conference/Submission7135/Reviewer_hwZY" ], [ "ICLR.cc/2025/Conference/Submission7135/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7135/Authors" ], [ "ICLR.cc/2025/Conference/Submission7135/Authors" ], [ "ICLR.cc/2025/Conference/Submission7135/Authors" ], [ "ICLR.cc/2025/Conference/Submission7135/Reviewer_ZNiu" ], [ "ICLR.cc/2025/Conference/Submission7135/Reviewer_rQmy" ], [ "ICLR.cc/2025/Conference/Submission7135/Area_Chair_PTqM" ], [ "ICLR.cc/2025/Conference/Submission7135/Authors" ], [ "ICLR.cc/2025/Conference/Submission7135/Authors" ], [ "ICLR.cc/2025/Conference/Submission7135/Authors" ], [ "ICLR.cc/2025/Conference/Submission7135/Authors" ], [ "ICLR.cc/2025/Conference/Submission7135/Reviewer_ZNiu" ], [ "ICLR.cc/2025/Conference/Submission7135/Reviewer_ZNiu" ], [ "ICLR.cc/2025/Conference/Submission7135/Reviewer_UQQS" ], [ "ICLR.cc/2025/Conference/Submission7135/Reviewer_ZNiu" ] ], "structured_content_str": [ "{\"title\": 
\"Response to Reviewer UQQS\", \"comment\": \"We thank the reviewer for their detailed feedback, we address the question below and fix minor mistakes in the paper directly and update Figure 1 to be bigger.\\n\\n**Learning a well defined hierarchy**. We agree that distortion loss helps enforce the hierarchical structure. The reviewer is also correct in noting that the combination of distortion and norm loss helps prevent embeddings from collapsing. Measuring distortion of the learned graph (ex., distortion in Table 3, 7) shows that the hierarchical distances are preserved in the hyperbolic embeddings. To Appendix B, we add a plot of pairwise distances of each nodes which confirms that the hierarchy is indeed preserved. Additionally, we add a plot average norm for all the levels of hierarchy for CIFAR-100 and ImageNet-100.\\n\\n**Assumption of a known and correct hierarchy.** We agree that the method assumes access to a known and correct hierarchy. Updated the limitations to include this.\\n\\n**Distance of OOD.** The observations made by the reviewer in Figure 4 are correct, we add two clarifications:\\n\\n- On average OOD samples are closer to origin. As suggested, a plot showing the distribution of norms is added to the Appendix C.\\n- OOD samples can have a high norm without necessarily being close to the prototypes.\\n\\n\\n**If OOD is highly related to the semantic concepts.** Using hierarchical prototypes makes it easier to handle OOD examples that are closely related to in-distribution data, as shown in hierarchical ablations (CIFAR-OOD-Split). It also leaves room to build on existing hierarchical datasets from other research or come up with new methods, making it a flexible and practical approach for future improvements.\\n\\n**Generalization to other curvatures.** The method can generalize well to curvatures which can be dataset dependant. We keep the curvature fixed to -1 for our main experiments. 
We will add an ablation of curvature to Appendix C.\"}", "{\"summary\": \"The paper proposes Balanced Hyperbolic Learning for improved out-of-distribution detection by utilizing the hierarchical label information and learning more discriminative representations between ID and OOD samples. This is done in a two-step hierarchical hyperbolic embedding optimization process, where the first step involves a combination of distortion loss and norm-based loss to learn the hyperbolic class prototypes from the label hierarchy. Then, the second step involves obtaining the hyperbolic representation for each image by optimizing a hyperbolic distance-based cross entropy loss. Experimental results are provided on two ID datasets for a suite of near and far OOD datasets, and involve comparison with well-known OOD detection techniques from the literature. \\n\\n---\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"## Strengths\\n\\n1. The paper is generally well organized. \\n\\n2. Connections between hyperbolic representation learning and out-of-distribution detection have been evidenced by prior work, and new contributions at this intersection are interesting and valuable to the community. \\n\\n3. The authors provide a good coverage of related works in out-of-distribution detection, hyperbolic embeddings of images, and hyperbolic learning of hierarchies, to place their proposed method in the right context. \\n\\n4. The authors propose generalizations of existing OOD detection scores to hyperbolic variants which are simple and easy to adapt. \\n\\n5. The hyperbolic distance-based loss function during optimization as in (11) is intuitive and naturally suitable for learning discriminative representations for distance-based OOD detection \\n\\n---\", \"weaknesses\": \"## Weaknesses\\n\\n1. 
Some of the claims made by the authors are not substantiated by prior work or experimental evidence, for instance L077 \\u201cexisting hyperbolic embedding methods are biased towards deeper and wider sub-trees, with smaller sub-trees pushed towards the origin\\u201d - how do the authors define the bias and how is this verified experimentally? \\n\\n2. Key details of the setup and specific experiments are missing, making it difficult to accurately evaluate the comparison with prior methods and impact of the work (see list of questions below for missing details) \\n\\n\\n3. The goal and setup of the OOD detection task is to improve OOD detection performance while maintaining the ID accuracy, however the authors do not report the ID accuracy as compared to the baseline methods for either the CIFAR100 or the ImageNet100 datasets. The ID accuracy is reported for only a subset of methods for CIFAR100 in Table 3, and the reported ID accuracy for the proposed method seems to be lower than the accuracies reported by other methods, for instance CIFAR 100 ID acc is 73.4 whereas the CIDER paper reports ID acc of 75.35 on CIFAR100 with ResNet34 (page 19). \\n\\n\\n4. The design of the loss function in Eq. 7 can potentially lead to scale mismatch issues due to the difference in the source and nature of the hyperbolic distance $d_B$ and graph distance $d_G$. $d_B$ grows non-linearly as the points approach the boundary and can dwarf $d_G$ which remains typically bounded and grows linearly w.r.t the edge counts, especially in smaller graphs. This can cause mis-alignment where large $d_G$ does not map proportionately to large $d_B$, did the authors investigate the scales of these terms and the overall loss trends? \\n\\n\\n5. 
Some pivotal connections to prior works are not fully acknowledged or discussed, for instance the primary hypothesis of this paper from Fig 1(b) and L082 - \\u201cOOD samples lie between ID clusters and the origin\\u201d is similar to an identical hypothesis proposed in CIDER (Figure 2, page 4 of the CIDER paper); albeit in hyperspherical embeddings, the key idea remains the same. \\n\\n---\", \"questions\": \"## Questions\\n\\n1. What is the loss function and experimental setup used for all the baseline (non-hyperbolic) experiments as reported in Tables 1 and 2? L250-253 mention using the \\u201csame features as in existing works for the most direct comparison to Euclidean-trained counterparts\\u201d, which indicates that the methods and losses from the original works are used, whereas L310-315 mention \\u201cfor the baseline and ours, we use the exact same backbone and training procedure\\u201d, which is confusing since the loss functions for the two settings (original and proposed balanced hyperbolic) are expected to be different. Additionally, did the authors experiment with the Supervised Contrastive Loss [1] for the baseline Euclidean setting? Empirically much better results are reported using the Euclidean SupCon loss in SSD [2] and KNN (Sun et. al, 2022) as compared to the cross-entropy loss. \\n\\n\\n2. How is the distortion measured in Table 3?\\n\\n\\n3. How do the learnt representations and hierarchies differ when the proposed method is used without the norm-balancing loss? While this is observed via the OOD detection performance in 349-360, how does it affect the learning intuitively? \\n\\n4. Since the initialization of the hyperbolic prototypes is dependent on another technique, how do the authors see their method generalizing to other models of hyperbolic geometry? \\n\\n\\n5. 
Have the authors visualized the learnt hierarchies from the hyperbolic embeddings to verify if the underlying hierarchical relationships are indeed accurately encoded in the hyperbolic space using the two-step optimization process? \\n\\n---\\n\\n## Suggestions on improving presentation: \\n\\n1. The description in Section 3.2 (154-161) does not mention any details about the initializations of the prototypes, this description can be moved up from the next section to provide more context\\n\\n2. \\u201cdistortion\\u201d is mentioned in the introduction and Section 3.1 several times without any references or description, this should be included \\n\\n3. The Algorithm 1 on page 4 should include an \\u201cOutput\\u201d marker and corresponding notations to denote the expected result from the optimization process \\n\\n4. Some minor typographical fixes - 157 \\u201ccorresponding the the n graph..\\u201d -> \\u201ccorresponding to the ..\\u201d , 183 \\u201cshould have a the same..\\u201d -> \\u201cshould have the same norm\\u201d, 417 \\u201cwe have the use backbone for all methods\\u201d -> \\u201cwe use the same backbone for ..\\u201d, etc. \\n\\n---\\n\\n\\n## References: \\n\\n[1] Supervised Contrastive Learning, Khosla et. al, NeurIPS \\u201820 \\n\\n[2] SSD: A Unified Framework for Self-Supervised Outlier Detection, Sehwag et. al, ICLR \\u201821\\n\\n---\\n\\n## Post Rebuttal \\n\\nI thank the authors for the rebuttal. Having gone through their responses, there are several issues raised in my original review about the completeness and presentation of the results that are still not addressed during the rebuttal, specifically the discrepancy in the in-distribution accuracies and reported results for only a subset of the settings, and details about the choice of experimental setup and its implications. 
Therefore, I would like to maintain my score.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Continued response to Reviewer ZNiu\", \"comment\": \"(contd.)\\n\\n**Mahalanobis distance scoring on tangent space.** It is possible to apply Mahalanobis distance in the tangent space. However, the features after applying the logarithmic map to the tangent space do not satisfy the assumption of Mahalanobis distance, which requires the representation to follow a multivariate Gaussian distribution. For this reason, we exclude this distance comparison in our work.\\n\\n**L2 distance between $d_G$ and $d_\\mathbb{B}$.** To clarify, we cannot take an L2 distance in hyperbolic space between distance measures. Our formulation ensures that the contribution of the distance difference is relative and not dominated by points with larger hyperbolic distances.\\n\\n**Line 172: Riemannian SGD.** We use Riemannian SGD only in Algorithm 1, to learn the label hierarchy in the hyperbolic space. Afterwards, the labels become fixed prototypes. We train a ResNet34 with these fixed targets, which means all learnable parameters are in Euclidean space for that stage, and so we use SGD (line 293-294).\\n\\n**Visualizing learned embeddings.** We generate trees with various structures to learn balanced hyperbolic embeddings and illustrate the benefit of the losses. The visualizations are added to Appendix B.\\n\\n**Line 141-144, quality of hierarchy.** We assume a correct and known hierarchy is available. For ImageNet-100, we use a pruned WordNet hierarchy, and for CIFAR-100, we use the available superclasses hierarchy. 
We do not use any LLM-generated hierarchies, leaving this exploration for future work.\\n\\n**About hierarchical OOD and ablations.** Hierarchical OOD and related metrics are designed for scenarios where an OOD sample overlaps with ID data, as similarly defined in previous works [a], [Betterino et.al. 2020]. For instance, this could represent a new, unseen bird species in a biological dataset. In such cases, it is crucial to achieve both granularity\\u2014classifying the bird as OOD\\u2014and precision\\u2014identifying the closest related ID class.\\n\\nTable 5 presents the standard OOD performance metrics (FPR/AUROC/AUPR) in the hierarchical setting, while Table 6 highlights the metrics that evaluate hierarchical properties as defined in [Betterino et.al. 2020] and [Dengxiong et.al. 2024]. \\n\\nWe have clarified this setup and provided further details in Appendix C.\\n\\n[a] Linderman, et al., Fine-grain Inference on Out-of-Distribution Data with Hierarchical Classification. COLLAs 2023\\n\\n\\n\\n**Questions for better clarity.**\\n- Line 70: \\u201cdistance to the class prototype\\u201d; the prototype can either be the class mean or, in our method, a predetermined prototype reflecting the hierarchy of the label space. \\n- Distortion in Table 3 is measured using the metric from [Sala et.al., 2018] (referenced in Table 7) and now updated in Table 3 accordingly. \\n- Line 199; curvature c=-1, we updated the main text to include curvature in the equations.\\n- ID dataset for Figure 2 is CIFAR-100, now updated in the caption. \\n- Added ablations of curvature, dataset-wise OOD results, additional scoring functions from Table 2, and AUPR and AUROC from Figure 2 to Appendix C.\\n- Rewrote Line 146 (\\u201cequivalent edge distances\\u201d) and Algorithm 1; we updated notations in Algorithm 1 to improve readability. \\n- base in Table 6 refers to a Euclidean ResNet trained on the CIFAR-split. 
\\n- Lines 842-850, we added a brief motivation of losses to the main text and expand on it in Appendix A. Added more details for the toy tree example and updated Figure 6.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to Reviewer ZNiu\", \"comment\": \"We thank the reviewer for their feedback. Below, we have addressed the reviewer's comments regarding differences from existing works and additional analyses. We have also gone over all individual questions. We believe that the review and rebuttal strengthen the paper and we hope to have answered all questions adequately.\\n\\n**Novelty over existing works.** Indeed, [1] and [2] have previously highlighted the potential of hyperbolic or hierarchical embeddings for out-of-distribution detection. We take inspiration from such works and make the following novel contributions:\\n\\n- The proposed balanced hyperbolic embeddings approximate tree distances more accurately than existing optimization approaches and can also serve as a standalone embedding tool.\\n- We introduce generalizations of existing OOD detection scores to their hyperbolic variants and provide in-depth analyses to showcase their effectiveness in OOD detection across datasets.\\n\\nWe have added clarifications in the paper to highlight these key distinctions and novel contributions over existing works.\\n\\n**Changes in section 5.** For better clarity, we added a short introduction in section 5 and updated the headings to be more descriptive.\\n\\n**Motivation for losses and how norm loss affects OOD detection.** We aim to learn an embedding with minimal distortion to preserve the hierarchical information in the labels. This is achieved by jointly optimizing the distortion (Equation 7) and norm loss (Equation 9). Optimizing Equation (7) directly minimizes distortion. The reviewer is right in pointing out that we should try to push all in-distribution samples away from the origin. 
The ID samples are indeed away from the origin when the distance to leaf nodes (corresponding to the classes) from the balanced hyperbolic embeddings is minimized. Consequently, the norm loss in Equation (9) is needed to ensure that all nodes belonging to the same granularity of a hierarchy are equidistant from the origin.\\n\\n**Comparison to hyperbolic methods.** We added Poincare embeddings and HEC for completeness, and didn't expect them to be better in classification performance, as also explained in lines 347-377. The AUROC and AUPR performance of Poincar\\u00e9 embeddings (PE) highlights the strong connection between hyperbolic learning and effective OOD detection. However, a downside of PE is its low classification performance. In contrast, our method achieves strong performance in both classification and OOD detection. We add suggested comparisons to other hyperbolic approaches below.\\n\\n| | FPR@95 | AUROC | AUPR |\\n| --- | --- | --- | --- |\\n| HNN [1] | 64.53 | 73.79 | 60.46 |\\n| PR [4] | 87.83 | 58.27 | 37.73 |\\n| HCL [5] | 62.31 | 75.43 | 59.10 |\\n| Ours | **49.46** | **82.43** | **70.41** |\\n\\n**k = 300 in Table 4.** We choose k=300 to compare to ablations from CIDER [Ming et.al., 2023] and PALM [Lu et.al., 2024] [2]. As suggested, we also show below the results for k=200.\\n\\n| | | k=200 | | | k=300 | |\\n| --- | --- | --- | --- | --- | --- | --- |\\n| | FPR | AUROC | n-AUROC | FPR | AUROC | n-AUROC |\\n| CIDER | 43.98 | 85.52 | 76.63 | 43.24 | 86.18 | 75.43 |\\n| PALM | 39.01 | 87.46 | 79.36 | 38.27 | 87.76 | 78.96 |\\n| Ours | **35.66** | **89.15** | **79.07** | **35.83** | **89.45** | **78.50** |\\n\\n**Eq (11) and comparing Euclidean vs Hyperbolic logits.** Equation (11) ensures that embeddings of images are optimized to be closer to their class prototypes, $P_\\mathbb{B}$. This results in tighter, more compact ID clusters.\\nBoth Euclidean and hyperbolic logits perform equally well in classifying ID samples, as shown in Fig. 1. 
However, hyperbolic logits are particularly advantageous for identifying OOD samples due to the exponential nature of distances in hyperbolic geometry.\\nA key distinction of our method compared to hyperbolic OOD scoring approaches such as [1], which use distance hyperplane boundaries, or [Khrulkov et.al., 2020], which rely on sample norms to distinguish ID from OOD, is that our method differentiates samples based on their proximity to the prototypes. While [1] and [Khrulkov et.al., 2020] might misclassify OOD samples with high norms as ID, our approach leverages more nuanced information for better discrimination.\\n\\n**Line 157-159; Regarding [2] and CPCC loss.** [2] and our method both enforce the hierarchical knowledge of the tree in the feature space, but in different ways. [2] enforces hierarchy during training by using CPCC loss as a regularizer to learn tree distances between class means. In contrast, our method first explicitly embeds the label hierarchy into hyperbolic space. Using the leaf nodes corresponding to classes as prototypes, we then train the model on the dataset. We add the discussion to the related work section.\"}", "{\"title\": \"Clarifications\", \"comment\": \"Thank you for getting back to us. Based on the comments, we want to make a few clarifications that hopefully make our approach easier to understand:\\n\\n**Role of norm loss for OOD.**\\nWe apologize for the confusion: the norm loss is *not* used when training networks and hence does not act directly on ID/OOD samples. The norm loss is only used when constructing the hyperbolic embeddings of the hierarchies. The idea behind the norm loss is to enforce similar norms for nodes at the same level of abstraction. We find that existing approaches ignore the problem of imbalance between subtrees. When training a network, ID samples end up near the boundary of hyperbolic space. During testing, OOD samples tend to gravitate towards the origin. We show this in Figures 4 and 8. 
Without a norm loss, shallow subtrees also gravitate towards the origin, leading to unwanted biases. Hence our improvements in Figures 2, 7 and 10. \\n\\n**Dealing with multiple hierarchical levels.**\\nWe want to clarify that the norm loss is applied within the same level of abstraction, i.e., we group nodes together starting with the leaf nodes as the first group. We will add this detail to the paper.\\n\\n**Novelty.**\\nWe hope that the clarifications regarding the norm loss help to solidify the approach.\\n\\n**Hierarchical generalization experiments.**\\nWe have added the hyperbolic cross-entropy baseline to the main Tables, where our method remains best. We will do the same for the hierarchical experiments.\"}", "{\"title\": \"Summary of rebuttal\", \"comment\": \"We thank all reviewers for their appreciation of our work: novel and valuable (*hwZY, UQQS*), yet simple and widely applicable (*hwZY, UQQS, ZNiu*), with comprehensive experiments and ablations (*rQmy, UQQS*).\\n\\nWe also thank the reviewers for the constructive suggestions. We have revised the main text for better readability (now highlighted in blue) and added additional suggested experiments and visualizations to the appendix. The PDF will be uploaded shortly.\\n\\nWe are running the ablations on curvature (*ZNiu*) and validating the bias towards deeper and wider subtrees (*hwZY*); the results will be added to the appendix.\"}", "{\"comment\": \"## Novelty over existing works\\n\\nAlthough this might be somewhat subjective, I still want to give my own reason for why the novelty of this work is relatively limited. 
Let me copy the authors' response for the novelty statement in the rebuttal and elaborate:\\n\\n*\\u201cThe proposed balanced hyperbolic embeddings approximate tree distances more accurately than existing optimization approaches and can also serve as a standalone embedding tool.\\u201d*\\n\\nAfter you clarified your distortion metrics for evaluation and training, this novelty argument is not very strong. First, your distortion metric is borrowed from [Sala et.al., 2018], which is a recent work under a similar context. [Sala et.al., 2018] proposed this metric under the context of learning hyperbolic embedding, and authors directly use almost the same formulation as their training objectives. Besides, when you say \\u201capproximate tree distances more accurately,\\u201d I do not think this is a fair comparison with other embedding methods. If you directly optimize the distortion metric, as long as the optimizer works well, it is hard to achieve anything better than direct optimization. \\n\\n*\\u201cWe introduce generalizations of existing OOD detection scores to their hyperbolic variants and provide in-depth analyses to showcase its effectiveness in OOD detection across datasets.\\u201d*\\n\\nGeneralizations of existing OOD scores are a very simple extension if you learn embeddings in hyperbolic spaces. However, I am not sure if this amount of extension is enough novelty for ICLR. For the in-depth analysis, I agree authors provide extensive experiments, but the authors mainly focus on describing the numbers in the tables. Since authors include many metrics, datasets, baselines, and settings in the experiment section, it is important to carefully interpret both performance gain and performance loss/outliers for some set of experiments, so researchers can use your work more effectively in the future.\\n\\n## Hierarchy for OOD detection?\\n\\nThe paper claimed that OOD detection will benefit from both hyperbolic embeddings and hierarchical information. 
I don\\u2019t have too many concerns about the hyperbolic embeddings, but for the hierarchical information, first, it relies on the quality of hierarchy as you mentioned in the limitation. However, in practice in a more general OOD setting, such kind of ground truth tree information is usually unavailable, so I cannot verify if your method will generalize well other than CIFAR100, ImageNet, etc. Second, I would imagine, even if you have access to the ground truth hierarchy, it may hurt OOD detection performance when the inD and OOD are too near. If they share some parts of the ground truth hierarchy, introducing tree information can cause confusion in the OOD detection. This problem becomes even more challenging when we do not have access to the ground truth tree for the datasets. So the benefit of using hierarchy for OOD detection, although mentioned in some previous literature, is not very strong either. \\n\\nFurthermore, in the current submission, it is hard to see how much the OOD gain is from hierarchical information and how much is from hyperbolic embeddings. If the Euclidean baseline is just standard ResNet trained with cross-entropy, this baseline seems to be too weak for comparison. It is more reasonable to use the Euclidean version of your distortion loss (by replacing $d_\\\\mathbb{B}$ with Euclidean distances) + norm loss for OOD detection, to prove the advantage of hyperbolic embeddings. And this will also help you show how much gain you get from including hierarchical information by comparing Euclidean hierarchical embedding with Euclidean + cross entropy embedding. If you are worried that Euclidean space cannot embed the tree very well, then another necessary baseline is just training with hyperbolic cross-entropy in Equation 11, and use this to compare with your final method. Some of the results are now included in Figure 2, but many OOD metrics are still missing. 
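As an illustration of the ablation requested here (keeping the distortion objective fixed while swapping the geometry), below is a minimal sketch of a relative-distortion objective with a pluggable distance function, written in the |d/d_G - 1| style of Sala et al. (2018). Function names are illustrative and the paper's Eq. (7) may differ in detail:

```python
import numpy as np

def relative_distortion(X, d_graph, dist_fn):
    """Average relative mismatch |dist_fn(x_i, x_j) / d_G(i, j) - 1|
    over all node pairs. X: (n, d) embeddings; d_graph: (n, n) graph
    (tree) distances; dist_fn: pairwise distance in the chosen geometry."""
    n = len(X)
    total, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            total += abs(dist_fn(X[i], X[j]) / d_graph[i, j] - 1.0)
            pairs += 1
    return total / pairs

# Euclidean variant of the objective, as the reviewer suggests;
# a hyperbolic dist_fn would give the geometry-only counterpart.
euclidean = lambda a, b: float(np.linalg.norm(a - b))
```

Dividing each term by d_G keeps every pair's contribution relative, which is the defense the authors give elsewhere in this thread against the distances operating on mismatched scales.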
\\n\\n\\nIf I have more questions about some details of the paper, I will add them later and ask for explanations if necessary. But personally, these concerns mentioned above are more fundamental weaknesses of this work, so I want to summarize them here and I am open to further discussion. Besides, if the authors feel some mentioned experiments require a huge amount of effort and find it challenging to finish all during the remaining rebuttal time, feel free to point them out.\"}", "{\"summary\": \"This paper proposes hierarchical hyperbolic embedding for out-of-distribution detection. Specifically, it introduces a balanced hyperbolic embedding that maintains a similar distance between any two nodes. The learned hyperbolic embedding demonstrates superior performance compared to existing OOD approaches across various benchmarks and scoring functions, and its effectiveness is validated through numerous ablation studies.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The hierarchical hyperbolic embedding shows promise for OOD detection, which is an interesting idea.\\n\\nThe effectiveness of the method is demonstrated through comparisons with various benchmarks and ablation studies. \\n\\nOverall, the paper is clear.\", \"weaknesses\": \"1) The motivation of \\u201chierarchical\\u201d hyperbolic embedding for OOD detection is somewhat unclear.\\nCould you please clarify the motivation behind using hierarchical relationships for hyperbolic embedding in OOD detection? 
Although it is well-known that hyperbolic embeddings can effectively represent distances in hierarchical graphs, it\\u2019s a little unclear how this specifically benefits OOD detection.\\n\\n2) It would be helpful to include some recent related works on hierarchical hyperbolic embedding [1, 2].\\n\\n[1] Unsupervised Hyperbolic Metric Learning, CVPR, 2021\\n[2] HIER: Metric Learning Beyond Class Labels via Hierarchical Regularization, CVPR, 2023\", \"questions\": \"Related to W1, could you please clarify the motivation behind using hierarchical relationships for hyperbolic embedding in OOD detection? Although it is well-known that hyperbolic embeddings can effectively represent distances in hierarchical graphs, it\\u2019s a little unclear how this specifically benefits OOD detection.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper proposes Balanced Hyperbolic Learning for improved out-of-distribution detection by utilizing the hierarchical label information. It has received constructive reviews, detailed comments, and also mixed ratings. On one hand, the reviewers appreciate the clear structure and organization of the paper, as well as the significant improvements in comparison to Euclidean embeddings in OOD detection. On the other hand, significant concerns have also been raised regarding the technical details and experimental setups that are missing.\\n\\nAuthors and reviewers engaged in extensive discussion during the rebuttal period, but nevertheless, the response was not able to fully address the reviewers' questions. In particular, the use of the norm loss is not well supported in principle. Empirically, reviewers also had questions on the discrepancy in the ID accuracies and reported results for only a subset of the settings. 
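For reference, the norm-balancing term debated throughout this thread can be sketched compactly. The following is one plausible reading rather than the paper's exact Eq. (9): nodes at the same hierarchy level are pushed toward a shared hyperbolic distance from the origin of the Poincare ball with curvature -1, where that distance is 2 artanh of the Euclidean norm. All names are illustrative:

```python
import numpy as np

def hyperbolic_norm(x):
    """Distance from the origin in the Poincare ball (curvature -1)."""
    return 2.0 * np.arctanh(np.linalg.norm(x, axis=-1))

def norm_balance_loss(X, levels):
    """Sum of within-level variances of hyperbolic norms.

    X: (n, d) node embeddings inside the unit ball.
    levels: length-n integer array; equal values mark nodes at the
    same depth of the label hierarchy.
    """
    norms = hyperbolic_norm(X)
    return float(sum(norms[levels == l].var() for l in np.unique(levels)))
```

Driving such a term to zero makes each level concentric, so leaf prototypes of shallow subtrees are no longer pulled toward the origin, which is the imbalance the authors describe.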
\\n\\nThe authors are encouraged to incorporate the reviewers' comments to further strengthen the work when preparing for the next iteration of the paper.\", \"additional_comments_on_reviewer_discussion\": \"The reviews are mixed. Both expert reviews share some common concerns about the paper that have not been properly addressed during the discussion session. The decision was made largely based on the expert reviews.\"}", "{\"title\": \"Response to additional questions from Reviewer ZNiu\", \"comment\": \"We thank the reviewer for their feedback and appreciate the opportunity to clarify further questions. Below, we address each of your concerns, and will add minor requested changes (Fig 2, other OOD metrics) to the final version. We hope to have answered all questions adequately.\\n\\n**Norm loss for OOD detection.** The norm loss ensures that within a hierarchy all nodes belonging to the same level maintain a similar distance from the origin. By balancing the norms across the hierarchy, the norm loss prevents imbalances that could otherwise lead to subtrees having disproportionately lower norms. This is important because, without this loss, smaller subtrees are placed closer to the origin (where OOD samples commonly occur), which leads to less uniform probability distributions for OOD samples and makes it harder to discriminate ID from OOD samples.\\n\\n*Where does the norm loss place ID nodes w.r.t. the origin?* In our training framework, all in-distribution (ID) classes are designated as leaf nodes of the hierarchy. Consequently, the norm loss guarantees that all leaf nodes maintain an equal distance from the origin. Because leaf nodes in a hierarchy are positioned farther from the root node, ID classes are consistently placed near the boundary, leveraging the natural advantage of hyperbolic embeddings. 
This relationship is also illustrated in Appendix B, Figure 7 (right), where the leaf nodes (the final level of the hierarchy) have the highest norms, while ancestor nodes that are necessary to establish a hierarchical relationship have lower norms.\\n\\n*How close are OOD data to the origin?* As illustrated in Appendix C, Figure 8, OOD data consistently reside closer to the origin compared to ID data. \\n\\n\\n\\n**Novelty over existing works.** Our novelty lies in the combination of the two points below: \\n\\n(1) While distortion, a standard graph metric also used in [Sala et.al., 2018], is now mostly used for evaluation only, we do not claim distortion itself as our contribution. Our contribution is Algorithm 1, which combines the distortion and norm losses to learn the hierarchy embeddings in the hyperbolic space.\\n\\n(2) We provide in-depth analysis across multiple settings\\u2014standard OOD and hierarchical OOD\\u2014and embedding spaces\\u2014comparison to hyperbolic embeddings and ablations w.r.t. Euclidean and non-hierarchical hyperbolic embeddings. \\n\\n**Hierarchy for OOD Detection.** As mentioned in our response to reviewer UQQS, our work relies on a predefined hierarchy, which we have now included in the limitations. Regarding OOD detection when ID and OOD data are closely related, we demonstrate that in hierarchical generalizations\\u2014specifically within the CIFAR50/50 split, where ID and OOD classes share common ancestors in the ground truth tree\\u2014our method is beneficial. Our approach not only improves OOD detection performance, as shown in Table 5, but also enhances the identification of the closest ID class or ancestor, as detailed in Table 6. \\n\\n**Disentangling gain from hyperbolic embeddings and hierarchy.** \\nThank you for the great suggestion. We have implemented a hyperbolic cross-entropy loss as an additional baseline to show the gain from using hyperbolic space vs using additional hierarchy information. 
A Euclidean version of the distortion loss might not be feasible in this short time frame, but we plan to add it in the final version for completeness. \\n\\n\\n| | | FPR95 | | | AUROC | |\\n|:---:|:---:|:---:|:---:|:---:|:---:|:---:|\\n| | Euc CE (base) | Hyp CE | Ours | Euc CE (base) | Hyp CE | Ours |\\n| MSP | 58.24 | 58.08 | **49.46** | 77.05 | 78.34 | **82.43** |\\n| TempScaling | 54.29 | 58.08 | **48.61** | 78.18 | 78.34 | **83.02** |\\n| Odin | 60.96 | 62.50 | **49.45** | 76.63 | 76.52 | **82.96** |\\n| Gram | 83.33 | 69.26 | **57.78** | 62.31 | 73.23 | **76.84** |\\n| Energy | 58.47 | 59.81 | **55.41** | 77.65 | 78.22 | **81.74** |\\n| KNN | 47.95 | 55.25 | **44.00** | 83.29 | 78.40 | **85.50** |\\n| DICE | 64.61 | 75.24 | **53.56** | 74.35 | 70.97 | **82.80** |\\n| RankFeat | 73.03 | 72.05 | **52.88** | 68.98 | 69.25 | **77.97** |\\n| ASH | 67.48 | 82.53 | **53.48** | 76.88 | 67.83 | **79.15** |\\n| SHE | 77.07 | 74.47 | **56.05** | 67.09 | 75.60 | **81.91** |\\n| GEN | 54.66 | 58.77 | **47.20** | 79.21 | 78.53 | **83.80** |\\n| NNGuide | 65.44 | 95.88 | **50.01** | 76.37 | 35.05 | **82.75** |\\n| SCALE | 57.65 | 73.17 | **51.33** | 79.68 | 76.20 | **82.05** |\"}", "{\"title\": \"Response to Reviewer rQmy\", \"comment\": \"We thank the reviewer for their positive feedback and clarify our motivation below.\\n\\n**Motivation of hierarchical embeddings for Hyperbolic OOD.** Previous research has demonstrated that hyperbolic spaces are highly effective for OOD detection, as briefly shown in Fig. 1 and further explored in the ablation study in Fig. 2 (non-hierarchical hyperbolic). This establishes a strong foundation for our approach to OOD detection using hyperbolic spaces. Using hierarchical relationships for OOD adds two benefits:\\n\\n1. 
Incorporating hierarchical relationships introduces meaningful structure to the embedding space, where the distances between class prototypes reflect their hierarchical relationships. This makes it easier for OOD detection methods to differentiate between samples from closely related classes and those that are genuinely out-of-distribution, particularly in challenging scenarios where OOD samples resemble in-distribution (ID) classes.\\n2. Learning with hierarchical prototypes enables our method to generalize to both distance-based and logit-based scoring functions.\\n\\n**Adding recent related works.** The paper is updated to include citations to the related works; thanks for pointing them out.\"}", "{\"title\": \"Response to Reviewer hwZY\", \"comment\": \"We thank the reviewer for suggesting related works and experiments, as well as visualizations to enhance the paper. We answer the questions below and correct minor mistakes directly in the paper.\\n\\n**On bias towards deeper and wider subtrees.** The loss function in Equation (7) leads to embeddings for which the hyperbolic distances between the embedded vertices closely resemble the graph distances. Hierarchy imbalance is caused by the uneven distribution of labeled nodes, which can cause uneven norms reflecting this imbalance in works that don't explicitly correct for it [Nickel et.al, 2017, Ganea et.al. 2018]. We generate synthetic hierarchies for verification and compare with the hierarchies generated by previous works. We will add the visualization to Appendix A. \\n\\n**ID performance.** The reported accuracy for our hyperbolic model is comparable to baseline CE models, while offering significant advantages in OOD detection. When training CIFAR-100 under a setting similar to CIDER (75.35), our method achieves nearly identical ID performance (75.24). 
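The distance-to-prototype classification and scoring discussed in this thread can be sketched as follows, assuming the Poincare ball with curvature c = -1 as stated earlier in the discussion. This is an illustrative reading, not the authors' implementation:

```python
import numpy as np

def poincare_distance(x, y, eps=1e-9):
    """Geodesic distance in the Poincare ball with curvature -1."""
    sq = np.sum((x - y) ** 2)
    denom = (1.0 - np.sum(x * x)) * (1.0 - np.sum(y * y)) + eps
    return float(np.arccosh(max(1.0 + 2.0 * sq / denom, 1.0)))

def hyperbolic_logits(z, prototypes):
    """Negative distance to each fixed class prototype acts as a logit,
    so softmax cross-entropy pulls an embedding toward its prototype."""
    return np.array([-poincare_distance(z, p) for p in prototypes])

def ood_score(z, prototypes):
    """Higher = more in-distribution: negative distance to the nearest
    prototype. Embeddings drifting toward the origin sit far from all
    boundary-hugging prototypes and therefore score low."""
    return float(hyperbolic_logits(z, prototypes).max())
```

Because the Poincare distance grows without bound near the boundary, the gap between the nearest prototype and all others is sharper than with Euclidean logits, which matches the advantage the rebuttal attributes to hyperbolic scoring.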
\\n\\n**Possible scale mismatch issue in loss function.** The division by $d_G$ in Equation (7) actually prevents distant pairs from contributing large values. The non-linear growth of $d_\\mathbb{B}$ is with respect to the Euclidean representation that is being used to model hyperbolic space, i.e. the Poincar\\u00e9 ball, which only influences the embedding through possible numerical errors. \\n\\n**Connection to previous papers.** We agree and have added a reference to CIDER in the paper accordingly. In CIDER, OOD samples lie only between ID clusters because the embeddings are normalized onto a hypersphere. In contrast, in our method, OOD samples lie both between ID clusters, and between ID clusters and the origin, as hyperbolic space allows embeddings to exist anywhere within the space. \\n\\n**More details of experiment setup.** The loss function for baseline experiments is cross-entropy loss, and the experimental setup is similar to the OpenOOD benchmark. L250-253 are updated for clarity. Regarding L310-315, the models differ in how logits are computed but the loss is still cross-entropy loss (Eq 11), so training hyperparameters remain the same. The distortion metric is defined in [Sala et.al., 2018], as referenced in Table 7, and Table 3 is now updated accordingly. We update the necessary details in the paper and in Appendix C. \\n\\n**Using SupCon loss instead of CE loss.** Methods using SupCon loss evaluate OOD using only distance-based functions (e.g., Mahalanobis score in SSD+ and KNN score in KNN+). In contrast, our method generalizes to both distance-based and logit-based scoring functions. Therefore, the baseline Euclidean setting uses CE loss to enable comparison across the scoring functions. For completeness, we have now included comparisons to SSD+ and KNN+ (in the context of Table 4) in Appendix C. 
\\n\\n| **Method** | **FPR@95 \\u2193** | **AUROC \\u2191** | **n-AUROC \\u2191** |\\n| --- | --- | --- | --- |\\n| SSD+ | 57.13 | 80.27 | 77.13 |\\n| KNN+ | 54.46 | 79.29 | 77.81 |\\n| CIDER | 43.24 | 86.18 | 75.43 |\\n| PALM | 38.27 | 87.76 | **78.96** |\\n| Ours | **35.83** | **89.45** | 78.50 | \\n\\n\\n**How does norm loss affect the learning?** By adding the norm loss, all ID prototypes in the label hierarchy are positioned at the same distance from the origin in hyperbolic space, even for imbalanced hierarchies. These prototypes are then fixed for the next stage. When training a ResNet-34, this ensures that all ID embeddings are farther from the origin, allowing OOD samples to map closer to the origin or between the prototypes. \\n\\n**Can the method generalize to other hyperbolic models?** We first randomly initialize the model and learn Poincar\\u00e9 embeddings for 100 epochs, after which we continue training with our distortion loss. We have now clarified this in the setup as well. Since a distance function is well-defined for various models of hyperbolic geometry, it is possible to generalize this formulation across different representations of hyperbolic space. Alternatively, prototypes can be learned in the Poincar\\u00e9 ball and mapped isometrically to other models. \\n\\n**Visualizing learnt hierarchies.** Measuring distortion of the graph (e.g., in Tables 3 and 7) shows that the hierarchical distances are preserved in the hyperbolic embeddings. In Appendix B, we also visualize the pairwise distances of each node in the hierarchy to show the hierarchical structure that emerges.\"}", "{\"title\": \"Updated suggested changes and analyses in the PDF.\", \"comment\": \"We again thank the reviewers for their thoughtful feedback, which has helped us further refine the paper. We have incorporated your suggestions and updated the PDF, with all changes highlighted in blue. Below is a summary of the revisions:\\n\\n1. 
**Improved Clarity and Readability**: We refined the main text to enhance its clarity, readability, and completeness. Specifically, we increased the size of Figure 1, clarified equivalent edge distances in line 149, improved the description of Algorithm 1, added a citation for distortion in Table 3, and expanded the discussion on scoring-function comparisons in lines 233–255. Additionally, we ensured consistent decimal formatting across all tables.
2. **Structural Improvements**: To improve structure and presentation, we added a brief overview paragraph before discussing the experiments in Section 5 and revised the headings for the hyperbolic comparisons and hierarchical ablations. We expanded the details of the hierarchical generalization in Section 5, with further elaboration provided in Appendix C. To accommodate these updates, we moved portions of the Related Work section to Appendix D.
3. **Ablations and Analyses**: We included multiple new ablations and analyses in Appendix C to strengthen our findings. These additions include expanded results on ImageNet-100, dataset-wise results on CIFAR-100, and ablations focusing on AUPR, AUROC (Figure 2), and curvature.
4. **Additional Discussions and Visualizations**: Appendix A includes a discussion of biases toward deeper and wider subtrees. Appendix B provides visualizations of the learned hierarchies. Appendix C now includes detailed experimental setups for the Euclidean baseline and analyses of ID/OOD embedding norms. We also added further details on Figure 6 to clarify the motivation for our proposed losses.

We hope these revisions address the feedback and further enhance the quality and clarity of the paper. We welcome any additional questions or suggestions."}", "{"comment": "Thanks for your reply.

**Norm Loss** Again, your mathematical expression of the norm loss cannot guarantee the positions of InD and OOD data in terms of their distances to the origin.
Therefore, I still do not understand why it works; at present it appears to work magically. I stand by the counterexamples provided before: no matter where my vertices are with respect to the origin, as long as all vertices are at the same depth/level, my norm loss will be small. This can contradict the major claimed advantage of hyperbolic embeddings for OOD detection. Even if all InD nodes are leaf nodes, their distances to the root node/level are based on the ground-truth tree, so they can be either large or just 1.

Besides, I don't think all leaf nodes' distances to the root are the same for ImageNet-100, based on Figure 7 in your Appendix. If they are indeed placed equidistant from the origin, some part of your implementation is probably wrong.

**Novelty** Because you claim the combination of the distortion and norm losses as your contribution, while the norm loss's motivation is still questionable, this novelty statement is not very strong in my view. This is why I keep asking clarifying questions about the norm loss.

**Hierarchy for OOD detection** For the near-OOD performance on the CIFAR-OSR split, the same concern about weak baselines applies, and you would need a more comprehensive analysis (several controlled splits sharing the same hierarchy, stronger baselines, more InD datasets, more OOD metrics, etc.) to show that your method also works for near-OOD. I did not expect the authors to address this within the rebuttal period, but it would be a good thing to add."}", "{"summary": "The paper introduces balanced hyperbolic learning for improved OOD detection by leveraging hyperbolic embeddings to represent class hierarchies. The authors propose a method to embed classes as hyperbolic prototypes that capture hierarchical relationships, and seek to balance shallow and wider subtrees in the learned embedding with a distortion loss and a norm-minimizing term.
This method can be easily integrated into existing OOD detection pipelines with minimal modification and works effectively on a wide range of datasets.

---

**Post Rebuttal** I want to first thank the authors for engaging in the discussion period. After the rebuttal discussion, I am more certain that the norm loss does not directly relate to the InD/OOD position. Thus, why the norm loss is helpful for OOD detection tasks is still unclear. Even if I accept the empirical claim that *"without a norm loss, shallow subtrees also gravitate towards the origin, leading to unwanted biases,"* mathematically the norm loss does not directly control the distance of nodes to the origin (which is the major motivation in the introduction), putting one of the main contributions of this paper in doubt. Besides, in the current submission, the empirical evidence for *"without a norm loss, shallow subtrees also gravitate towards the origin, leading to unwanted biases"* seems to be missing. Also, the current paper does not provide a correct definition of what is meant by "levels". This causes complete confusion for the methodology and many analyses provided in the paper, unfortunately. In summary, I do not think this submission is ready for publication. I will keep my original score and raise the confidence score to 5.", "soundness": "2", "presentation": "2", "contribution": "2", "strengths": "The writing of the methodology section is clear. The method is simple and flexible, which means it can be widely applicable to many OOD scores and settings.", "weaknesses": ["Novelty is somewhat limited: “the hyperbolic classifier provides strongly uniform distributions for samples near the origin and strongly peaked distributions for samples near the boundary.” This motivation dates back to at least [1], Fig. 1.
[1] uses the distance to the origin of the hyperbolic disk as an uncertainty metric in Sec. 5.1, which can also be used for OOD detection without leveraging any training labels. Please cite [1] accordingly in the introduction and treat it as an important baseline for comparison. Also, the idea of leveraging hierarchical relationships for OOD detection is old, e.g., [2]. Therefore, Secs. 1 and 2 are mostly from previous literature.", "Writing in the experimental section can be improved. In Section 5, because you have multiple paragraphs, it might be better to include a few summary sentences at the beginning about what experiments/settings you are going to cover. Some paragraph names probably need to be revised: for example, “comparison to hyperbolic embeddings” sounds similar to “comparison to other hyperbolic methods”, which is very confusing while reading.", "Soundness of the core methodology: the motivation for using the exact formulation of the distortion loss, and particularly the norm loss, is questionable. It requires either more explanation or more experimental support. Also, many details below need to be clarified in the experimental section for OOD detection.", "Comparison is lacking with some important baselines for the choice of distortion loss functions and hyperbolic neural networks. See the Questions section for more details.", "[1] Khrulkov, Valentin, et al. "Hyperbolic image embeddings." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.", "[2] Linderman, Randolph, et al. "Fine-grain inference on out-of-distribution data with hierarchical classification." Conference on Lifelong Learning Agents. PMLR, 2023."], "questions": ["Line 70: “distance to the class prototype”: how is the prototype defined on the Poincaré disk? Also, how are Euclidean logits, based on dot products with Euclidean classifiers, and Poincaré logits comparable?
How can one compare logits derived from different hyperbolic methods, especially Eq. 11 and those in Hyperbolic Neural Networks such as [1]? This is important because you rely heavily on logit values for many OOD detection scores, as described in Sec. 3.3. Why do you specifically use Eq. 11 as your classification loss? It would be interesting to explore the impact of the loss function on hyperbolic OOD detection.", "Lines 141-144: How does the quality of the hierarchy affect your method? How do you verify whether LLM-generated hierarchies are reliable? Why can you directly transfer WordNet hierarchies to image datasets, given that some semantic relationships are not reflected in images? For example, “large/small” in the CIFAR-100 hierarchy's coarse labels.", "Line 146: what does “equivalent edge distances” mean?", "Lines 157-159: The distortion loss's motivation is the same as in [2], simply replacing the Euclidean feature distance in [2] with the Poincaré distance; please include a comparison of Eq. 7 and the CPCC loss. Besides, why do you divide by $d_G$ in the loss instead of directly minimizing the $\ell_2$ distance between the two distance metrics?", "Norm loss: What is the motivation for “nodes on the same level in the hierarchy should have the same norm, ensuring a uniform distribution across levels”? How does this affect OOD detection? I would recommend visualizing the learned embeddings (like in [3]) of different types of trees with varying structures to illustrate this argument, and putting it in the main body, as this will add to the soundness of your methodology. The current A.1 is not straightforward enough, as I will discuss later below. Additionally, how can you apply the norm loss to weighted hierarchies?", "Lines 168/171: The formats of i and e don't match.
Also, i is defined twice: once as a subscript of the distance matrix and once as the epoch number.", "Line 172: Why do you use Riemannian SGD if your encoder ResNet is fully Euclidean?", "Line 199: curvature c = -1. Curvature is not introduced in Section 2.2, where you hide all c terms in the operations. Although you set c = -1 by default, it is better to show this term in the equations. More importantly, curvature affects the distance between points on the Poincaré disk, which may have a significant impact on OOD detection. Please consider adding this to the experiment section for comparison.", "Lines 228-229 (“We also exclude … Euclidean”): can you map your Poincaré features to the tangent space with the tangent map, so that the features lie in Euclidean space and you can use a Mahalanobis-distance-based OOD score?", "Line 268: Can you include the details of the “hierarchical out-of-distribution evaluations”, as this is not common in the OOD literature?", "Table 2: 89.1 => 89.10, 74.4 => 74.40; the number of digits should be consistent across tables. Compared to Table 1, why do you only use 5 OOD scores? Can you show the comparison of the base and your method for each OOD dataset for Tables 1 and 2?", "Figure 2: Which in-distribution dataset do you use to plot this figure? What about the comparisons of AUPR and n-AUROC for the same setting?", "Lines 362-365, “Several … options”: since you initialized with [3]'s representation (lines 176-177), it is not surprising that your method is better than [3], because [3] is not designed for OOD detection.", "Lines 369-371, “These values … standard classification”: Again, [3] and HEC are not designed for classification. In my opinion, [3] is a method to visualize hierarchical data with lower distortion of the input hierarchies. This is also the reason why comparing with [1], [4], [5], etc.
is important, and you could try applying the Poincaré multinomial logistic regression layer on top of [1].", "Table 3: How do you compute the distortion and accuracy metrics in Table 3? It seems PE has high AUPR and AUROC values; can you explain this advantage (and the corresponding disadvantage of your method)?", "Table 4: Why do you choose k = 300? In [6], k is 200 for CIFAR-100 and 50 for CIFAR-10.", "Hierarchical Ablations: I am a bit confused about this setting. Are hierarchical OOD datasets really OOD data? In Table 6 you show that you can learn some hierarchical relationships on the OOD data, but is this going to hurt or improve OOD performance if you treat the hierarchical held-out data as the OOD data? Why are the FPR95/AUROC/AUPR metrics missing for this comparison?", "Table 6: What does “base” stand for in Table 6?", "Lines 842-850:", "I strongly recommend the authors move the motivations for the loss design to the main section for better readability.", "Could you elaborate on why ensuring equal distance to the origin for nodes at the same level is beneficial for OOD? In hyperbolic embeddings, OOD samples are closer to the origin, as you said, but how does training in-distribution data with the norm loss affect the location of the OOD distribution? Shouldn't we try to push all in-distribution samples away from the origin?", "Please include details about the “toy tree example” you used to plot Figure 6.", "Text in Figure 6 is too small.", "[1] Ganea, Octavian, Gary Bécigneul, and Thomas Hofmann. "Hyperbolic neural networks." Advances in Neural Information Processing Systems 31 (2018).", "[2] Zeng, Siqi, Remi Tachet des Combes, and Han Zhao. "Learning structured representations by embedding class hierarchy." The Eleventh International Conference on Learning Representations. 2023.", "[3] Nickel, Maximillian, and Douwe Kiela.
\\\"Poincar\\u00e9 embeddings for learning hierarchical representations.\\\" Advances in neural information processing systems 30 (2017).\", \"[4] van Spengler, Max, Erwin Berkhout, and Pascal Mettes. \\\"Poincare resnet.\\\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.\", \"[5] Yue, Yun, et al. \\\"Hyperbolic contrastive learning.\\\" arXiv preprint arXiv:2302.01409 (2023).\", \"[6] Sun, Yiyou, et al. \\\"Out-of-distribution detection with deep nearest neighbors.\\\" International Conference on Machine Learning. PMLR, 2022.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work first identifies that hyperbolic embeddings are stronger at differentiating between in and out of distribution than euclidean methods due to the inherent distance / hierarchy property of the space. The authors identified that OOD points will lie closer to origin and as such have a more uniform distribution to classification prototypes placed at the boundary. Following this observation, the authors present a new distortion based loss function to match embedding targets to a known hierarchy, and balances hierarchy levels to ensure that correct hierarchical depths are preserved. This method is then applied to to a variety of OOD scoring functions to demonstrate wide applicability. The results show that the proposed method outperforms Euclidean and hyperspherical approaches over a variety of benchmarks and OOD scores and visualisation mostly support their hypothesis. 
All implementations and empirical evaluations are described in full detail for replication.", "soundness": "3", "presentation": "4", "contribution": "3", "strengths": ["**Structure and Clarity:**", "The problem statement is very clear, the hypothesis is well reasoned, and the justification is well described.", "Preliminaries are concise yet highly informative; they add little bloat to the work while providing the reader with the necessary understanding of both OOD and hyperbolic learning.", "Results are presented clearly, with descriptive figures, graphs, and accompanying written discussions.", "**Method hypothesis, findings, and rationale:**", "The method is very well motivated by observations and key findings of hyperbolic deep learning, where the properties of hyperbolic distances w.r.t. distance from the origin are leveraged.", "Balancing all learnt embeddings to match the known tree-like hierarchy is a nice addition; while it has its limitations, as addressed below, it should enforce correct hierarchical structure in representation space.", "The proposed method is simple in each of its components, allowing simple and neat incorporation into a variety of existing settings and providing good impact and insight, valuable to the community.", "**Reproducibility:**", "All details are presented for full reproduction in the text, including optimisation, hyperparameterisation, datasets, and architectural settings, with the addition of an algorithm for the method further adding clarification.", "**Experimental results:**", "The performance improvements of the proposed method in comparison to Euclidean approaches are significant across almost all OOD scoring methods, providing clear justification for its future use.", "A variety of experimental settings are evaluated; while more could always be added, those present are enough to provide confidence in the generalisation of the method.", "**Ablations:**", "The presented ablations do a good job of
analysing each proposed component and empirically justifying its value in the proposed system, in addition to being visually very interpretable.", "The addition of comparisons to hyperspherical work appropriately evaluates this work in line with existing and preferred methods, which further provides confidence in the findings."], "weaknesses": ["**Learning a well-defined hierarchy:**", "One assumption you make is that the hyperbolic method does indeed learn a strong hierarchy. However, this work does not demonstrate empirically or theoretically that the hierarchy is in fact learnt. The distortion loss should help enforce this structure; however, empirical analysis would be a great addition.", "Following from the prior point, it is well known that in hyperbolic space embeddings can “collapse” to the boundary of the ball, and hence no hierarchy is learnt. Do you provide evidence that this does not happen, or that the hierarchy is indeed present? While the norm loss should help address this, it would be good to see how this holds, as prior attempts to regularise hyperbolic embeddings via norms result in clipping-like effects. While you show ablation comparisons to clipped embeddings, directly", "**Assumption of a known and correct hierarchy:**", "Less of a weakness and more of a limitation: the method assumes access to a well-defined and known hierarchy. While in the demonstrated case this is known, in most settings this is not the case.
Therefore, the generalisation of this method is somewhat limited.", "As mentioned above, while an understandable and reasonable limitation, this needs to be clarified and addressed explicitly as part of the limitations of the work.", "**Distance of OOD:**", "Figure 3 measures the distance between the embeddings of each distribution; however, to more appropriately support your hypothesis of OOD points being shallower nodes in the tree, a more appropriate analysis would be of the norms of the embeddings from the origin.", "As can be seen from Figure 4, points can achieve a high distance yet lie very deep in the tree, given the increase in hyperbolic pairwise distance w.r.t. the distance of the points from the origin. Therefore, for the most part your hypothesis holds: unknown points lie closer to the origin. However, this is not always the case.", "Furthermore, following from the prior point, do you have any intuition as to why so many of the OOD points take up space close to the boundary? If you assume a relatively uniform distribution of embeddings of the hierarchy during training on ID points, then in Figure 4 the embeddings for OOD are lying in the hierarchy of an ID point.", "**Minor:**", "Figure 1a is informative and descriptive for the problem setting and helps visually demonstrate the properties of hyperbolic space. However, 1b is less informative, and it is not clear whether it is an illustration or real embedding positions. If simply an illustration, it does not add a great deal to the narrative or work. I assume the latter, but would argue that Figure 1a is a clear selling point.", "Following the prior point, it would be good if the figures were slightly larger and the points made more distinguishable."], "questions": "1. How does the method perform if the OOD sample is highly related to the semantic concepts captured in the learnt hierarchy? This method assumes that the OOD sample is semantically very different from the training data.
2.
Does the method generalise well to other curvatures, given that it has been shown that most visual embeddings are not fully hyperbolic, and thus different curvatures may be optimal for different datasets?
- Additional questions to improve clarity for the authors are asked in the weaknesses section.", "flag_for_ethics_review": "['No ethics review needed.']", "rating": "8", "confidence": "4", "code_of_conduct": "Yes"}", "{"comment": "I want to thank the authors for carefully reading my review and addressing each question, and I really appreciate the significant effort the authors made on the revised submission. However, I want to emphasize several key points as to why I will still not give a higher score for the updated submission. The following sections are ordered approximately by importance, from top to bottom.

## Norm Loss for OOD detection?

I am aware of the new visualizations and experiments in the Appendix discussing the distortion loss and norm loss. Thanks for providing these experiments. However, since one of your main contributions is OOD detection, as shown in your title and keywords, these are meant to convince me that the norm loss is helpful for OOD detection; the authors also claim this in lines 886-888.

The only takeaway message I get from the paper (Figure 6 + the motivation of the losses in the Appendix + the Norm Loss section in the main body) is that the norm loss successfully makes vertices at the same depth have the same distance to the origin. This is expected, given the definition of your norm loss and a successful application of the optimizer, and I have no doubts about it.

But in the introduction, your motivating Figure 1, and previous literature, hyperbolic embeddings typically tend to place OOD data around the origin and ID data near the boundary. The introduction of the norm loss can then hurt OOD performance.
For in-distribution nodes at a lower depth (near the root node), with the norm loss, assuming they are at the same depth, they are all equidistant from the origin. But how close are they to the origin? They can be equidistantly close to the origin, overlapping with the OOD data region and hurting OOD performance. Without the norm loss, at the same depth, it can be the case that only a few points are close to the origin while more points are near the boundary; these few outliers then do not have a strong impact on OOD detection. If you want to boost the advantage of hyperbolic embeddings, boundary collapse of InD data seems very beneficial, leaving enough space for OOD data; but, as UQQS mentioned, your norm loss addresses boundary collapse, which contradicts the natural advantage of hyperbolic embeddings for OOD detection.

Therefore, I don't see a convincing or intuitive connection between your norm loss and OOD detection. For example, some very important questions that you need to address are:

* How close are OOD data to the origin? (Also asked by UQQS.)
* Where does the norm loss place the InD nodes w.r.t. the origin, and why?

In Figure 2, if my understanding is correct, the last two columns for each set of experiments are your method without vs. with this norm loss. For AUROC, there is almost no difference between the balanced and unbalanced versions. Given that AUROC is usually a more stable metric than FPR95 and AUPR in OOD detection, in my experience, I again want to ask what the real motivation of the norm loss is. It seems Reviewer hwZY asked this question as well, but unfortunately, the logic in your reply, *“The ID samples are indeed away from the origin when the distance to leaf nodes (corresponding to the classes) from the balanced hyperbolic embeddings is minimized.
Consequently, it means we need norm loss in Equation (9) ensures that all nodes belonging to the same granularity of a hierarchy are equidistant from the origin,”* still confuses me.

**I want to emphasize again that this is a very fundamental problem that the authors have to address. Since you focus on OOD detection, your methodology should be well motivated for this purpose.** Otherwise, it can be hard for future researchers to use, analyze, and extend your work, and it can be difficult to understand or trust your empirical findings.

I am also trying to guess whether the norm loss is related to other major evaluation metrics in your paper. However, I do not think it is related to distortion: since you train on the current distortion metric directly, distortion is worse with additional objective terms. Regarding in-distribution accuracy, I would expect you need some specialized theoretical tools for arguments about the generalization of hyperbolic geometry, which can be quite challenging.

In summary, I would like to see a very convincing argument from the authors for why we need this norm loss."}
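To make the reviewer's central question concrete ("where does the norm loss place the ID nodes w.r.t. the origin?"), here is a small numerical sketch. Since Equation (9) is not reproduced in this thread, the loss below is an assumed illustrative form (variance of hyperbolic norms within each hierarchy level); all function names and the toy layout are hypothetical, not the paper's implementation. The point it illustrates is the one the reviewer raises: such a loss vanishes for same-level nodes placed at any common radius, so by itself it does not determine how far from the origin they sit.

```python
import numpy as np

def hyp_norm(x):
    """Hyperbolic distance from the origin on the Poincaré ball
    (curvature -1): d(x, 0) = 2 * artanh(||x||)."""
    return 2.0 * np.arctanh(np.linalg.norm(x))

def norm_loss(embeddings, levels):
    """Assumed norm loss: variance of hyperbolic norms within each
    hierarchy level, averaged over levels (illustrative form only)."""
    per_level = []
    for lvl in sorted(set(levels)):
        norms = [hyp_norm(e) for e, l in zip(embeddings, levels) if l == lvl]
        per_level.append(np.var(norms))
    return float(np.mean(per_level))

def ring(radius, k=4):
    """k points evenly spaced on a circle of the given Euclidean radius."""
    angles = np.linspace(0.0, 2.0 * np.pi, k, endpoint=False)
    return [radius * np.array([np.cos(a), np.sin(a)]) for a in angles]

# Four same-level nodes near the origin vs. near the boundary: both
# placements make the (assumed) norm loss vanish, even though their
# hyperbolic distances to the origin differ greatly.
inner, outer = ring(0.2), ring(0.9)
```

Under this assumed form, both `inner` and `outer` placements are equally optimal for the norm loss alone, which is consistent with the reviewer's claim that additional terms (or an explicit radius target) would be needed to keep ID nodes away from the origin.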
83iej2ANig
Learnability of Discrete Dynamical Systems under High Classification Noise
[ "Zirou Qiu", "Zakaria Mehrab", "Abhijin Adiga", "Madhav Marathe", "S. S. Ravi", "Daniel Rosenkrantz", "Richard Stearns", "Anil Kumar Vullikanti" ]
Due to the important role of discrete dynamical systems in modeling real-world cascading phenomena on networks, problems for learning such systems have garnered considerable attention in ML. However, existing studies on this topic typically assume that the training data is noise-free, an assumption that is often impractical. In this work, we address this gap by investigating a more realistic and challenging setting: learning discrete dynamical systems from data contaminated with noise. Towards this end, we present efficient noise-tolerant learning algorithms that provide provable performance guarantees under the PAC model, and establish tight bounds on sample complexity. We show that, even in the presence of noise, the proposed learner only needs a small training set to infer a system. Notably, the number of training samples required by the algorithm in the noisy setting is the same (to within a constant factor) as the information-theoretic upper bound in the noise-free scenario. Further, the number of noisy training samples used by the algorithm is only a logarithmic factor higher than the best-known lower bound. Through experimental studies, we evaluate the empirical performance of the algorithms on both synthetic and real-world networks.
[ "Efficient learning under noise", "Dynamical systems", "PAC model", "Sample complexity" ]
Reject
https://openreview.net/pdf?id=83iej2ANig
https://openreview.net/forum?id=83iej2ANig
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yYvIzdTDW3", "lDofw8PNdF", "VkBeZP8tCV", "TcZ3AFtoz1", "Ssccuchltq", "Fa55JTPe7r", "BaA69RA0ln" ], "note_type": [ "official_review", "decision", "meta_review", "official_review", "official_review", "official_comment", "official_review" ], "note_created": [ 1730329850574, 1737523840596, 1734493070631, 1730153941935, 1729901474377, 1732767208887, 1730674111288 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7469/Reviewer_ZdWT" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7469/Area_Chair_HPFf" ], [ "ICLR.cc/2025/Conference/Submission7469/Reviewer_Ky3G" ], [ "ICLR.cc/2025/Conference/Submission7469/Reviewer_xqL6" ], [ "ICLR.cc/2025/Conference/Submission7469/Authors" ], [ "ICLR.cc/2025/Conference/Submission7469/Reviewer_mGsE" ] ], "structured_content_str": [ "{"summary": "This paper studies the learning of discrete dynamical systems under random classification noise. They consider a dynamical system with a known underlying graph structure, binary state values, and unknown discrete threshold functions as interaction functions. The paper tries to answer whether efficient learning is possible under classification noise, and establishes the sample complexity required for $\epsilon$ prediction error with probability $1-\delta$. They analyze three algorithms: V-ERM, VisScore, and VisRange. V-ERM is an element-wise empirical risk minimization algorithm that assigns the threshold value minimizing the error function over the corrupted training data. The sample complexity scales as $\mathcal{O}(n^2 \log(n))$ for this algorithm, which is a factor of $n$ larger than the theoretical upper bound for the noise-free case. In addition, they analyze the VisRange algorithm, an extension of the VisScore algorithm. The authors define the notion of the visiting time of a score to run this algorithm.
For each score range of a vertex, they assign the output based on majority voting in the training data set, and they choose the threshold as the maximum of the left endpoints of the ranges with majority-vote value 0. This algorithm requires $\mathcal{O}(n \log(n))$ samples, which matches the theoretical upper bound for the noise-free case. Besides the complexity w.r.t. the dimension of the system, the sample complexity scales with $\mathcal{O}((1-2\eta)^{-2})$, where $\eta$ is the maximum classification noise among the vertices. The authors claim that V-ERM performs better in practice than VisRange despite its worse sample complexity. While the VisRange algorithm exhibits phase-transition behavior, the loss for the V-ERM algorithm decreases gradually. Last but not least, the authors provide numerical experiments with synthetic and real-life data sets to support the theoretical results. In their experiments, they test the evolution of the loss $l$ and of the number of required samples as functions of the dimension $n$, the density of the graph $d_{avg}$, and the probability of random classification noise $\eta$.", "soundness": "2", "presentation": "3", "contribution": "2", "strengths": ["This paper is the first to study dynamical systems under classification noise, according to the literature review.", "The problem studied is of significant importance. Although the case with a finite hypothesis class and a single interaction function is analyzed, this paper could be a pioneering work in understanding dynamical systems under classification noise.", "The paper is well-structured and easy to follow in terms of the development of the results. Although there are some minor typos, it is grammatically well written.", "Remarks provided throughout the paper address potential questions that may arise while reading.
I particularly liked the placement of the remarks, as they do not interrupt the flow of the paper.", "The implications of the results and their relation to the existing upper and lower bounds are explained well.", "The proofs are easy to follow and appear mathematically sound for the most part."], "weaknesses": ["Many acronyms are used before being written out in full. For example, the algorithm names V-ERM and VisRange should be given in full, and what the names stand for must be explained before the acronyms are used. Similarly, on line 097, CNF is used without any explanation.", "I think V-ERM is no different from empirical risk minimization, where the Hamming distance between the vectors $h(\mathcal{W}_j)$ and $\hat{\mathcal{W}}_j$ is minimized. V-ERM does not provide any additional insight beyond the existing literature. Can you clarify how V-ERM differs from standard empirical risk minimization?", "For line 316, the use of "visiting time" sounds awkward. It implies those are the times at which the score $s$ was visited, not the number of times the score $s$ was visited. I think it should be renamed "visiting frequency" rather than "visiting time", because it measures the number of times the score $s$ was visited.", "There are issues with the mathematical representation of the General Learning Model on line 219. You claim it is multi-class learning with $k$ classes for each vertex. However, the vector $\mathcal{W}_j$ takes values in $\{0, 1\}^n$. It should be $\{0, 1, \dots, k-1\}^n$. The same error occurs in Appendix A.3 as well. Nevertheless, I believe this does not affect the proofs or mathematical results presented in the paper.", "Despite the analysis of the general learning model in Section 3.1, the result of Lemma 3.1 does not depend on $k$. I think you must either point out that Lemma 3.1 is for binary values, or include the bound that depends on $k' = (k-2)/(k-1)$, as provided in the Appendix.
Moreover, you can raise $\\\\bar \\\\eta$ up to $(k-1)/(k)$ in the multi-class case. It might be worth mentioning.\", \"On line 370, you say $n$ is the dominant term, but as $\\\\eta$ approaches $1/2$, the term $\\\\mathcal{O}((1-2\\\\eta)^{-2})$ becomes dominant. I think it is better to say \\\"dependence on the dimension\\\" rather than \\\"dominant term.\\\"\", \"Using the letter $\\\\ell$ for the hypothesis class splits and the loss function value in experiments could be confusing.\", \"In general, the algorithms seem too simple and appear to have been studied before in various types of classification problems. This problem is a parametric classification problem over a finite number of parameters using a loss function. Therefore, the authors need to explain what kind of new mathematical understanding these algorithms bring. Can you provide a detailed comparison with existing methods in parametric classification?\"], \"questions\": [\"Please see the weaknesses above in addition to my questions below.\", \"Remark 2 is unclear in terms of how this dynamical system could be sampled over a trajectory. I think there could be some issues when the system updates are cyclic and the sampling only captures a single cyclic behavior. Let $\\\\mathcal{D}$ be the trajectory of the system, and suppose the system is sampled every other time period. We can generate a system such that the score for a vertex $i$ is 0 during these time periods, but the threshold function could be arbitrarily large, say, the degree of $i$. Then, you will not be able to learn the threshold function of vertex $i$ despite the minimal training error. In other words, any positive threshold value minimizes the training error, but the test error would be high when we randomly sample from trajectory $\\\\mathcal{D}$. Therefore, I believe there should be additional assumptions on the sampling behavior over a trajectory to avoid such ill cases and cyclic behavior. 
How can you address this?\", \"If you are sampling data from a trajectory of the dynamic system, the observed data would be correlated over time. How do you address the correlation over time in your proofs and results? Does it pose an additional challenge? How is it different from samples drawn from independent and identically distributed multiple trajectories of the system?\", \"If my understanding of Hamming distance ERM is correct, why do you claim ERM cannot be done efficiently unless P = NP on line 064? Do you mean ERM with the error loss function only?\", \"In Figure 3 (b), the sample complexity does not increase quadratically for the algorithm V-ERM. The sample complexity is $\\\\mathcal{O}(n^2 \\\\log(n))$ for this algorithm. Rather, it looks like a linear increase. How can you express this behavior?\", \"How do you explain the extension from VisScore to VisRange? What was the theoretical and empirical justification for focusing on this problem?\", \"How can you generalize these results to hypothesis classes of infinite size, e.g., $|\\\\mathcal{H}| = \\\\infty$? For example, you could have an interaction function with a parameter that can take infinitely many possible values.\", \"Similarly, how can these results be applicable to a system where vertex $v$ receives faulty observations from its neighbors? In other words, if we did not have random classification noise but had random observation noise from the neighbors due to partial information sharing or information asymmetry, can we use these algorithms for learning?\", \"How can the result be extended to non-stationary threshold functions (what happens when the threshold changes over time)?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"metareview\": \"This work addresses the challenge of learning discrete dynamical systems from noisy data. 
It introduces efficient noise-tolerant algorithms with provable PAC guarantees and establishes tight sample complexity bounds. The required training samples in the noisy setting match the noise-free upper bound (up to a constant factor) and are only a logarithmic factor higher than the best-known lower bound. Empirical studies on synthetic and real-world networks validate the algorithms' performance.\\n\\nThe reviewers raised several concerns regarding the novelty and technical contribution of the paper. Unfortunately, the authors did not provide any response to these comments.\", \"additional_comments_on_reviewer_discussion\": \"Four reviews were collected for this paper, with three recommending rejection and one recommending acceptance. The AC agrees with the majority vote, supporting a rejection due to the unaddressed critiques raised by the reviewers.\"}", "{\"summary\": \"The paper studies the problem of learning discrete dynamical systems under random classification noise. Here, a graph $G$ together with a collection of boolean functions $\\\\{h_v: v\\\\in G\\\\}$ defines the dynamics of the system. Formally, given $C\\\\in \\\\{0,1\\\\}^{n}$, the next state is generated as $\\\\hat{C}[v]=h_{v}(C)$ for all $v\\\\in G$. The graph is assumed to be known, and the learner gets samples of the form $C,\\\\hat{C}$ where $C\\\\sim D$ for some distribution $D$. The aim of the task is to predict $\\\\hat{C}$ given $C$ with high probability over $D$.\\n\\nIn this paper, they study the problem when the vector $\\\\hat{C}$ is corrupted by random classification noise. That is, for each vertex $v,\\\\hat{C}[v]=h_{v}(C)$ with probability $1-\\\\eta$ and flipped otherwise. They consider $h_v$ to be the class of one-dimensional thresholds, that is, $h_v(C)=1$ iff $\\\\sum_{u\\\\in N(v)}C[u]\\\\geq t_v$ where $t_v$ is a threshold associated with the node $v$ and $N(v)$ is the neighbourhood of $v$ in $G$.
They give two algorithms, V-ERM with sample complexity $O_\\\\eta(n^2\\\\log n/\\\\epsilon^2)$ and VisRange with sample complexity $O_{\\\\eta}(n\\\\log n/\\\\epsilon)$, the latter of which matches the sample complexity of the noiseless case.\\n\\nThey also perform experiments using the algorithms.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The problem of discrete dynamical systems seems to be an interesting one with practical applications. Robustness to noise in measurements is crucial for safe deployment of such algorithms. The authors take a step towards addressing this problem.\", \"weaknesses\": \"I might be mistaken but I don't see why the sample complexity of $O(n\\\\log n/\\\\epsilon(1-2\\\\eta)^2)$ doesn't immediately follow as a corollary of prior work on learning with bounded noise in the supervised learning setting. I will sketch an argument here. I am willing to change my score if the authors highlight why the following argument doesn't work and why their theorem is not immediate.\\n\\nFor a VC class of dimension $V$, [MN07] proved that the sample complexity of PAC learning with bounded noise rate $\\\\eta$ scales with $O(\\\\frac{V}{\\\\epsilon(1-2\\\\eta)}\\\\log(1/\\\\delta))$ (see section 1.3.1). Their algorithm is ERM.\\n\\nNote that since $h_{v}$ is a one-dimensional threshold (since $G$ is known), the VC dimension is $O(1)$. Thus, running ERM node-wise should give a sample complexity of $O(\\\\frac{1}{\\\\epsilon(1-2\\\\eta)}\\\\log(1/\\\\delta))$ and guarantees that with probability $1-\\\\delta$, ERM finds a $\\\\hat{h}$ that agrees with $h$ on a $1-\\\\epsilon$ fraction of inputs. Setting $\\\\epsilon=\\\\epsilon/n$, $\\\\delta=\\\\delta/n$ and using a union bound gives the desired result. The algorithm is again node-wise ERM.\\n\\n[MN07] Pascal Massart and \\u00c9lodie N\\u00e9d\\u00e9lec; Risk bounds for statistical learning; The Annals of Statistics, vol. 34, no. 5, 2006, pp.
2326\\u201366.\", \"questions\": \"1) Refer to the Weaknesses section\\n2) The authors claim that the unknown graph case is hard. I don't see why there isn't a direct reduction to halfspace learning with random classification noise. There are many efficient algorithms for this problem. For the state of the art, see [DKT23]\\n\\n\\n[DKT23] Ilias Diakonikolas, Christos Tzamos, and Daniel Kane; A Strongly Polynomial Algorithm for Approximate Forster Transforms and Its Application to Halfspace Learning; STOC 2023\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The model being considered is the following: an undirected graph $G=(V,E)$ is given where $V$ are vertices ($n$ in number) and $E$ decides the neighbors of a vertex $v\\in V.$ Each vertex $v$ can be in a state $0$ or $1.$ Each node $v$ has a function $f_v$ associated with it which is applied sequentially with $t=1,2,\\ldots.$\\n\\nFor each iteration $t$, $f_v$ updates the value at the vertex according to the following rule: if the sum of states of neighbors of $v$ at time $t$ is greater than a node-dependent threshold $\\tau_v$ (not changing with $t$), then the node value at $t+1$ for $v$ will be set to $1$; otherwise, it is set to $0.$\\n\\nThe problem is to learn the threshold values $\\tau_v$ correctly from training data for each $v$. The training data assumes that the initial state is available and a corrupted version of the transitioned data is available. Non-asymptotic sample complexity bounds are obtained for three algorithms.\\n\\nThe authors move the above dynamic setup to a stationary setup wherein the problem is converted to the following.
The vertices take values in $\\{0,1\\}$ with $C$ representing the configuration according to a distribution $D$ (which is unknown); a map $h^*$ acts on the initial state $C$ to provide the updated state $C'$ using thresholds $\\tau_v$ as described earlier. The updated state $C'$ is measured; the measurement adds uncertainty and flips the state (which is either $0$ or $1$) with probability $\\eta_v.$ The training set consists of $q$ tuples $(W_i,\\hat{W}_i)$ which denote the initial state and the measured transitioned state (which is corrupted). The problem is then to find the optimal mapping $h$ that minimizes the error with respect to $h^*$ and to obtain the sample complexity of training data needed to provide a needed level of accuracy.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The underlying problem being considered is applicable to a diverse set of domains. The mathematical approach taken is overall intuitive and seems correct even though there are concerns.\", \"weaknesses\": \"The main thrust of ideas can be better presented. Some lemmas are not well written or need attention on the mathematics. Please see the questions section of the review to obtain a detailed view of the concerns.\", \"questions\": \"Detailed comments\\n\\n$\\bullet$ The notation for $h^*$ on line 120 is not in accordance with how it is employed in most of the rest of the article. \\n\\n\\n$\\bullet$ The PAC framework is not described in needed detail.\\n\\n$\\bullet$ Line 170 can be phrased better. Possibly the authors want to emphasize that the learner does not know which transitioned state is wrong. The initial state prior to transitioning is known.\\n\\n$\\bullet$ In Remark 2, the authors try to connect the original problem, which is dynamic, and the \"stationary\" problem being analyzed.
The relationship of the dynamical updates of the original problem and the stationary problem that assumes a distribution of initial states being generated remains unclear and unconvincing; is it possible that the dynamic setup cannot be described by a good distribution, and thus data collected at different phases of the dynamic evolution will lead to different results?\\n\\n$\\bullet$ In Remark 3, the authors provide rationale on why assuming the measurement noise flips the bit with probability less than $0.5$ leads to no loss of generality. It seems like knowledge of whether the probability of flipping the bit is greater than 1/2 or less than 1/2 is needed. The other part of the remark, on the difficulty of unravelling the structure of the graph, needs to be better qualified. There is a large literature on estimating topology of agents interacting with each other via quite general relationships. This part of the remark overreaches considerably.\\n\\n$\\bullet$ In line 225, the phrase number of labels is used. There is no explanation of what labels mean. From a later part, it seems like the number of labels is two, either zero or one. It is advisable to remove the use of labels and related development (for example in the Proof of Lemma 3.1) and set it to two.\\n\\n$\\bullet$ On element-wise ERM, the authors use the empirical loss with respect to node $i$ of any hypothesis as \\n\\n$$\\min_{h}\\sum_{j=1}^q \\mathbb{1} \\left(h(W_j)[i]\\neq \\hat{W}_j[i]\\right)$$\\n\\nwhere the tuples $(W_j,\\hat{W}_j)$ are provided to the learner. The above optimization will lead to a solution, say $h^{opt,i}$, for the $i^{th}$ node. The authors seem to make an assumption that $h^{opt,i}=h^{opt,j}.$ Though the authors state as much, this assumption is very restrictive.\\n\\n$\\bullet$ Lemma 3.1 is quite confusing.
The authors introduce partitions with respect to each node $i$ which are equivalence classes of hypotheses that result in the same empirical loss. Thus, the partition depends on the number $q$ of the available training data and is thus dictated by the probability distributions $D$ and $\\eta_v.$ Thus the cardinality of the maximum element of the collection of partitions $t_{max}$ should be a random variable (dictated by the distribution on the training size, $q$, $D$ and $\\eta_v.$) Now the order relation of $q$ involves $t_{max}$ which is itself dependent on $q$, $D$ and $\\eta_v.$ This seems like a circular dependency. The authors need to indicate clearly what the relationship implies.\\n\\n$\\bullet$ The V-ERM algorithm generates a hypothesis by first setting $\\tau^{opt}_i$, the threshold for node $i$ that minimizes the empirical loss for node $i$, and then forming the hypothesis by using $\\tau^{opt}_i$ for node $i$ for $i=1,\\ldots,n.$ The statement of Theorem 3.2 does not suffer from the ambiguity of Lemma 3.1 pointed out earlier; however, the proof relies on Lemma 3.1. There might be a way to derive Theorem 3.2 directly with the concrete construction of the partition. \\n\\n$\\bullet$ For the VisScore and VisRange algorithms, the authors propose to use the training data of configurations and the transitioned measured configurations to evaluate the frequency of a score of a node, which is the number of neighbor states that possess the value $1$, over the $q$ training samples. For each of these scores, the frequency of the transitioned configuration of the node as measured is also tracked. The statistics of the score and the transitioned value are leveraged to obtain estimates on the threshold to be used for setting the state to one. The analysis tracks scores that have high frequency for a node and the associated frequency of the transitioned state being $0$ or $1$.
Here, standard techniques are employed: estimates of the probability of scores, and of the error introduced by the measurement (considered to have a probability less than $1/2$), improve with training size $q$, arriving at the sample complexity result.\\n\\n$\\bullet$ Simulation results are presented that show V-ERM exhibiting gradual improvement whereas the VisScore and VisRange algorithms show phase-transition-like behavior. More analysis with respect to the nature of the graph would provide more insights. The authors can clarify how the simulations were carried out: whether the dynamic version was simulated or the distribution of configurations $D$ was used in some manner. If the dynamic version was employed, then it would be interesting to understand which kinds of distributions arise.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"We thank all reviewers for their valuable feedback\", \"comment\": \"Dear reviewers and ACs,\\n\\nWe acknowledge that significant revisions are needed at this stage. We are currently preparing a revised version of the paper that incorporates all the feedback provided by the reviewers.\\n\\nThank you for your valuable feedback! Once the revised version is completed, we will provide detailed responses to the questions raised by each reviewer.\\n\\nBest,\\n\\nAuthors\"}", "{\"summary\": \"The authors study the problem of learning discrete dynamical systems from noisy data. They introduce two algorithms, V-ERM and VisRange, which are noise-tolerant and achieve efficient learning guarantees under PAC bounds. The paper provides sample complexity bounds and demonstrates that the number of samples needed in the noisy scenario remains of the same order as in noise-free settings.
Experimental results on synthetic and real-world networks further validate the algorithms' effectiveness, revealing performance that favors V-ERM in practical applications and VisRange for theoretical bounds.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"**Clarity of exposition:** The paper is well-written and systematically introduces the problem setting, contributions, and theoretical derivations. Definitions and assumptions are clearly stated.\\n\\n**Intuitive and well-discussed results:** The authors provide sound theoretical results with rigorous sample complexity bounds, effectively extending previous works on learning discrete dynamical systems to the noisy setting.\\n\\n**Theoretical contribution:** Through empirical evaluations, the authors demonstrate the practical relevance of V-ERM in noisy environments and the sharp theoretical guarantees of VisRange, offering both a practical and a theoretically sound solution.\", \"weaknesses\": \"1) In Section 2.2, the authors assume that the underlying graph structure is fully known, which simplifies the learning task. Additional discussion on the practical implications would be necessary, especially regarding the scenarios where graph information may only be partially known.\\n\\n2) The assumption on the noise rate may be too restrictive. The authors could expand more on the implications of $\\\\eta_v = 1/2$.\", \"questions\": \"How do the guarantees of VisRange vary with increased graph density? It is unclear how the performance scales with denser graphs, as higher connectivity could potentially increase noise propagation. Could the authors give more intuitions on that? Could the authors provide experimental results on how VisRange performs with varying graph densities?\\n\\n**Minor**: Repeated ``there\" in line 1103.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
82p8VHRsaK
Language Models are Advanced Anonymizers
[ "Robin Staab", "Mark Vero", "Mislav Balunovic", "Martin Vechev" ]
Recent privacy research on large language models (LLMs) has shown that they achieve near-human-level performance at inferring personal data from online texts. With ever-increasing model capabilities, existing text anonymization methods are currently lacking behind regulatory requirements and adversarial threats. In this work, we take two steps to bridge this gap: First, we present a new setting for evaluating anonymization in the face of adversarial LLM inferences, allowing for a natural measurement of anonymization performance while remedying some of the shortcomings of previous metrics. Then, within this setting, we develop a novel LLM-based adversarial anonymization framework leveraging the strong inferential capabilities of LLMs to inform our anonymization procedure. We conduct a comprehensive experimental evaluation of adversarial anonymization across 13 LLMs on real-world and synthetic online texts, comparing it against multiple baselines and industry-grade anonymizers. Our evaluation shows that adversarial anonymization outperforms current commercial anonymizers both in terms of the resulting utility and privacy. We support our findings with a human study (n=50) highlighting a strong and consistent human preference for LLM-anonymized texts.
[ "privacy", "anonymization", "large language models" ]
Accept (Poster)
https://openreview.net/pdf?id=82p8VHRsaK
https://openreview.net/forum?id=82p8VHRsaK
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wghJ4mfSUN", "wLBohftRPW", "rCwOPOZJBn", "onXa8xdXOG", "mXDzEWOkgf", "k0Fas1TIJS", "edMUgLQnjR", "d9DsMtsBs6", "b5fEEQ2Fu5", "b5O2jauYhS", "aiqTsJZMNN", "ZXjR4AffGf", "ZQzrQCRvIJ", "V0SpawPo8C", "SZs81TMa6S", "NhZsIfIM6n", "NFxsKo47dt", "LvdphvcoeY", "HfBC6bT1qY", "GgSotwUYtS", "9Zy4qWpbDV", "8wZcCeZJw3", "7BgkHKKkgB" ], "note_type": [ "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1733209974240, 1737524287422, 1732277676518, 1732277473642, 1732277709960, 1732277535716, 1733217776560, 1732564492714, 1732554007783, 1730723979166, 1732631578558, 1730501182828, 1732277426524, 1732277369984, 1732631450203, 1734828424987, 1732277643927, 1732277560147, 1730070546912, 1732277486239, 1732277598488, 1732277729197, 1732631523575 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13884/Reviewer_CnVX" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission13884/Authors" ], [ "ICLR.cc/2025/Conference/Submission13884/Authors" ], [ "ICLR.cc/2025/Conference/Submission13884/Authors" ], [ "ICLR.cc/2025/Conference/Submission13884/Authors" ], [ "ICLR.cc/2025/Conference/Submission13884/Authors" ], [ "ICLR.cc/2025/Conference/Submission13884/Reviewer_18jd" ], [ "ICLR.cc/2025/Conference/Submission13884/Reviewer_EPkh" ], [ "ICLR.cc/2025/Conference/Submission13884/Reviewer_CnVX" ], [ "ICLR.cc/2025/Conference/Submission13884/Authors" ], [ "ICLR.cc/2025/Conference/Submission13884/Reviewer_EPkh" ], [ "ICLR.cc/2025/Conference/Submission13884/Authors" ], [ "ICLR.cc/2025/Conference/Submission13884/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission13884/Authors" ], [ "ICLR.cc/2025/Conference/Submission13884/Area_Chair_QLQe" ], [ "ICLR.cc/2025/Conference/Submission13884/Authors" ], [ "ICLR.cc/2025/Conference/Submission13884/Authors" ], [ "ICLR.cc/2025/Conference/Submission13884/Reviewer_18jd" ], [ "ICLR.cc/2025/Conference/Submission13884/Authors" ], [ "ICLR.cc/2025/Conference/Submission13884/Authors" ], [ "ICLR.cc/2025/Conference/Submission13884/Authors" ], [ "ICLR.cc/2025/Conference/Submission13884/Authors" ] ], "structured_content_str": [ "{\"title\": \"Rebuttal Acknowledgement\", \"comment\": \"I appreciate the authors' rebuttal, which has sufficiently addressed my concerns. I have increased my score as a result. After reading all the other discussion, I note that the primary criticism centers on the method's lack of novelty. While it is less satisfying that this work does not provide theoretical guarantees or propose a novel method to protect privacy, it presents a fresh and more practical perspective on privacy that is backed up with extensive experiments.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"**Q6: Can you provide common failure cases for adversarial anonymization?**\\n\\nAbsolutely. Prompted by the reviewer's question, we have manually checked all cases in the PersonalReddit dataset where the adversary was able to make a certain prediction ($\\\\geq 4$) after the third iteration of GPT-4-AA. Unlike the cases with low certainty alluded to above, we consider these actual failure cases. Interestingly, we find that these failures (with the exception of a single case) are restricted to only three attributes: Education (5 cases), Relationship Status (7 cases), and Sex (10 cases). 
For all of these cases, we find that the core message of the text is closely related to the respective attribute, e.g., (for Sex) the personal experience of bearing a child or using gender-specific contraception (for Relationship Status) almost exclusively recent stories about experiences with dating (apps) that indicate their current relationship status as single and (for Education) mostly very college specific topics such as struggle with course load/selection. In all of the cases, a significant part of the utility of the text is given by the exposure of the personal attribute. While, e.g., more concrete references to universities have been removed from all texts, the overall nature of the text, which is about the life of a college student, was retained. In these cases, while adversarial anonymization provides some level of protection and certainly awareness for the user, the communication of the private attribute is core to the utility of the text, making full anonymization impossible without sacrificing almost all utility.\\n\\n**Q7: Have you considered possible adaptations for other privacy requirements and linguistic patterns?**\\n\\nThis is a very interesting avenue. For now, we have focussed our setup primarily on English and the attributes defined in existing prominent privacy regulations. With this in mind, the framework is straightforward enough that we have seen it work directly in other languages (e.g., our dataset contains some comments (partially) in Spanish, French, and German). Especially with the development of more language-specific LLMs (as well as the growing multilingual capabilities of frontier models), adversarial anonymization is very likely to produce similarly strong results here. With respect to varying privacy settings, this becomes even more interesting. We already observed a stronger degree of application flexibility in AA than in traditional text anonymizers (as well as strong results across several domains). 
Further, there is actually even one work currently under review at ICLR that uses AA as a baseline for a linkability privacy setting. If the reviewer has particular settings in mind here, we would be very excited to hear about them!\\n\\n**References**\\\\\\n[1] Staab, Robin, et al. \\\"Beyond memorization: Violating privacy via inference with large language models.\\\" ICLR 2024.\\\\\\n[2] Jin, Di, et al. \\\"What disease does this patient have? A large-scale open domain question answering dataset from medical exams.\\\" Applied Sciences 11.14 (2021): 6421.\\\\\\n[3] DOL, 2023. URL https://www.dol.gov/general/ppii. \\\\\\n[4] European Union. (n.d.). What personal data is considered sensitive? European Commission. https://commission.europa.eu/law/law-topic/data-protection/reform/rules-business-and-organisations/legal-grounds-processing-data/sensitive-data/what-personal-data-considered-sensitive_en\"}", "{\"comment\": \"We thank the reviewer for their comprehensive feedback. We respond to the raised concerns below, numbered Q1-Q7. We have also uploaded a revised version of the paper, with updates highlighted in violet. We are happy to continue the discussion further if the reviewer has additional questions.\\n\\n**Q1: Does the remaining 41% accuracy after anonymization indicate failures of the anonymization?**\\n\\nWe thank the reviewer for raising this important point. To clarify, no, it does not indicate the failure of anonymization; it is instead an artifact of highly uncertain predictions in later anonymization stages. First, we note that certain discrete features enable even a random adversary to achieve high-looking scores, e.g., random accuracy on sex would be ~50%. Further, to elaborate on the main reason behind the remaining accuracy, note that in the reported joint adversarial accuracy of 41%, we count any prediction made by the adversary for any certainty level (1-5).
In particular, this also accounts for cases when there was no clear evidence or basis for inference in the text and the model relied purely on inherent biases (i.e., certainty 1 and 2 - we provide qualitative examples in the newly added App. D.4).\\n\\nCrucially, if we only account for predictions made with certainty $3$ or higher (following [1]), the resulting adversarial accuracy drops down to only $7.7\\\\%$ (for $\\\\geq 2$ we get $19\\\\%$). Notably, as shown in Fig. 4, this even furthers the advantage adversarial anonymization has over traditional techniques, as it not only leads to lower accuracy in guessing but also to a much lower certainty in the guesses.\\n\\nPrompted by the reviewer's question, we have added an additional discussion on this to the updated manuscript in App. G. We are happy to expand on this in case the reviewer has further recommendations.\\n\\n**Q2: Why did you evaluate particularly on the online domain setting? How does adversarial anonymization transfer to other settings?**\\n\\nWe primarily focused on the online domain as (1) it is a setting where real-world risks were shown in prior works [1,2] and (2) it is a setting which is uniquely challenging for existing methods in anonymization\\u2014to an extent where they are borderline unusable (and unpreferred, as shown in our human evaluation). In that sense, we believe that providing a solution that works well here is a valuable contribution.\\n\\nPrompted by the reviewer's question, we extended our evaluation in two ways (presenting new results in App. H): First, we also evaluate on the MedQA dataset [2], which has a much more rigid/structured text setting, with a clearly defined downstream utility metric via multiple-choice accuracy.
Second, we compute embeddings over all anonymized (and original) texts in PersonalReddit, allowing us to quantify their similarity independent of potential downstream tasks.\\n\\nOn MedQA, using GPT-4o as a downstream classifier, we achieve $85.4\\\\%$ baseline accuracy. After applying a single round of adversarial anonymization, we reduce the number of adversarial age, location, and place-of-birth predictions by $>50\\\\%$ while also showing strong results on other attributes like Sex ($>25\\\\%$) and occupation, while still maintaining a downstream accuracy of $81.4\\\\%$ (we expect some drop, as in some cases this information can be quite relevant for predictions). This makes it competitive in utility with Azure ($81.5\\\\%$), which works better on such reports than on free-form data (e.g., almost every text starts with \\u201cA XX-years old man/woman/baby has been \\u2026\\u201d), while slightly outperforming it on privacy. We present a full overview in App. H.1.\\nAs a proxy for the retention of downstream utility on free-form text, we further also compute embeddings using the \\\\texttt{text-embedding-3-large} model by OpenAI. As these embeddings are usable for all sorts of potential downstream tasks, they constitute a strong proxy for how well we might perform on arbitrary downstream tasks and quantify how close the anonymized text is to the original. We report cosine similarity between embedding vectors (1 being a perfect match). We find that, e.g., on Llama-3.1-70B-AA, we have a median cosine similarity of $0.93$ after one round ($0.88$ after 2 and $0.84$ after 3). This is in stark contrast to the median of $0.24$ between $1000$ selected random comments in PersonalReddit and re-affirms that adversarial anonymization maintains a high level of utility.\\nWe provide more details and an additional discussion on the above experiments in the newly added App. H.\"}", "{\"comment\": \"**Q4: Can you relate adversarial anonymization to DP?
In which cases could there be joint applications?**\\n\\nWe agree with the reviewer that DP's privacy guarantees are the gold standard in many settings. However, in practice, these mechanisms are often very difficult to apply to text (for instance, as in [6], where \\u201cWhen did Tesla move to New York City?\\u201d becomes \\u201cWave did Tesla It way Dru Tully breaking?\\u201d), especially when we want to keep the general utility and meaning of the text, such as in the free-form online settings considered in our work. This problem is exacerbated in the setting we are targeting, where people are individually contributing to a \"database\" and have to resort to local DP approaches. Further, potential future interactions of a user are unbounded, inhibiting rigorous and finite DP guarantees that would also hold in the future. Finally, DP fundamentally requires randomness, which would mean that DP-based anonymization would not allow the user to interfere with the anonymized outputs. Our previous example illustrates well why this is a problem; imagine if a user wants to ask \\u201cWhen did Tesla move to New York City?\\u201d in a forum but once they hit the \\u201cpost\\u201d button, the posted text appears as \\u201cWave did Tesla It way Dru Tully breaking?\\u201d, without an option to edit. In contrast, our method enables more practical fine-grained interactions with the user at each round of anonymization.\\n\\nNonetheless, we believe that adversarial anonymization and DP can actually complement each other as they target different notions of privacy - in particular, we can imagine a setting in which the collected text is first anonymized in order to protect individual attributes at the time of collection and, later, model training via DP-SGD provides privacy guarantees on that particular model instance itself.
If the reviewer is interested in a more comprehensive overview of text anonymization and DP methods, we can recommend [7] as a great starting resource.\\n\\n**References**\\\\\\n[1] Staab, Robin, et al. \\\"Beyond memorization: Violating privacy via inference with large language models.\\\" ICLR 2024.\\\\\\n[2] Thomas Brewster. Chatgpt has been turned into a social media surveillance assistant, Nov 2023.\\\\\\n[3] Jin, Di, et al. \\\"What disease does this patient have? a large-scale open domain question answering dataset from medical exams.\\\" Applied Sciences 11.14 (2021): 6421.\\\\\\n[4] DOL, 2023. URL https://www.dol.gov/general/ppii. \\\\\\n[5] European Union. (n.d.). What personal data is considered sensitive?. European Commission. https://commission.europa.eu/law/law-topic/data-protection/reform/rules-business-and-organisations/legal-grounds-processing-data/sensitive-data/what-personal-data-considered-sensitive_en. \\\\\\n[6] Yue, Xiang, et al. \\\"Differential privacy for text analytics via natural text sanitization.\\\" arXiv preprint arXiv:2106.01221 (2021).\\\\\\n[7] Lison, Pierre, et al. \\\"Anonymisation models for text data: State of the art, challenges and future directions.\\\" Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). 2021.\"}", "{\"comment\": \"**Q5: Can one potentially distill these capabilities into smaller models?**\\n\\nWe thank the reviewer for this suggestion. Based on this, we have fine-tuned a 4-bit version of Llama-3.1-8B (requiring only 4GB of memory) as a new inference model (keeping the original 8-bit Llama-3.1-8B as a repair model). In particular, we only used synthetic demonstrations from Llama3.1-70B on SynthPAI and our synthetic samples and evaluated the newly trained model on the separate full real-world dataset.
We provide a full overview over the setting and the results in the newly added App. I.\\nAs we show in Fig. 20, this actually leads to a significant improvement in the utility-privacy tradeoff, with the resulting Mixed-AA model achieving results almost as strong as the much larger Llama-3.1-70B. Interestingly, we observed that demonstrations from Llama3.1-70B resulted in a better finetuned model than demonstrations from GPT-4. We speculate that this is due to the similar alignment of the Llama3.1 models. Overall, we believe these results are very promising, and more powerful approaches for model distillation could result in edge-device-runnable models that achieve very strong utility-privacy tradeoffs. That is a very promising direction for future work, especially in order to enable lower-cost anonymization.\\n\\n**References**\\\\\\n[1] Staab, Robin, et al. \\\"Beyond memorization: Violating privacy via inference with large language models.\\\" ICLR 2024.\\\\\\n[2] Thomas Brewster. Chatgpt has been turned into a social media surveillance assistant, Nov 2023.\\\\\\n[3] Pil\\u00e1n, I., et al. 2022, 12. \\u201cThe Text Anonymization Benchmark (TAB): A Dedicated Corpus and Evaluation Framework for Text Anonymization.\\u201d\\\\\\n[4] Lison, Pierre, et al. \\\"Anonymisation models for text data: State of the art, challenges and future directions.\\\" Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). 2021\\\\\\n[5] Bubeck, S\\u00e9bastien, et al. \\\"Sparks of artificial general intelligence: Early experiments with gpt-4.\\\" arXiv preprint arXiv:2303.12712 (2023).\"}", "{\"comment\": \"We are glad that our rebuttal addressed the reviewer's concerns, and we thank the reviewer for raising their score and for providing constructive feedback that has led to a better version of the paper.
We are happy to hear that the reviewer shares the view that our work presents a fresh and more practical perspective on privacy, backed up by extensive experiments.\"}", "{\"comment\": \"We thank the authors for the rebuttal. The responses to Q4 and Q5 have effectively addressed my concerns, and I suggest that you include these experiments and analyses in the updated version. However, I still have reservations about the technical contributions and the fact that privacy can only be evaluated empirically due to the lack of theoretical analysis in this paper. This is not because your rebuttal is inadequate; I understand that you believe this is a new application and that you have conducted extensive experiments. Indeed, after reviewing your responses to Q4 and Q5, I feel the paper has improved. Nevertheless, my current impression is that this paper is still borderline work. Since there is no option for a score of 5.5, I will keep my original score.\"}", "{\"title\": \"Response to the authors\", \"comment\": \"The reviewer appreciates the detailed discussion provided by the authors and also the additional empirical studies. The reviewer does maintain their concern about the fit but will definitely consider increasing their score.\"}", "{\"summary\": \"This paper introduces a novel approach to text anonymization in the era of large language models. The authors present two main contributions: (1) a new evaluation framework that leverages LLMs for adversarial inference to measure anonymization effectiveness, and (2) an iterative anonymization pipeline that uses adversarial feedback to guide the text anonymization process. This framework offers an improvement over the traditional span-based formulation, as contextual information leaks information as well. The authors conduct extensive experiments with various models and demonstrate that their approach achieves better privacy-utility tradeoffs compared to traditional span-based anonymization techniques such as Azure Language Service.
In their results, performing the procedure reduces the adversarial inference chance from 87% to 66%, and iterating the procedure with GPT-4 for three rounds further reduces adversarial inference success to ~45% while maintaining higher text utility than baseline methods. They validate their results with human annotation.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Novel approach that leverages LLMs' inference capabilities to measure privacy leakage in a more realistic way than traditional span-based methods.\", \"Comprehensive experimental evaluation across multiple models, attributes, and metrics, with clear ablation studies showing the benefit of the feedback-guided approach.\", \"Strong empirical results showing significant improvements over industry-standard tools like Azure Language Service, with detailed analysis of both privacy protection and utility preservation.\", \"Thoughtful consideration of practical concerns including computational costs, local deployment options, and regulatory compliance.\", \"Clear demonstration of how multiple rounds of anonymization can progressively improve privacy while maintaining readable text.\"], \"weaknesses\": [\"The ~41% remaining adversarial inference success rate after anonymization remains concerning for privacy-critical applications. The paper would benefit from deeper analysis of these failure cases.\", \"Limited domain evaluation, focusing primarily on data directly or indirectly from Reddit. Testing on other domains (medical, legal, etc.) would strengthen generalizability claims. These could also be Reddit comments that mention information from these domains.\", \"The method requires pre-defining attributes to protect, which may miss unexpected privacy leaks.
An automated approach to identifying sensitive attributes could be valuable.\", \"Cost analysis could be more comprehensive - while per-comment costs are reasonable (~$0.035), real-world applications with high volume could face significant expenses. In addition, the estimate is only for one turn, so achieving the same level of privacy protection might be more expensive.\"], \"questions\": [\"Have you explored methods to automatically determine the optimal number of iterations, perhaps based on inference confidence?\", \"How does the system perform when encountering privacy-sensitive attributes not explicitly listed in the input? Could it be extended to automatically identify such attributes?\", \"Can you provide specific examples of common failure cases where privacy leaks persist even after multiple rounds of anonymization? Understanding these patterns could help improve the approach.\", \"Have you considered how this approach might need to be adapted for different domains with varying privacy requirements and linguistic patterns?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}
Beyond memorization: Violating privacy via inference with large language models, ICLR, 2024\\\\\\n[9] Yukhymenko, Hanna, et al. \\\"A Synthetic Dataset for Personal Attribute Inference.\\\" NeurIPS Dataset and Benchmarks, 2024.\\\\\\n[10] Dou, Yao, et al. \\\"Reducing Privacy Risks in Online Self-Disclosures with Language Models.\\\" arXiv preprint arXiv:2311.09538 (2023).\\\\\\n[11] Yao, Yifan, et al. \\\"A survey on large language model (llm) security and privacy: The good, the bad, and the ugly.\\\" High-Confidence Computing (2024): 100211. \\\\\\n[12] Li, Haoran, et al. \\\"Privacy in large language models: Attacks, defenses and future directions.\\\" arXiv preprint arXiv:2310.10383 (2023).\\\\\\n[13] Schneier, Bruce. \\u201cThe Internet Enabled Mass Surveillance. A.I. Will Enable Mass Spying.\\u201d Slate, 4 Dec. 2023, slate.com/technology/2023/12/ai-mass-spying-internet-surveillance.html. \\\\\\n[14] Moody, Glyn. \\u201cChatGPT Is One Year Old: Here\\u2019s AI\\u2019s next Attack on Privacy, and What to Do about It.\\u201d Private Internet Access Blog, 8 Dec. 2023, www.privateinternetaccess.com/blog/chatgpt-is-one-year-old-heres-ais-next-attack-on-privacy-and-what-to-do-about-it/. Accessed 26 Nov. 2024.\\\\\\n[15] Stanley, Jay. \\u201cWill ChatGPT Revolutionize Surveillance? | ACLU.\\u201d American Civil Liberties Union, 19 Apr. 2023, www.aclu.org/news/privacy-technology/will-chatgpt-revolutionize-surveillance. \\\\\\n[16] Brewster, Thomas. \\u201cChatGPT Has Been Turned into a Social Media Surveillance Assistant.\\u201d Forbes, 16 Nov. 2023, www.forbes.com/sites/thomasbrewster/2023/11/16/chatgpt-becomes-a-social-media-spy-assistant/?sh=56b7f0345cf6. \\\\\\n[17] Levinson-Waldman, Rachel, et al. \\u201cSocial Media Surveillance by the U.S. Government.\\u201d Brennan Center for Justice, 7 Jan. 2022, www.brennancenter.org/our-work/research-reports/social-media-surveillance-us-government. \\\\\\n[18] SOCIAL LINKS.
\\u201cSocial Links - OSINT Tools for Investigations.\\u201d Sociallinks.io, sociallinks.io/. \\\\\"}", "{\"summary\": \"In this work, the authors focus on the privacy scenario where online texts can be exploited to infer personal data. The authors utilize adversarial LLM inferences, which are highly performant in extracting personal attributes from unprotected texts, for evaluating anonymization and also use this adversarial model as a \\\"feedback provider\\\" to another LLM whose goal is to anonymize texts. An iterative framework between these two LLMs leads to strong anonymization performance as shown by the authors in a wide range of experiments, outperforming existing anonymizers and also aligning well with human preference.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"The presentation is very clear and the paper flows very well. Text anonymization is an important problem in the realm of privacy, and the approach the authors introduce does improve the existing anonymization tools significantly. The evaluation section has extensive analysis, which is great. The reviewer really enjoyed reading this paper overall.\", \"weaknesses\": \"In my opinion, the paper is very well written and the authors conducted extensive empirical studies to demonstrate the significant improvement of their approach compared to the existing text anonymization tools. I think my main concern is that the scope and complexity of the approach appear quite limited, especially for a conference of this caliber. The approach is based on iterating two LLMs, one anonymizing text and the other trying to infer personal attributes. To me, this is like a cute application of LLMs but perhaps rather better suited for a workshop instead of this conference. In this sense, I am unsure about the fit.\", \"questions\": \"1. Have you considered measuring utility by some downstream applications? E.g.
if the texts are used for some analysis or for some task, how does the performance change from the original unprotected texts to the anonymized texts? Do you think this could also serve as a useful utility metric?\\n\\n2. How can one turn this approach into a more comprehensive privacy-protecting tool? To my understanding, it currently builds on a pre-defined set of attributes: the adversary LLM is trying to infer these attributes while the anonymizer LLM is trying to conceal them. But it'd be hard to list all possible attributes that could lead to deducing personal information, so any comments on scaling this approach would be appreciated.\\n\\n3. Also related to my question above, formal privacy-guaranteeing mechanisms like differential privacy ensure that even the existence of the data cannot be inferred from the analysis by any adversary. Although in this work the authors focus on anonymizing individual text snippets, so that DP may not be directly applicable, it'd still be interesting to find a common scenario where the two approaches can be compared.\", \"minor\": \"AzureLanguageService -> Azure Language Service\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}
In that sense, we believe that providing a solution that works well here is a valuable contribution for potential users as well as the privacy community.\\n\\nIn particular, the inferential privacy setting addresses core issues in prior work in text-based anonymization using span-based tags that required significant human effort to collect and combine [3], overall hindering progress in this area [4].\\nFurther, we do not believe that context size is a significant issue for (1) the primary use-case we are targeting (while online conversations are commonly shorter in length, PersonalReddit contains texts of up to ~1.5 pages (1000 tokens) - not just short individual comments) and (2) the significantly larger context sizes of current models, with many already able to handle hundreds of pages of inputs ($>100k$ input tokens). As such, we do not think this provides a technical barrier to adversarial anonymization. Additionally, it has also previously been shown [5] that, e.g., GPT-4 outperforms existing methods in detecting PII in (longer) Human Rights Court documents [3].
From experience on our datasets, each text was anonymized on the order of seconds per round (fully parallel between texts), which we deem very usable for most use-cases. Additionally, efficient and fast serving of LLMs is an active research and commercial application area, providing solutions with considerable inference time improvements over the methods employed for our evaluation.\"}", "{\"comment\": \"We thank the reviewer for their comprehensive feedback. We respond to the raised concerns below, numbered Q1-Q5. We have also uploaded a revised version of the paper, with updates highlighted in violet. We are happy to continue the discussion further if the reviewer has additional questions.\\n\\n**Q1: Does the simplicity of the final anonymization algorithm decrease the overall merit of the work?**\\n\\nNo, we believe that even though our eventual anonymization method is conceptually straightforward from a purely technical perspective, it makes significant contributions in the field of text anonymization in several ways\\u2014especially so in casual communicative settings, such as online forums, where, as shown in [1], LLMs pose a severe novel threat to anonymity. \\n\\nOn the setting side, we are the first to provide an (inference-based) adversarial view on text anonymization, addressing the key limitation of the prior stance on anonymization; namely that even entity-blanking-based anonymization still leaks private information in the presence of an inferential adversary. This is a critical issue that is fundamental to current anonymization tools and evaluation. As also shown in [1], we demonstrate in our evaluation that on real-world data existing methods are clearly insufficient both against adversaries and in utility.\\n\\nOn a practical side, as reviewers agree, adversarial-feedback anonymization using LLMs provides a natural instantiation of this setting and achieves consistently better privacy protection and utility than *commercial* anonymization tools.
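(For readers wanting to see the shape of this feedback loop, here is a minimal sketch. The two LLM calls are stubbed as plain callables, and the round count, the 1-5 certainty scale, and the stopping rule are illustrative choices, not our exact configuration.)

```python
from typing import Callable, Dict, Tuple

# attribute -> (guess, certainty on an assumed 1-5 scale)
Inference = Dict[str, Tuple[str, int]]

def adversarial_anonymize(
    text: str,
    infer_attributes: Callable[[str], Inference],   # stand-in for the adversary LLM
    rewrite_text: Callable[[str, Inference], str],  # stand-in for the anonymizer LLM
    max_rounds: int = 3,
    certainty_cutoff: int = 2,
) -> str:
    """One adversary inference plus one feedback-guided rewrite per round,
    stopping early once the adversary is no longer confident about anything."""
    for _ in range(max_rounds):
        inferences = infer_attributes(text)
        # Keep only attributes the adversary still infers with high certainty.
        leaking = {a: g for a, g in inferences.items() if g[1] > certainty_cutoff}
        if not leaking:
            break
        text = rewrite_text(text, leaking)
    return text
```

With stubbed callables the loop converges as soon as the adversary reports low certainty; in practice both callables would be prompts against an LLM.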
As such, our strong empirical contributions could have a noticeable impact on the state of anonymization currently in application.\\n\\nFurther, having such an elaborate analysis and extensive experimental evaluation (across 17 models, multiple datasets, and including a human study) sets a strong baseline for any potential follow-up work in this area.\\n\\nAs such, we believe that there is significant merit in (1) defining a practical and free-form setting for text anonymization and (2) providing a method that outperforms current industry-standard anonymization across very extensive evaluations. Such a contribution is particularly timely as developments in LLMs have made such inferences much easier [1], up to the point where they can potentially be misused on a large scale [2], constituting a practical real-world threat.\"}
In this regard, we view our work as significant since it, for the first time, introduces a view on text anonymization that is practically linked to the inferential privacy leakage from the text and proposes an effective method to utilize the strongest available adversary for text sanitization. While we agree (and hope) that there is room for more complex methods and theoretical analyses, we believe that our work can lay the solid empirical and conceptual foundation for these important future studies.\"}", "{\"metareview\": \"Building on prior work from Staab et al. (2023) that showed that LLMs can infer many personal attributes from online posts without needing explicit identifiers, the authors show that LLMs can still infer these attributes even after the posts have been \\\"anonymized\\\" by SOTA, industry-grade anonymizers. They then propose an iterative adversarial scheme to better anonymize these posts (essentially, prompting an LLM to see if it can still infer any personal attribute, and then editing the post accordingly).\\n\\nReviewers thought the paper was well-written, well-executed, and tackling an important problem. The main concern from some reviewers was about fit, given that the paper is largely prompting-based and does not introduce any other methodological or conceptual developments. I am less concerned about this issue because the paper provides an important contribution by showing that SOTA anonymizers still allow for significant inference of personal attributes and that simple iterative methods with an LLM in the loop can improve anonymization. While there are no \\\"novel technical contributions\\\", I think this paper carries a useful message for the community and can serve as a foundation for future efforts to build better privacy safeguards. Thus, I recommend acceptance.
I encourage the authors to take the skeptical reviewer feedback into account in the framing of their paper.\", \"additional_comments_on_reviewer_discussion\": \"Not much discussion besides clarifications; most of the objections are philosophical about what degree of technical contribution is needed for a good paper.\"}", "{\"comment\": \"**Q2: Have you considered measuring the utility via downstream performance? For which settings do you believe adversarial anonymization is useful here?**\\n\\nOne of the key challenges with free-form text in our target online setting is that utility in many cases (particularly for users that write the texts) is defined via the coherence and expression of the text itself. Notably, measuring the utility via some downstream task can actually hide a lot of utility-loss (as we detail further in Q4) that would be important to humans in this setting (such as readability). Nevertheless, if there is a clear downstream target one could also use this to get a practical indication of how much task-specific utility is retained. Prompted by the reviewer's question, we extended our evaluation in two ways: First, we evaluate also on the MedQA dataset [3], which has a much more rigid text setting, with a clearly defined downstream MC-Accuracy metric, and secondly, we compute embeddings over all anonymized (and original) texts in PersonalReddit allowing us to quantify their similarity independent of potential downstream tasks.\\n\\nOn MedQA using GPT-4o as a downstream classifier we achieve $85.4\\\\%$ baseline accuracy. After applying a single round of adversarial anonymization we reduce the number of adversarial age, location, and place-of-birth predictions by $>50\\\\%$ while also showing strong results on other attributes like Sex ($>25\\\\%$) and occupation, while still maintaining a downstream accuracy of $81.4\\\\%$ (we expect some drop as in some cases this information can be quite relevant for predictions).
This makes it competitive in utility with Azure ($81.5\\\\%$), which works better on such reports than on free-form data (e.g., almost every text starts with \\u201cA XX-year-old man/woman/baby has been \\u2026\\u201d), while slightly outperforming it on privacy. We present a full overview in App. H.1.\\n\\nAs a proxy for the retention of downstream utility on free-form text, we further also compute embeddings using the \\\\texttt{text-embedding-3-large} model by OpenAI. As these embeddings are usable for all sorts of potential downstream tasks, they constitute a strong proxy for how well we might perform on arbitrary downstream tasks and quantify how close the anonymized text is to the original. We report cosine similarity between embedding vectors (1 being a perfect match). We find that, e.g., on Llama-3.1-70B-AA, we have a median cosine-similarity of $0.93$ after one round ($0.88$ after 2 and $0.84$ after 3). This is in stark contrast to the median of $0.24$ between $1000$ selected random comments in PersonalReddit and re-affirms that adversarial anonymization maintains a high level of utility.\\n\\nWe provide more details and an additional discussion on the above experiments in the newly added App. H.\\n\\n**Q3: How can this be turned into a more comprehensive tool? Does adversarial anonymization require me to pre-define all attributes that should be removed in the text? Is this feasible?**\\n\\nThe reviewer raises an interesting question. In particular, we find that adversarial anonymization only requires one to define the attributes that they want to protect, not the way they are expressed in text. As such, there could be no unintended privacy leakage in case the user lists everything to the algorithm that they consider \\u201cprivate\\u201d. In fact, these \\u201cattributes\\u201d may be even more complex aspects to protect, e.g., mental health information or descriptions of any specific information to hide.
This is a natural improvement over how it is done in classical anonymizers (Presidio, Azure) that target the direct expression of a clearly scoped attribute in the text (e.g., a direct mention of a location but not clues that would allow a very certain inference). The interpretation of adversarial anonymization is much more aligned with regulatory requirements here (e.g., [4]). However, as pointed out by the reviewer, this still requires us to define the initial set of attributes. In a pragmatic sense, these can quite often be directly informed by existing legislation ([4] and [5]). Note that theoretically, as a first step, the adversarial LLM could be tasked to \\u201cinfer everything and anything\\u201d from the given text, from which the user could then select which information they wish to remove from the snippet. The deeper investigation of the promise of such approaches is, we believe, an interesting avenue for future work.\"}", "{\"comment\": \"**Q3: Does adversarial anonymization require me to pre-define all attributes that should be removed in the text? Is this feasible?**\\n\\nThe reviewer raises an interesting question. In particular, we find that adversarial anonymization only requires one to define the attributes that they want to protect, not the way they are expressed in text. As such, there could be no unintended privacy leakage as long as the user lists for the algorithm everything that they consider \\u201cprivate\\u201d. In fact, these \\u201cattributes\\u201d may be even more complex aspects to protect, e.g., mental health information or descriptions of any specific information to hide.
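(As a small illustration of this point: the attribute list is the only configuration the user supplies, and the adversary is asked to *infer* the attributes rather than to find literal mentions. The prompt wording below is purely illustrative, not the exact prompt used in the paper.)

```python
# User-chosen protected attributes; could equally be broader aspects
# such as "mental health information" (illustrative example values).
PROTECTED_ATTRIBUTES = ["age", "location", "occupation"]

def build_adversary_prompt(text: str, attributes: list[str]) -> str:
    """Assemble an inference prompt over the user's protected-attribute set."""
    attr_list = ", ".join(attributes)
    return (
        f"Given the text below, infer the author's {attr_list}. "
        "For each attribute, give your best guess and a certainty "
        "from 1 (low) to 5 (high), based on any clue in the text, "
        "not only direct mentions.\n\n"
        f"Text: {text}"
    )
```

The anonymizer then only ever sees the attributes the adversary still infers confidently, so no per-attribute span rules need to be written.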
The interpretation of adversarial anonymization is much more aligned with regulatory requirements here [3]. However, as pointed out by the reviewer, this still requires us to define the initial set of attributes. In a pragmatic sense, these can quite often be directly informed by existing legislation ([3] and [4]), which is why we selected this as an (extensive) baseline. Note that theoretically, as a first step, the adversarial LLM could be tasked to \\u201cinfer everything and anything\\u201d from the given text, from which the user could then select which information they wish to remove from the snippet. The deeper investigation of the promise of such approaches is, we believe, an interesting avenue for future work.\"}", "{\"comment\": \"**Q4: Can you elaborate on the cost calculations of adversarial anonymization?**\\n\\nCertainly, and we have also extended App. A.6 to include a more detailed discussion in our updated manuscript. First and foremost, we agree with the reviewer that adversarial anonymization is more costly than, e.g., running regexes over an input string. Our currently presented number estimates this cost more on the worst-case side. In particular, since then, the cost of running the latest GPT-4 has been reduced by 4x for input and 3x for output tokens (even 8x and 6x when running in batched mode). The cost is further reduced if we switch to open models (or, as we show in the newly added App. I, by distilling knowledge into a smaller model), with the cost for Llama-3.1-8B-AA being only $\\\\$0.002$ for the full five rounds. While this cost, of course, is still higher than doing basic traditional NLP, we believe that there are many potential applications (especially for smaller local models) where this increase in cost is justified by the increase in both utility and privacy. In these cases, adversarial anonymization is both a valuable contribution and a (feasible) tool.
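(As a back-of-the-envelope sketch of how the cost scales: assuming one adversary call and one anonymizer call per round, each roughly re-sending the text, the total is linear in rounds and token counts. The two-calls-per-round structure and the prices plugged in below are illustrative assumptions, not the exact numbers from the paper or current API pricing.)

```python
def anonymization_cost(rounds: int, tokens_in: int, tokens_out: int,
                       price_in_per_1k: float, price_out_per_1k: float) -> float:
    """Estimated API cost for one text: two LLM calls (adversary + anonymizer)
    per round, each charged for its input and output tokens."""
    per_call = (tokens_in / 1000) * price_in_per_1k + (tokens_out / 1000) * price_out_per_1k
    return rounds * 2 * per_call

# Illustrative only: 3 rounds over a ~1000-token text at made-up per-1k prices.
estimate = anonymization_cost(rounds=3, tokens_in=1000, tokens_out=1000,
                              price_in_per_1k=0.01, price_out_per_1k=0.03)
```

This makes explicit why cheaper (or local) models shrink the total cost proportionally: the structure of the loop is fixed, and only the per-1k prices change.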
Also note that presumably in the near future small LLMs may be integrated into mobile devices, which could then be used to power an anonymization application based on our method at almost no marginal cost.\\n\\n**Q5: Have you explored methods to automatically determine a cutoff point for the number of iterations?**\\n\\nThe reviewer raises an interesting point. In particular, in our newly added App. G, we include an ablation over the relative accuracy of the adversary based on the certainty that it presents (over all iterations of GPT-4-AA). This plot strongly indicates that when the certainty in a prediction falls to $\\\\leq 2$, there is only a marginal benefit in running additional iterations. Based on our runs on real-world data, we find that this already quite often happens after 1 or 2 rounds (see new Figure 17, $\\\\sim 50\\\\%$ of predictions have certainty $\\\\leq 2$ after a single round), allowing us in many of these cases to stop early (not only resulting in lower costs but also higher utility). Particularly helpful here, in practice, is the fact that we, as the anonymizing party, can often assume knowledge of the actual attribute values. As such, we can actually directly check against the current inference to test whether an attribute is inferable (a generally stronger criterion).\"}", "{\"summary\": \"This paper proposes an LLM-based adversarial anonymization framework to address privacy risks. The authors use a feedback-guided approach where an LLM adversary attempts to infer personal attributes from a given text, and an anonymizer LLM iteratively modifies the text to reduce inference risks. The paper evaluates this method against traditional anonymization techniques and demonstrates superior performance in both preserving utility and privacy across several datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.
The proposed adversarial anonymization framework leverages the strengths of LLMs both as adversaries and anonymizers, showcasing a new application of LLMs in a privacy-preserving context.\\n2. The inclusion of a human study adds value to the evaluation by confirming the practical applicability of the framework and showing a preference for the LLM-anonymized text.\", \"weaknesses\": \"While this might be an interesting application of LLMs to the field of anonymization, the core methodology introduces neither fundamentally new anonymization techniques nor a different way to use LLMs. It merely adapts existing concepts by leveraging the powers of LLMs. Thus, the novelty of the contribution is limited for both the LLM and the privacy communities.\\n\\nOne major limitation is that, in real life, texts to anonymize are normally very long, and the current method would be prohibitively expensive, making it practically infeasible in applications that demand real-time processing. The scalability and applicability of this framework are rather limited due to the enormous number of documents it may need to work with iteratively. Another limitation is that privacy performance remains unpredictable due to heavy dependence on the capabilities of the LLM. This creates a dependency where the consistency of the anonymization outcome cannot be guaranteed and may even differ from model to model or update to update.\", \"questions\": \"1. Could you provide more examples where the framework fails and explain why the LLM is unable to recognize these instances?\\n2. Do you think it\\u2019s feasible to distill this capability into a smaller model? In this way, we can reduce the computational cost.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**Q3: How heavily dependent on LLM capabilities is adversarial anonymization?
Can we end up in a constant race with new models and varying performance?**\\n\\nThe reviewer raises a relevant point. We want to address this in two ways: \\n\\nWith respect to the capabilities, we note that one can make the same argument about existing anonymization methods, in particular, as it has already been shown in our work and in [1, 5] that they are severely insufficient for actually anonymizing free-form text. As such, we are in a state where the adversary is strictly more capable than what current methods are offering. Crucially, overcoming this imbalance, adversarial anonymization offers a way to, *at minimum, achieve parity*. In particular, as we have shown empirically, smaller or finetuned models (see Q4) already achieve very strong tradeoffs - even when compared to the strongest available models.\\n\\nThis ties into our second argument, namely that these inferences are quite consistent across models and seem to transfer between models. Models like Llama3.1-70B and Llama3.1-8B perform so well exactly because they also anonymize the same underlying information as the stronger GPT-4. Additionally, for a given text, there is a finite amount of information that could be anonymized to prevent (reasonable) inferences as required by regulations. With many models already achieving close to human performance in this task, we do not expect an infinite race here; moreover, theoretically, a given snippet of text contains an exhaustible amount of personal information. We agree with the reviewer that there could always be a stronger model in the future; however, the closer we get to the human baseline and especially to the theoretical limit of private information inferable from a text, the less significant these improvements are from a privacy inference perspective. Further, our method even in its current instantiation is already a significant improvement over what span-based anonymization has to offer---providing a much better privacy-utility tradeoff.
In addition, we can imagine adversarial anonymization to be particularly applicable in human-in-the-loop scenarios, where the human may recognize additional information they wish to hide but did not think of in the beginning.\\n\\n**Q4: Can you provide more insight into failure cases of adversarial anonymization?**\\n\\nAbsolutely. Prompted by the reviewer's question, we have manually checked all cases in the PersonalReddit dataset where the adversary was able to make a certain prediction ($\\\\geq 4$) after the third iteration of GPT-4-AA. Unlike the cases with low certainty alluded to above, we consider these actual failure cases. Interestingly, we find that these failures (with the exception of a single case) are restricted to only three attributes: Education (5 cases), Relationship Status (7 cases), and Sex (10 cases). For all of these cases, we find that the core message of the text is closely related to the respective attribute, e.g., (for Sex) the personal experience of bearing a child or using gender-specific contraception; (for Relationship Status) almost exclusively recent stories about experiences with dating (apps) that indicate their current relationship status as single; and (for Education) mostly very college-specific topics such as struggles with course load/selection. In all of the cases, a significant part of the utility of the text is given by the exposure of the personal attribute. While, e.g., more concrete references to universities have been removed from all texts, the overall nature of the text, which is about the life of a college student, was retained. In these cases, while adversarial anonymization provides some level of protection and certainly awareness for the user, the communication of the private attribute is core to the utility of the text, making full anonymization impossible without sacrificing almost all utility.\\nWe have included an additional discussion on this in the updated manuscript in App.
G.\"}", "{\"comment\": \"We thank the reviewer for their comprehensive feedback. We respond to the raised concerns below, numbered Q1-Q4. We have also uploaded a revised version of the paper, with updates highlighted in violet. Further, we thank the reviewer for catching typos, which we have corrected in the manuscript. We are happy to continue the discussion further if the reviewer has additional questions.\\n\\n\\n**Q1: Does the simplicity of the final anonymization algorithm decrease the overall merit of the work?**\\n\\nNo, we believe that even though our eventual anonymization method is conceptually straightforward from a purely technical perspective, it makes significant contributions in the field of text anonymization in several ways\\u2014especially so in casual communicative settings, such as online forums, where, as shown in [1], LLMs pose a severe novel threat to anonymity. \\n\\nOn the setting side, we are the first to provide an (inference-based) adversarial view on text anonymization, addressing the key limitation of the prior stance on anonymization; namely, that even entity-blanking-based anonymization still leaks private information in the presence of an inferential adversary [1]. This is a critical issue that is fundamental to current anonymization tools and evaluation. As also shown in [1], we demonstrate in our evaluation that on real-world data existing methods are clearly insufficient both against adversaries and in utility.\\n\\nOn a practical side, as reviewers agree, adversarial-feedback anonymization using LLMs provides a natural instantiation of this setting and achieves consistently better privacy protection and utility than *commercial* anonymization tools.
As such, our strong empirical contributions could have a noticeable impact on the state of anonymization currently in application.\\n\\nFurther, having such an elaborate analysis and extensive experimental evaluation (across 17 models, multiple datasets, and including a human study) sets a strong baseline for any potential follow-up work in this area.\\n\\nOverall, we believe that there is significant merit in (1) defining a practical and free-form setting for text anonymization and (2) providing a method that outperforms current industry-standard anonymization across very extensive evaluations. Such a contribution is particularly timely as developments in LLMs have made such inferences much easier [1], up to the point where they can be potentially misused on a large scale [2], constituting a practical real-world threat.\"}", "{\"title\": \"General response\", \"comment\": \"We thank the reviewers for their feedback and their thorough evaluation of our work.\\nWe are pleased that reviewers find that our work introduces a more realistic and practical setting for privacy measurement (CnVX), provides a method that shows noticeable empirical improvements over existing solutions (CnVX, EPkh, 18jd), and has an extensive evaluation (CnVX, EPkh) including a human study (18jd).\\nWe are also glad to hear that reviewers found the paper clear and thoughtful with respect to practical considerations (CnVX) as well as enjoyable to read (EPkh).\\n\\nAlongside this rebuttal we have uploaded an updated version of our manuscript (with new content in purple) that contains various additional results and explanations to support our rebuttal.
We responded to each reviewer separately below and are happy to further engage in the discussion in case of follow-up questions.\"}", "{\"comment\": \"We thank the reviewer for their feedback and are happy to hear that we could resolve multiple of their questions---we would also like to note that we uploaded a revision of the paper with the original rebuttal, including all referenced elements of our rebuttals to all reviewers. We appreciate the reviewer's feedback, which helped improve our paper and even pointed to new directions for future work.\\n\\nWhile we agree with the reviewer that a method providing theoretical guarantees would be the silver bullet for text anonymization, we argue that, to our knowledge, over a long line of anonymization research in NLP, **no method** was proposed that could provide **practical privacy guarantees** for the information leakage of free-form text. Notably, the current industry standard even falls significantly behind what can be inferred with a 7B language model\\u2014highlighting that current anonymization does not even provide an empirical baseline related to the actual information leakage from text. For guarantees, we want to refer to a well-regarded paper in text anonymization [6] (similar arguments in [7]). In particular, methods from Privacy-Preserving Data Publishing (PPDP) that come with guarantees for structured databases (e.g., k-anonymity) commonly focus on the release of a full dataset, whereas \\\"[PPDP] solutions for anonymising unstructured text are scarce and mostly theoretical.\\\" Notably, DP methods (the gold standard for ML privacy) are generally considered inapplicable to individual text anonymizations [6].\\n\\nWith this in mind, our work addresses a concrete issue in text anonymization, as pointed out in [6]: \\\"NLP approaches to anonymization suffer from a number of shortcomings.
Most importantly, they are **limited to predefined categories of entities** and **ignore how less conspicuous text elements may also play a role in re-identifying the individual.**\\\" Crucially, it is primarily focused on detecting these predefined entities directly and exactly in a given text (ignoring most inferences). Our adversarial anonymization setting actively addresses this shortcoming by:\\n\\nIntroducing a notion of anonymization directly related to the inferable information from text.\\nProviding an instantiation of an adversary that makes use of such less conspicuous text elements. Note that without such an adversary one has to rely on manually annotated datasets in order to evaluate any form of text-based privacy.\\nShowing in the real world how current defenses fall short against this practical adversary.\\nProviding an extensively evaluated approach for anonymization that achieves higher utility (including a human evaluation which is the best one can do for free-text utility) and higher privacy protection.\\n\\nWe believe there is a lot of scientific value in empirically quantifying and quite exhaustively setting the stage for much-improved text anonymization methods. Notably, we find that the setting we target (automated inferences from online texts) is something that is widely discussed academically [8][9][10][11][12] and in the privacy space [13][14][15] especially as it finds first applications in practice [16][17][18]. This gives practical relevance to the work presented here, which not only improves the setting but also provides first steps to protect users, which was not possible [8] and generally not studied before.\"}" ] }
82VzAtBZGk
Safe Reinforcement Learning in Black-Box Environments via Adaptive Shielding
[ "Daniel Bethell", "Simos Gerasimou", "Radu Calinescu", "Calum Imrie" ]
Empowering safe exploration of reinforcement learning (RL) agents during training is a critical impediment towards deploying RL agents in many real-world scenarios. Training RL agents in unknown, black-box environments poses an even greater safety risk when prior knowledge of the domain/task is unavailable. We introduce ADVICE (Adaptive Shielding with a Contrastive Autoencoder), a novel post-shielding technique that distinguishes safe and unsafe features of state-action pairs during training, thus protecting the RL agent from executing actions that yield potentially hazardous outcomes. Our comprehensive experimental evaluation against state-of-the-art safe RL exploration techniques demonstrates how ADVICE can significantly reduce safety violations during training while maintaining a competitive outcome reward.
[ "Reinforcement Learning", "Safe Reinforcement Learning" ]
Reject
https://openreview.net/pdf?id=82VzAtBZGk
https://openreview.net/forum?id=82VzAtBZGk
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wIj9JXFcGZ", "uzBQ23vmB6", "um3vvoS9E9", "uArxCqyHt0", "tCG184ZmR6", "r28J9iG1WA", "qwNqPxZnAt", "qSH5hAAcfI", "mjHanZ9mib", "i1wKHRxo0B", "Vc4ImIMtOS", "RskAttINsL", "RMWFrArZQp", "Pl5scOEioh", "HpUpTShL2L", "GSDmxWLyvK", "FqJDBeJSjE", "Evqjn6R5AD", "DGfWhd51F0", "BeBkED7UMi", "8N7jmL6Vyr", "5DMzd70Pxa", "1BwJI5LpWS" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732591479362, 1731968201495, 1732716916843, 1733147287558, 1732632262924, 1730734077758, 1732550209623, 1737523582096, 1731976641101, 1731968101076, 1732550229053, 1731976744808, 1732289919751, 1730208994991, 1730351596781, 1734589880818, 1732201310793, 1731968061575, 1733147006185, 1730685967898, 1732550245706, 1732550264072, 1731968248084 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3537/Reviewer_C6Rd" ], [ "ICLR.cc/2025/Conference/Submission3537/Authors" ], [ "ICLR.cc/2025/Conference/Submission3537/Reviewer_DQ3Z" ], [ "ICLR.cc/2025/Conference/Submission3537/Authors" ], [ "ICLR.cc/2025/Conference/Submission3537/Reviewer_Kt6J" ], [ "ICLR.cc/2025/Conference/Submission3537/Reviewer_Kt6J" ], [ "ICLR.cc/2025/Conference/Submission3537/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3537/Authors" ], [ "ICLR.cc/2025/Conference/Submission3537/Authors" ], [ "ICLR.cc/2025/Conference/Submission3537/Authors" ], [ "ICLR.cc/2025/Conference/Submission3537/Authors" ], [ "ICLR.cc/2025/Conference/Submission3537/Authors" ], [ "ICLR.cc/2025/Conference/Submission3537/Reviewer_Ff6i" ], [ 
"ICLR.cc/2025/Conference/Submission3537/Reviewer_C6Rd" ], [ "ICLR.cc/2025/Conference/Submission3537/Area_Chair_uj8M" ], [ "ICLR.cc/2025/Conference/Submission3537/Reviewer_Ff6i" ], [ "ICLR.cc/2025/Conference/Submission3537/Authors" ], [ "ICLR.cc/2025/Conference/Submission3537/Authors" ], [ "ICLR.cc/2025/Conference/Submission3537/Reviewer_DQ3Z" ], [ "ICLR.cc/2025/Conference/Submission3537/Authors" ], [ "ICLR.cc/2025/Conference/Submission3537/Authors" ], [ "ICLR.cc/2025/Conference/Submission3537/Authors" ] ], "structured_content_str": [ "{\"comment\": [\"Thanks for your clarification. Here are my comments:\", \"The distribution shift cannot be addressed by using a randomized environment. The distribution will still shift if the policies vary from the initial one, even in a randomized environment. Meanwhile, I don't think there is anything special about using a randomized environment. It's a default setting in safety gym.\", \"Regarding the safety discriminator training: Theorem 2 says that the discriminator can be very accurate when data is ideally diverse. However, I didn't find such a statement meaningful. Any KNN-based method can be very accurate if you have infinite and diverse enough data. On the other hand, you actually cannot get a diverse enough dataset if you construct the dataset by Eq.(3) because you will always categorize the middle part of one trajectory, which occupies the majority of the state-action space, as \\\"inconclusive\\\". Furthermore, Theorem 1 can be trivial in some cases. Since there is no guarantee on your encoder, it's possible (and actually very likely) that the embeddings of a safe datapoint and an unsafe datapoint are very close to each other, i.e.
$\\\\gamma\\\\to 0$, and then you will get a trivial conclusion: $P(\\\\text{mis-classification}) < \\\\frac{1}{2}$.\", \"Regarding the tasks used for the experiments: I disagree with you on \\\"we believe our custom tasks, such as adding random obstacles inside the circle task, create a more challenging environment\\\". The standard tasks in safety gym (e.g., PointGoal, PointButton, PointPush, CarGoal, ...) are harder than the tasks used for your experiments. Their initial layouts are also random.\"]}", "{\"comment\": \"Dear reviewer,\\nThank you very much for your valuable review, comments about ADVICE\\u2019s novelty, and fruitful comments regarding our paper! Please find answers to your questions below:\\n\\n## Questions and Answers\\n\\n> I question the neighbour model's effectiveness in distinguishing safe from unsafe state-action pairs due to potential distribution shifts caused by policy updates during execution.\", \"a\": \"In black-box environments, limited confirmed information can be derived about absolute safety, i.e., conclusive information that the state-action pair is safe or not. A key contribution of the paper was the classification of safe and unsafe features using accepting and terminal states, facts that we have derived about finite-horizon MDPs that we can leverage for safe exploration. Inconclusive features lack a clear safety classification, and such features have the potential to pollute the dataset and yield ineffective shields. Including these features would introduce significant ambiguity into the ADVICE method (as an inconclusive feature could lead to an accepting/terminal state in the future with no direct separation between the two). Future work would consider exploring these inconclusive features, since if some knowledge can be derived from them, it would further improve ADVICE. However, this is a significant challenge in itself, and due to ambiguity, we focus on the conclusive features that can be confirmed as either safe or unsafe.
Our experimental results demonstrate that even without using the inconclusive features, ADVICE can produce an effective shield that outperforms the state-of-the-art approaches. Including the inconclusive features, subject to resolving the challenge mentioned above, would only strengthen the effectiveness of ADVICE.\\n\\n**Questions and Answers continue in the next comment...**\"}", "{\"comment\": \"I am happy with the response from the authors. Thanks!\\n\\nAfter reading the comments from the other reviewers, I decided to keep my scores.\"}", "{\"comment\": \"A: We would like to thank you for the constructive comments and useful feedback. Since the reviewer has acknowledged the **significant potential of ADVICE**, we would be really grateful if they could elaborate on the aspects that they still deem insufficient, as all the points provided in their main review have been covered; that would help strengthen our work.\"}", "{\"comment\": \"I would like to thank the authors for addressing most of my comments. In general, I think this paper has significant potential. However, the current execution is still insufficient. Therefore, I decided to keep my recommendation. I list some minor follow-ups below. I hope they help improve the paper's clarity.\\n\\n1. **About the safety definition.** Please note that the current definition is only implicitly given in the middle of the algorithm presentation. This makes it difficult for the reader to separate the problem from the proposed method. See, for example, the paper by Wachi for a proper formulation.\\n\\n- Wachi, A., Shen, X., and Sui, Y. (2024). A survey of constraint formulations in safe reinforcement learning. *IJCAI*, 8262\\u20138271. <https://doi.org/10.24963/ijcai.2024/913>\\n\\n2. **New theoretical results.** It is a great step to provide such results; however, at this point, the paper should undergo a new review to evaluate these results' properties.
Furthermore, they should be in the main paper and not in the Appendix. \\n\\n3. **Results presentation.** Please add a reference to Fig 8 in the main document and make it clear that the full results are available.\"}", "{\"summary\": \"This paper develops a new shield based on contrastive learning to increase the safety of RL agents. This approach learns a latent representation that separates safe and unsafe state-action pairs. After learning this latent representation, it uses a KNN approach to classify, which requires a threshold K to indicate how many neighbors to consider a state-action pair unsafe. The paper shows how this parameter can be adapted online based on the safety of the RL agent. Finally, the paper provides empirical evidence that the proposed method increases the safety of the RL agent.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is well written. The description of the method is precise.\", \"The proposed method is original. It is the first method to use contrastive learning in a safe RL setting.\", \"The empirical evaluation indicates the approach can potentially increase the safety of RL algorithms.\"], \"weaknesses\": [\"The problem formulation is incomplete. The paper does not define the safety properties expected from the RL agent.\", \"Lack of theoretical results. This paper provides only empirical results to support its claims.\", \"The results are presented in a convoluted way. In particular, the results disregard the safety violations of the agent in the first 1000 episodes. The reason for presenting the results in this way is unclear.\", \"The presentation of the DDPG-Lag as a constrained RL algorithm is imprecise, as it uses a fixed weight for the costs, which works as simple reward engineering. 
In general, with a Lagrangian relaxation, this weight should be adjusted online to ensure the accumulated cost stays below a predefined threshold [1].\", \"The evaluation in CMDPs is inconsistent. These approaches solve different problems where a predefined accumulated cost is allowed.\", \"Weak baseline. From the results in Figure 10, it is clear that Tabular Shield does not recognize any unsafe state-action pairs, making it an unsuitable baseline. This is not surprising considering how the state-action space is discretized. Perhaps it is necessary to finetune the discretization of this baseline. Alternatively, it would be more suitable to consider stronger baselines, such as the accumulating safety rules [2]\", \"**references**\", \"[1] Ray, A., Achiam, J., and Amodei, D. (2019). *Benchmarking safe exploration in deep reinforcement learning*. <https://github.com/openai/safety-gym>\", \"[2] Shperberg, S. S., Liu, B., Allievi, A., and Stone, P. (2022). A rule-based shield: Accumulating safety rules from catastrophic action effects. *CoLLAs*, 231\\u2013242. <https://proceedings.mlr.press/v199/shperberg22a.html>\"], \"questions\": [\"If the problem had a single initial state, would this be an issue for this approach? How does the diversity of initial states influence the performance of the ADVICE algorithm?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer Kt6J,\\n\\nAs the rebuttal deadline approaches, we hope that our responses have satisfactorily addressed all of your concerns. 
\\n\\nShould you need further clarification or additional explanations, we would be more than happy to provide them.\\n\\nThank you for your time and consideration.\\n\\nSincerely,\\n\\nADVICE Authors\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Dear reviewer,\\nThank you very much for your valuable review, comments regarding significance and novelty, and questions regarding ADVICE! We have integrated some of your points regarding clarity into an updated version of our submission. Please find answers to your questions below:\\n\\n## Questions and Answers\\n\\n> In environments with a highly non-smooth reward function, similar features can induce very different rewards (specifically different levels of safety). What exactly is the function in (1)? If this function does not relate to safety or reward then the issue of non-smoothness in the safety w.r.t. features may be problematic here.\", \"a\": \"In black-box environments, a safety violation is represented by a terminal state which can be recognized when encountered in an MDP. This is similar to the question regarding accepting states above. We have improved the clarity of Line 311 based on this point.\\n\\n**Questions and Answers continue in the next comment...**\"}", "{\"comment\": \"Dear reviewer,\\nThank you very much for your valuable review, questions, and comments regarding ADVICE\\u2019s strengths! Please find answers to your questions below:\\n\\n## Questions and Answers\\n\\n> ADVICE requires an initial period to gather data before it can be fully effective, which could be a disadvantage in some scenarios.\", \"a\": \"While the contrastive autoencoder (CA) effectively distinguishes safe and unsafe features, some limitations could be sensitivity to the diversity of data. 
Theorem 2 in Appendix B provides theoretical analysis of the impact of data diversity in increasing/decreasing ADVICE\\u2019s effectiveness.\"}", "{\"comment\": \"Dear Reviewer DQ3Z,\\n\\nAs the rebuttal deadline approaches, we hope that our responses have satisfactorily addressed all of your concerns. \\n\\nShould you need further clarification or additional explanations, we would be more than happy to provide them.\\n\\nThank you for your time and consideration.\\n\\nSincerely,\\n\\nADVICE Authors\"}", "{\"comment\": \"**Questions and answers continued from the previous comment...**\\n\\n> The novelty of the paper as a whole could be more clearly spelt out.\", \"a\": \"In ADVICE, the latent representation learning is designed to classify safety features independently of immediate policy updates, mitigating instability from interdependent processes. The parameter K is adjusted gradually using moving averages of safety violations, ensuring smoother transitions; see Equations (5)-(7). Additionally, all updates occur within bounded values, minimizing abrupt shifts. Empirically, we observed no instability in our experiments, as the adaptive mechanisms allow ADVICE to handle varying safety requirements effectively while maintaining stability; see Figure 12 that shows results for K = {3, 4, 5}.\"}", "{\"comment\": \"We thank you for your discussion so far, it has been extremely helpful and raised some astute points. Please find the responses to your further questions below:\\n\\n## Questions and Answers\\n\\n> Your explanation makes sense to me however in the absence of a concrete example of h_theta, it is still difficult to know how the claims you make can be realised in practice. Can you specify an example of the function h_theta?\", \"a\": \"Thanks for the follow-up question on this point. In general, the space of valid actions may indeed not be closed under the Cartesian product operation, such as your example regarding the robot lifting its feet. 
In ADVICE, we address this implicitly through a validity check after constructing the Cartesian product, which considers mission-specific constraints and retains only the valid action space. We added a small clarification in the paper (line 244) to make this explicit.\"}", "{\"summary\": \"The paper presents a framework to address the problem of safety within unknown environments using a combination of techniques including shielding, contrastive learning and latent feature representations. The framework therefore consists of many components that have been combined under one roof. The target is so-called black-box environments, where the benefit of information relating to safe states is not a priori available, so special techniques need to be employed to enable training and execution to proceed as safely as possible. The empirical results show that the framework, ADVICE, reduces safety violations by a meaningful margin, though often at the expense of return\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The methodology is intuitive and the motivation and requirement of each component are reasonably well explained. The problem setting is one of significant interest to the RL community and the approach seems original.\", \"weaknesses\": \"Although some parts of the paper are well explained, especially the non-technical elements and the intuition of the algorithm, many important details are skipped. This leaves the reader to simply guess how important aspects of the framework work. Also, some maths expressions seem to have been written without due care, which further adds to confusion. The framework involves various interdependent learning processes - specifically, in addition to the regular policy updates, the action and hence the reward received at a given state depends on the latent representation (which presumably is being updated) as well as the value of K, which varies in quite an abrupt manner.
This generates concerns about convergence guarantees and the numerical stability of the framework. The paper would benefit from some analysis or at least a discussion on this aspect. Additionally, within the framework there are a number of functions that involve max/min operations - this introduces a high level of discontinuity which can be harmful for some learning methods, e.g. gradient-based learning methods, and may cause numerical instabilities.\\n\\nThe novelty of the paper as a whole could be more clearly spelled out. Lastly, although the framework achieves a significant reduction in safety violations, this often comes at a notable expense to the return. I would like to see how this compares to the frameworks that have some calibration aspect for the performance:safety ratio.\", \"questions\": \"1. In environments with a highly non-smooth reward function, similar features can induce very different rewards (specifically different levels of safety). What exactly is the function $h_\\\\theta$ in (1)? If this function does not relate to safety or reward then the issue of non-smoothness in the safety w.r.t. features may be problematic here.\\n\\n2. Given that $\\\\mathcal{U},\\\\mathcal{S},\\\\mathcal{I}$ are defined as sets, the construction of $g$ in (3) does not seem like a function since it maps elements of $F_E$ to sets. Perhaps $\\\\mathcal{U},\\\\mathcal{S},\\\\mathcal{I}$ need not be sets for this construction.\\n\\n3. The construction of $\\\\mathcal{C}$ in (4) also seems spurious since it maps a product of sets to a set which is the union of other sets. Also, it seems not to consider the cases of two dissimilar features in $U$ and, secondly, two dissimilar features in $S$. \\n\\n4. What are $S$ and $U$ in (4) - in page 3, the subindex of $f$ represents time, the indices $i,j \\\\in S$ etc. in (4) therefore seem incorrect.\\n\\n5.
In relation to the construction of the function $g$ in (3), in the black-box setting, how can we know which are accepting states?\\n\\n6. In line 6 of the algorithm, how can we be sure that constructing an action by taking a Cartesian product across action dimensions will produce a valid action in the environment? The set of valid actions may only be a subset of this product space, e.g. a robot may not be able to lift both its feet off the ground simultaneously.\\n\\n7. What is the definition of a safety violation - specifically, is there a notion of safety violation that can be used to detect such violations in black-box environments?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a post-shielding method, ADVICE, to reduce constraint violations during safe RL training. It first trains a neighbor model to classify the safety of state-action pairs by contrastive learning in an embedding space. Then it leverages this model to identify the safety of a new state-action pair from the statistics of its K nearest neighbors and corrects an unsafe action to a safe one from a selected safe set. The authors conduct experiments on several safety-gymnasium environments, which show that the proposed method can reduce the cumulative safety constraint violation during training.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"This paper proposes a new shield-based method for safe exploration, which is an important problem for the application of RL.\", \"Overall, the paper is clearly written (e.g., fig. 1) and easy to follow.\", \"The idea of classifying the safety of state-actions in latent space is novel.\"], \"weaknesses\": [\"My biggest concern is the effectiveness of the neighbor model in step 1, which determines whether a new state-action is safe or not. 
However, this key component in the proposed shielding method is trained on data collected in an initial unshielded stage (line 161). During execution, the policy will be updated and differ from the initial policy, which will lead to a severe distribution shift of state-action pairs. Therefore, it is very questionable whether the neighbor model can still distinguish safe states from unsafe ones given an unseen state-action distribution.\", \"The construction of the safe and unsafe sets $\\\\mathcal S, \\\\mathcal U$ is also problematic. In eq. (3), a feature (state-action pair) is classified as safe if its start or next state reaches the goal, while a feature whose next state crashed is unsafe. Such classification obviously introduces some spurious correlations, e.g., a state-action pair is identified as unsafe because it is far from the goal rather than because it will truly crash into obstacles. Meanwhile, the contrastive learning objective excludes the inconclusive features, the majority of the collected data, which makes the coverage of the training data over the state-action space very limited and further exacerbates the distribution shift issue.\", \"In the experiments, although the results in fig. 2 show that ADVICE has a smaller cumulative safety violation, ADVICE also performs similarly to baselines on some other tasks (e.g., figs. 8(c) & 9). Meanwhile, it is very strange that DDPG-lag and DDPG have very close results (also see questions), suggesting the DDPG-lag baseline is not well-tuned.\"], \"minor_issues\": [\"line 142, distinguish -> distinguishes\", \"Eq.(2), $F$ -> $\\\\mathcal F$\", \"many notations are not defined before use, e.g., $\\\\mathbb Z_+$ in eq. (2), $E$ in line 2 of Algorithm 1.\"], \"questions\": [\"What are the data visualized in the last column of fig. 2? 
Are they the training data for the neighbor model?\", \"Does the \\\"constrained randomised goal\\\" correspond to \\\"randomised CMDP\\\" in fig. 7(d)?\", \"Why don't you use the standard tasks provided by safety-gymnasium (e.g., PointGoal, CarButton, ...) instead of customizing the tasks? The original tasks should be harder because all the layouts (agent, goal and obstacles) are randomized in each episode, and they can better test the ability of safe exploration.\", \"In the experiment, why are the performances of DDPG and DDPG-lag very similar in terms of reward or safety violation? DDPG does not take safety into consideration and only aims to maximize the reward. The Lagrangian in DDPG-lag will thus be almost useless if it performs similarly to DDPG.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The authors propose a new shielding method based on contrastive learning for safe RL, which learns a latent representation to distinguish between safe and unsafe state-actions. The reviewers all agree to reject the paper, due to weak theoretical results (e.g. an unrealistic requirement on diverse data with the current safety definition) and weak experimental results (e.g. weak baselines, safety at the cost of performance degradation). Reviewers also mention writing clarity issues. The authors added new theoretical results in the rebuttal, which however warrant another review process. Reviewers also raise concerns about the distribution shift issue. Overall, while I think this paper has potential, the current status needs to be improved.\", \"additional_comments_on_reviewer_discussion\": \"Reviewer Kt6J raises concerns about incomplete problem framing, lack of theoretical results, inadequate and weak baseline implementation, and inconsistent evaluation. The authors' response addresses most points, but the reviewer suggests that the presentation still needs improvement. 
In particular, the new theory that was added during the rebuttal requires additional review and should be put in the main paper, which I agree with as well.\\n\\nReviewer DQ3Z raises concerns about the need for initial data gathering, the downside when facing dynamic environments, and the sensitivity to hyperparameters. The authors' response addresses the reviewer's points. Nonetheless, Reviewer DQ3Z does not strongly advocate for the paper and, after the discussion phase, also agrees to reject the current paper based on examining the other reviewers' comments.\\n\\nReviewer C6Rd raises concerns about distribution shift, the problem definition, the exclusion of inconclusive features (which limits data diversity), and the baseline implementation. After the rebuttal, the concern about distribution shift remains. The reviewer does not think the current experiments are sufficient. The practicality of having diverse safe and unsafe data is also not supported, due to the exclusion of inconclusive features under the current definition.\\n\\nReviewer Ff6i complains that important technical details are missing from the writing. Reviewer Ff6i is also concerned about the complexity of the method and the resulting stability, the lack of thorough analysis, and the lack of comparison with other approaches in terms of the performance-safety ratio. Unfortunately, the reviewer did not respond to the latest author response, but I think the authors' responses are reasonable.\"}", "{\"comment\": \"Thanks to the authors for their responses. However, some of my questions remain.\\n\\n1. Your explanation makes sense to me; however, in the absence of a concrete example of $h_\\\\theta$, it is still difficult to know how the claims you make can be realised in practice. Can you specify an example of the function $h_\\\\theta$?\\n\\n5/7. To evaluate the functions $g$ and $\\\\mathcal{S}$ at all states, it seems to me that each state will need to have been encountered at least once, therefore committing the agent to unsafe state visitations. 
Can you clarify whether this is the case, or does the method have a technique that enables the output of these functions to be predicted given visits to other states?\\n\\n6. My point here is that the space of valid actions may not be closed under the Cartesian product operation. As in my example, a robot lifting either foot off the ground (and keeping the other foot grounded) may be a valid action, but lifting both feet, thereby performing both of these individually valid operations simultaneously, may be an invalid operation.\"}", "{\"comment\": \"Dear reviewer,\\nThank you very much for your valuable review, comments about novelty, and your questions! We have/will integrate them into the updated paper to address your concerns. Please find answers to your questions below:\\n\\n## Questions and Answers\\n\\n> The paper does not define the safety properties expected from the RL agent.\", \"a\": \"This is a valid concern, and we appreciate you bringing this up. Improved data diversity before ADVICE is activated will only improve results. Theorem 2 in Appendix B validates this statement and also shows that low data diversity will increase the chance of ADVICE misclassifying features.\"}", "{\"comment\": \"We thank you for the discussion so far; it has been extremely helpful and has raised some great discussion points. Please find the responses to your further questions below:\\n\\n## Questions and Answers\\n\\n> The distribution shift cannot be addressed by using randomized environment. The distribution will still shift if the policies vary from the initial one even in randomized environment. Meanwhile, I don't think there is anything special to use randomized environment. It's a default setting in safety gym.\\n\\n> Regarding the tasks used for experiment: I disagree with you on \\\"we believe our custom tasks, such as adding random obstacles inside the circle task, create a more challenging environment\\\". 
The standard tasks in safety gym (e.g., PointGoal, PointButton, PointPush, CarGoal, ...) are harder than the tasks used for your experiment. Their initializing layouts are also random.\", \"a\": \"Thank you for pointing this out. Let us clarify how ADVICE handles inconclusive states. These states are explicitly excluded from the contrastive encoder training, as described in Equation 3, to avoid ambiguity regarding their safety, as we cannot derive facts about them like we can using terminal/accepting states. By focusing only on truly safe and unsafe states, we ensure that the encoder learns meaningful and separable latent representations. As demonstrated in the experimental results (Figures 4, 5, 8), ADVICE creates an effective shield, which even without using the inconclusive data, outperforms the competitive approaches.\\n\\nRegarding your observation about $P(misclassification) < 1/2$, this is exactly what Theorem 2 captures. Thank you for confirming the theoretical foundation underpinning ADVICE. As $\\\\gamma \\\\rightarrow 0$, we have the degenerate case where ADVICE makes a random choice. Accordingly, as the separation margin $\\\\gamma$ decreases\\u2014whether due to limited diversity, noisy data, or suboptimal training\\u2014the bound on misclassification probability remains exponential and non-trivial. This behavior is integral to the robustness of ADVICE. Noise, including poor model training, is explicitly modeled in the theorem through the variance term $\\\\sigma^2$, ensuring the theory remains valid under real-world conditions.\"}", "{\"summary\": \"The paper presents ADVICE (Adaptive Shielding with a Contrastive Autoencoder), a novel post-shielding technique for safe reinforcement learning (RL) in complex black-box environments. ADVICE distinguishes between safe and unsafe state-action pairs during training using a contrastive autoencoder, protecting the RL agent from executing hazardous actions. 
The method includes an adaptation component based on the agent's recent performance, encouraging exploration when appropriate. Extensive experiments against state-of-the-art safe RL exploration techniques demonstrate that ADVICE significantly reduces safety violations during training while maintaining competitive outcome rewards.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"ADVICE introduces a new way to handle safety in RL using a contrastive autoencoder for distinguishing safe and unsafe actions.\", \"Despite prioritizing safety, ADVICE maintains competitive performance in terms of rewards compared to other methods.\", \"ADVICE does not require prior knowledge about the environment, making it suitable for black-box scenarios.\"], \"weaknesses\": [\"ADVICE requires an initial period to gather data before it can be fully effective, which could be a disadvantage in some scenarios.\", \"The paper suggests that ADVICE might struggle with dynamic environments and could benefit from incorporating temporal context, which would add additional computational load.\", \"The performance of ADVICE is sensitive to hyperparameters like the safety threshold K, which might require careful tuning.\"], \"questions\": [\"Can ADVICE be integrated with other RL algorithms except for DDPG?\", \"What are the potential limitations of using a contrastive autoencoder in the context of safe RL, and how might these be addressed?\", \"What are the implications of ADVICE's cold-start issue on its applicability in real-world scenarios where immediate safety is critical?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer C6Rd,\\n\\nAs the rebuttal deadline approaches, we hope that our responses have satisfactorily addressed all of your concerns. 
\\n\\nShould you need further clarification or additional explanations, we would be more than happy to provide them.\\n\\nThank you for your time and consideration.\\n\\nSincerely,\\n\\nADVICE Authors\"}", "{\"comment\": \"Dear Reviewer Ff6i,\\n\\nAs the rebuttal deadline approaches, we hope that our responses have satisfactorily addressed all of your concerns. \\n\\nShould you need further clarification or additional explanations, we would be more than happy to provide them.\\n\\nThank you for your time and consideration.\\n\\nSincerely,\\n\\nADVICE Authors\"}", "{\"comment\": \"**Questions and Answers continued from previous comment**\\n\\n> While ADVICE shows smaller cumulative safety violations in Fig. 2, it performs similarly to baselines in some tasks (e.g., Figs. 8(c) and 9). Additionally, the similarity between DDPG-Lag and DDPG results suggests that the DDPG-Lag baseline may not have been well-tuned.\", \"a\": \"DDPG does consider safety indirectly, as it receives a penalty in the reward function for reaching terminal states. DDPG-Lag\\u2019s similar performance in some environments (Figures 8a, 8b) is due to sparse data, making it challenging to apply the Lagrangian penalty meaningfully. In environments with denser failure data, like the randomised circle environment (Figure 8c) and CMDP (Figure 9), DDPG-Lag outperforms DDPG. Increasing DDPG-Lag\\u2019s hyperparameter sensitivity in sparse data environments (Figures 8a and 8b) led to over-penalization, majorly harming performance, so the current hyperparameters represent a well-tuned agent.\"}" ] }
81qyvxW9pe
Diagonalizing Affinity Matrix to Identify Clustering Structure
[ "Zheng Xing", "Weibing Zhao" ]
Affinity matrix-based clustering constitutes an eminent approach within the domain of data mining. Nevertheless, prior research overlooked the opportunity to directly exploit the block-diagonal structure of the affinity matrix for the purpose of identifying cluster formations. In this paper, we propose an affinity matrix-based clustering strategy, termed as DAM, which employs a traversal algorithm to discern high-density clusters within the graph weighted by the affinity matrix, thereby establishing a traversal sequence. This sequence is subsequently utilized to permute the affinity matrix, thereby revealing its intrinsic block-diagonal structure. Moreover, we introduce an innovative split-and-refine algorithm that autonomously detects all diagonal blocks within the permuted matrix, ensuring theoretical optimality in the presence of well-separated clusters. Extensive evaluations on six real-world benchmark image clustering datasets demonstrate the superiority of our method over contemporary state-of-the-art clustering techniques.
[ "Block diagonal", "clustering analysis", "affinity matrix" ]
Reject
https://openreview.net/pdf?id=81qyvxW9pe
https://openreview.net/forum?id=81qyvxW9pe
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ygib8GGadW", "tId40CiIyA", "ZXipRbfATt", "OGMoZkhUnx", "98oag3Rh2Z", "1HUKmZbJA3" ], "note_type": [ "decision", "official_review", "official_review", "official_review", "official_review", "meta_review" ], "note_created": [ 1737523471966, 1730517887144, 1730534716776, 1730205003771, 1730308930926, 1734230716161 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1860/Reviewer_tcHt" ], [ "ICLR.cc/2025/Conference/Submission1860/Reviewer_yy1w" ], [ "ICLR.cc/2025/Conference/Submission1860/Reviewer_N4Tm" ], [ "ICLR.cc/2025/Conference/Submission1860/Reviewer_u8dh" ], [ "ICLR.cc/2025/Conference/Submission1860/Area_Chair_nVDb" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"In this work, the authors propose a clustering strategy based on an affinity matrix. This strategy uses a traversal algorithm to identify high-density clusters within a graph weighted by the affinity matrix, thereby establishing a traversal sequence. This sequence is then used to reorder the affinity matrix, revealing its inherent block-diagonal structure. Extensive experiments confirm the feasibility and effectiveness of the algorithm.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The motivation of the article is relatively easy to understand, and the paper is supported by a solid theoretical foundation.\", \"weaknesses\": \"1.The writing of the article is relatively weak, especially in Section 3.2, where the logic is somewhat lacking.\\n\\n2.A framework-style diagram illustrating the detailed process of the algorithm is missing and would be beneficial.\\n\\n3.Since a traversal method is used to uncover the block-diagonal structure, the computational complexity is likely quite high. 
Please analyze the computational complexity of the proposed algorithm.\\n\\n4.In the DAM model, the number of clusters is adaptively determined, making it, to some extent, a clustering algorithm for generalized category discovery. However, many comparison algorithms require the number of clusters to be predefined, which makes this comparison somewhat unfair.\", \"questions\": \"See Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes the Diagonalizing Affinity Matrix (DAM) clustering method, which reorders the affinity matrix into a block-diagonal structure using a density-based traversal algorithm. The method then identifies clusters through a split-and-refine process, purportedly optimizing the clustering structure without needing predefined parameters.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper defines its focus on improving clustering by utilizing the block-diagonal structure of the affinity matrix, which aligns with established clustering challenges.\\n2. DAM does not rely on manual parameter tuning, which could theoretically simplify its application across datasets with consistent structures.\", \"weaknesses\": \"1. The approach essentially repurposes existing block-diagonal concepts with minor adaptations. Both the traversal and split-and-refine strategies lack substantial novelty, building primarily on established affinity-based clustering frameworks. This incremental improvement may not meet the expectations of a top-tier conference seeking fundamentally new insights or methodologies in clustering.\\n2. Despite claims of computational efficiency, the paper lacks any detailed analysis of DAM's scalability, especially for large datasets. 
The density-based traversal and iterative refinement are potentially computationally expensive, particularly when dealing with dense affinity matrices. This oversight raises concerns about the practical viability of DAM for large-scale applications. Many anchor-based methods have achieved great progress.\\n3. The paper primarily compares DAM with older clustering methods (very few papers from 2024 are cited, let alone used as comparison methods), bypassing recent advances in deep clustering and self-supervised clustering techniques that have demonstrated state-of-the-art performance on high-dimensional and complex datasets. This selective benchmarking weakens the evidence for DAM's claimed superiority and raises questions about its actual competitiveness.\", \"questions\": \"1. How would DAM perform on datasets with complex, overlapping cluster structures or where the affinity matrix does not naturally exhibit a block-diagonal structure?\\n2. What is the computational complexity of DAM when applied to large-scale datasets, especially regarding time and memory usage?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes a strategy for identifying cluster structures from an affinity matrix. This strategy can be divided into two steps: the first step permutes the affinity matrix to form a block-diagonal structure by a density-based traversal algorithm; the second step identifies the cluster structures in the permuted matrix through a split-and-merge approach.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. Some theoretical analyses are provided.\", \"weaknesses\": \"1. The analysis of the algorithm's time complexity and space complexity is lacking.\\n2. The contribution section mentions that the strategy is \\\"rapid,\\\" but no efficiency-related experiments are presented.\\n3. 
Although many comparison algorithms are employed, Table 1 contains numerous missing entries, making the experimental results less convincing.\\n4. Since deep clustering incorporates feature representation learning modules, it typically achieves better performance, especially in image applications. However, according to the tables, DAM, as a traditional algorithm, significantly outperforms various deep learning algorithms. It appears that the performance results of comparison algorithms may come from previous papers, with RGB pixels provided directly as features for algorithms like DEC, while DAM uses newly extracted features. If true, the source of performance should be noted, along with details on how the new features were extracted. More rigorously, the input for each algorithm should be standardized.\\n5. There is room for improvement in the paper\\u2019s writing and organization. For example, Sections 3.1 and 3.2 feel disconnected, making it difficult to understand how the former supports the latter. Additionally, the meanings of $t$, $\\\\tau$, and $\\\\mathcal{C}$ are somewhat confusing.\\n6. In the experiments, DAM's input relies on the output of BDR-B, which itself learns an approximate block-diagonal structure (not yet permuted). Therefore, I am curious how DAM would perform using Gaussian similarity as input directly, and what the results would be if BDR-B's output were paired with DAM's permutation before applying spectral clustering.\\n7. Since a block-diagonal structure is already obtained, directly extracting the connected components of the graph should yield cluster divisions. Hence, what is the purpose or advantage of Section 3.2? 
Even without permutation, Section 3.2 could still be applied, raising questions about the purpose or advantage of Section 3.1 as well.\", \"questions\": \"See Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a novel clustering method that can directly leverage the block-diagonal form of the affinity matrix to reveal the underlying clustering structure. Specifically, first, they use a traversal algorithm to establish a traversal sequence based on the affinity matrix derived from the data. Second, this sequence is utilized to permute the affinity matrix to uncover the block-diagonal structure. Finally, they employ a split-and-refine algorithm to detect all diagonal blocks within the permuted affinity matrix.\\n\\nThe main contribution of this paper is to propose a new method to detect all diagonal blocks within the matrix.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"The authors have conducted sufficient experiments.\", \"weaknesses\": \"1.The motivation of this paper is not clear. It is not clear why it is necessary to use the block-diagonal structure of the affinity matrix directly to reveal the underlying clustering structure.\\n\\n2.The writing of the paper needs to be improved. For example, Section 2.1 does not summarize relevant work and divide it into different categories.\\n\\n3.In section 4.2 of the paper, the experimental results are only presented, but not analyzed.\\n\\n4.There are also many works that directly use block-diagonal structure to achieve clustering results, such as [1,2]. 
It is recommended to compare these algorithms and discuss them in detail.\\n\\n5.It is suggested that the authors complete the missing experimental results in Table 1 and Table 2, which would be more convincing.\\n\\n[1] One-Step Multi-View Spectral Clustering.\\n\\n[2] Unified one-step multi-view spectral clustering.\", \"questions\": \"Please see the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"All the reviewers unanimously suggest rejecting the paper, and there is no author rebuttal.\", \"additional_comments_on_reviewer_discussion\": \"NA\"}" ] }
81cta3WQVI
EdgeRunner: Auto-regressive Auto-encoder for Artistic Mesh Generation
[ "Jiaxiang Tang", "Zhaoshuo Li", "Zekun Hao", "Xian Liu", "Gang Zeng", "Ming-Yu Liu", "Qinsheng Zhang" ]
Current auto-regressive mesh generation methods suffer from issues such as incompleteness, insufficient detail, and poor generalization. In this paper, we propose an Auto-regressive Auto-encoder (ArAE) model capable of generating high-quality 3D meshes with up to 4,000 faces at a spatial resolution of $512^3$. We introduce a novel mesh tokenization algorithm that efficiently compresses triangular meshes into 1D token sequences, significantly enhancing training efficiency. Furthermore, our model compresses variable-length triangular meshes into a fixed-length latent space, enabling training latent diffusion models for better generalization. Extensive experiments demonstrate the superior quality, diversity, and generalization capabilities of our model in both point cloud and image-conditioned mesh generation tasks.
[ "3D Generation", "Auto-regressive Mesh Generation" ]
Accept (Poster)
https://openreview.net/pdf?id=81cta3WQVI
https://openreview.net/forum?id=81cta3WQVI
ICLR.cc/2025/Conference
2025
{ "note_id": [ "uKjB11Zru6", "rhtKHmRzYE", "rOINIHZs1A", "rNW7bPZaV7", "lah9oq4mS3", "kugUMrrSba", "doJra3bd5t", "dhyol6FsHK", "cxubEGdgQH", "ccQhuQeFTJ", "aMljOeBUMb", "ZuA86jetaJ", "YzCdO3gl9K", "YU5oJXjW9b", "Y2ww42yHaw", "UMhxg2a4y7", "OwxgfDmqXd", "OHixwlbr7I", "MqvElrWpoe", "HCgm9oDo5f", "GBjFIz8L2q", "EB3XZYmNGT", "CFGaLNEdiu", "BYqXCdfurK", "6KPiTp124N", "555qBuaxFI" ], "note_type": [ "decision", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review" ], "note_created": [ 1737523436991, 1730670245160, 1732635395481, 1731987006646, 1732498454261, 1730672642235, 1730591880736, 1732691665852, 1730731578032, 1730389962263, 1732498479853, 1731987082903, 1730719931518, 1731987150379, 1732498510215, 1732498412842, 1732645151427, 1732656111028, 1731987371289, 1731987216486, 1731987054665, 1732498524402, 1732627713774, 1732691874886, 1732498494014, 1734471534745 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1123/Reviewer_3xAw" ], [ "ICLR.cc/2025/Conference/Submission1123/Reviewer_7iBA" ], [ "ICLR.cc/2025/Conference/Submission1123/Authors" ], [ "ICLR.cc/2025/Conference/Submission1123/Authors" ], [ "ICLR.cc/2025/Conference/Submission1123/Reviewer_8Uu4" ], [ "ICLR.cc/2025/Conference/Submission1123/Reviewer_vyBt" ], [ "ICLR.cc/2025/Conference/Submission1123/Authors" ], [ "ICLR.cc/2025/Conference/Submission1123/Reviewer_kGAF" ], [ "ICLR.cc/2025/Conference/Submission1123/Reviewer_Nd3B" ], [ "ICLR.cc/2025/Conference/Submission1123/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission1123/Authors" ], [ "ICLR.cc/2025/Conference/Submission1123/Reviewer_7iBA" ], [ "ICLR.cc/2025/Conference/Submission1123/Authors" ], [ "ICLR.cc/2025/Conference/Submission1123/Authors" ], [ "ICLR.cc/2025/Conference/Submission1123/Authors" ], [ "ICLR.cc/2025/Conference/Submission1123/Reviewer_vyBt" ], [ "ICLR.cc/2025/Conference/Submission1123/Reviewer_3xAw" ], [ "ICLR.cc/2025/Conference/Submission1123/Authors" ], [ "ICLR.cc/2025/Conference/Submission1123/Authors" ], [ "ICLR.cc/2025/Conference/Submission1123/Authors" ], [ "ICLR.cc/2025/Conference/Submission1123/Authors" ], [ "ICLR.cc/2025/Conference/Submission1123/Reviewer_8Uu4" ], [ "ICLR.cc/2025/Conference/Submission1123/Authors" ], [ "ICLR.cc/2025/Conference/Submission1123/Authors" ], [ "ICLR.cc/2025/Conference/Submission1123/Area_Chair_LwwF" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"The paper proposes a method for 3d mesh generation. They propose a mesh serialization compression algorithm to tokenize 3d meshes. An encoder and auto-regressive decoder network operating on this representation is trained to compress this representation into a fixed-length latent space. Additionally, the authors propose a latent diffusion approach for generating 3d mesh representations in this latent space conditioned on images and 3d point clouds.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The proposed mesh tokenization based on maximizing edge sharing between adjacent triangles is sensible, well referenced and very useful in practice. 
It supports lossless compression at a 50% rate, designed to reduce long-range dependencies between tokens, which is beneficial for learning.\\n\\nThe proposed pipeline solves several issues of 3d mesh generation, such as the ability to generate variable-length outputs through the AR decoder combined with a fixed-length latent code, which enables the use of diffusion models for conditional generation.\\n\\nGenerated examples are shown to be of great quality.\\n\\nLiterature review is comprehensive.\", \"weaknesses\": \"There is no discussion about pretraining the model, no loss curves showing training progression, and no explanation of how unconditional generation works compared to the conditional method.\", \"questions\": \"Is it possible to generate unconditional samples using your approach?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"The authors have addressed most of my concerns and questions. However, I strongly recommend including quantitative evaluations, such as CD (or other suitable metrics), on a larger number of samples. This would help demonstrate robustness across a broader range of cases and rule out the possibility of cherry-picked results. Besides, I suggest including the comparison of inference speed in the manuscript for the final version to help readers gain a better understanding.\"}", "{\"title\": \"Response to Reviewer kGAF\", \"comment\": \"Thank you for your valuable time and insightful comments! We have tried to address your concerns in the updated manuscript and our rebuttal text:\\n\\n**Q1: Difference from MeshGPT and MeshAnything in terms of the core generative approach.**\\n\\nOur core contribution lies in a novel mesh tokenizer designed for improved compression of mesh sequences, which enhances training efficiency and boosts performance. 
For the core generative approach, auto-regressive mesh generation methods all rely on transformers and next-token prediction.\\n\\n**Q2: The generated geometry may not adhere to the conventions that artists follow.**\\n\\nThe goal of auto-regressive mesh generation is to learn and replicate artistic mesh conventions from large datasets. We believe our work advances this objective and can generate meshes that are indistinguishable from artist-created ones.\\n\\n**Q3: Limited number of generated faces.**\\n\\nAs discussed in the paper, although 4000 faces may not suffice for complex objects, this count is adequate for many common objects, as demonstrated in both the paper and on the project page. Moreover, previous methods are limited to generating up to 1600 faces with lower robustness.\\n\\nTo further increase the face count, one approach is to develop improved tokenization algorithms that achieve a higher compression rate. Another possible direction is to employ local attention and perform sliding-window inference, based on the observation that triangle topology relies more on local features than on global features.\\n\\n**Q4: Unfair comparison with Unique3D.**\\n\\nWe argue that our work is the first to achieve end-to-end image-conditioned artistic mesh generation. Existing image-to-3D methods continue to depend on isosurfacing techniques, such as Marching Cubes, to extract dense meshes, which fundamentally differ from our approach. We selected Unique3D for comparison as it is among the most recent open-source works in the field of image-to-3D, and this comparison aims to highlight the differences between dense meshes and artistic meshes. \\n\\nIn this sense, CLAY also produces dense meshes extracted using Marching Cubes similar to Unique3D, which are far from artistic meshes. We acknowledge that our approach may offer less geometric detail and generalization capability compared to 3D diffusion-based methods; however, this is not our primary focus. 
In your own words, we believe \"the comparison may not accurately reflect the strengths and weaknesses of each method, as they are optimized for different conditions and face distinct sets of challenges\".\"}", "{\"title\": \"Response to Reviewer 7iBA\", \"comment\": \"We sincerely appreciate your great efforts in reviewing this paper. Your constructive advice and valuable comments really help improve our paper. Considering the approaching deadline, please let us know if you have follow-up concerns. We sincerely hope you can consider our reply in your assessment, and we can further address unclear explanations and any remaining concerns.\\n\\nOnce more, we appreciate the time and effort you've dedicated to our paper.\"}", "{\"summary\": \"This paper introduces EdgeRunner, an auto-regressive mesh generative model that presents a novel and efficient mesh tokenization algorithm capable of handling a higher number of mesh faces and higher resolutions. By encoding variable-length meshes into a unified latent space, the proposed method can generate meshes from point clouds or single-view images. The paper demonstrates a well-motivated and effective approach, creating a cohesive narrative.\\n\\nHowever, key information, such as training data and experimental settings, is missing from the main text, which limits readability. Given the novelty and effectiveness of the method, I suggest accepting the paper, but I strongly recommend that the authors restructure the content to improve readability and accessibility.\", \"soundness\": \"4\", \"presentation\": [\"Line 102: The term \\\"EdgeBreaker\\\" appears for the first time without a reference, making this part challenging to understand.\", \"Line 200: It seems that the mesh tokenization approach is heavily inspired by EdgeBreaker, proposed by Rossignac in 1999. However, there is no mention or citation of this foundational paper. 
Although the authors provide a comparison in the supplementary material on Line 920, omitting a discussion of this algorithm in the main methods section does not adequately acknowledge its influence.\", \"Line 358: The experiments section begins directly with results, without any mention of training data, settings, or other critical experimental details. These details appear only in the supplementary material, which is insufficient, as they are essential for understanding the experiments.\"], \"contribution\": \"4\", \"strengths\": \"The paper proposes a novel method for mesh tokenization which allows it to handle meshes with more faces and at higher resolution. It also introduces a latent space with a unified token length, enhancing the model's generalization ability. These two contributions represent the technical novelties of the paper.\\n\\nThe paper includes a comprehensive comparison with several state-of-the-art mesh generative models, such as MeshAnything, MeshAnythingV2, and Unique3D. This thorough evaluation makes the study well-rounded and complete.\\n\\nThe method is clearly described and easy to follow. The figures and renderings are thoughtfully designed, giving the paper a polished visual appeal. Additionally, the accompanying website provides extensive results, further illustrating the method's effectiveness and aiding in comprehension of its impact.\", \"weaknesses\": \"While I find no weaknesses in the method itself, I believe the presentation of the paper needs to be restructured. Important information, such as experimental settings and training datasets, currently appears only in the supplementary materials, which makes the paper difficult to follow.\", \"questions\": [\"I have some questions primarily about the implementation and public availability of this work:\", \"Would it be possible to share the mesh tokenization script on an anonymous GitHub? 
This would help in understanding how the \\\"traversal\\\" process works and can be implemented.\", \"I didn\\u2019t find any mention in the paper regarding the public availability of the code and pre-trained model. Will these resources be released?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces a novel mesh tokenization algorithm. The tokenization supports lossless face compression and reduces long-range dependencies. This helps in smooth and efficient training of an auto-regressive auto-encoder (ArAE). This encoder is capable of compressing point clouds into fixed-length latent codes representing meshes. ArAE's latent space can be leveraged to train latent diffusion models for better generalization, enabling conditioning on different input modalities such as single-view images. Hence, the method can generate meshes from point clouds or single-view images, showing generalization and multimodal capabilities.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1) The method of tokenization is based on the half-edge data structure and is able to achieve 50% more compression than the existing method. This allows for smoother training and avoids long-range dependencies.\\n\\n2) The paper shows generalization using single-image-to-mesh examples. The examples shown in the paper are of high quality.\\n\\n3) The paper shows that the fixed latent space learned by the autoregressive model can be used by the diffusion training to learn many modalities, including single-image-to-mesh capabilities.\", \"weaknesses\": \"1) The paper talks about the point cloud sampler but not enough information is provided on it. 
How is the current method sensitive to the number of points, regions of missing points, and irregularities in the point cloud positions?\\n\\n2) How does Siddique et al.'s method compare in terms of output quality? The paper mentions the method in the context of lossy tokenization but does not provide evidence of the final output quality.\\n\\n3) The single-image-to-mesh results would be more exciting to see if the poses of the input images were varied. For example, the dog and frog examples in Figure 6, compared with the competing method.\", \"questions\": \"Please refer to the questions in the weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer 7iBA\", \"comment\": \"Thanks for the reply!\\n\\nFollowing your advice, we conducted a new quantitative comparison experiment to evaluate the Chamfer Distance (CD) and Hausdorff Distance (HD) between the generated meshes and the input point clouds. Due to the lack of widely recognized evaluation datasets, we tried to construct two test datasets from different sources:\\n\\n- **AI-Mesh**: This dataset comprises 30 meshes generated by other image-to-3D models, extracted using the Marching Cubes algorithm.\\n- **Artist-Mesh**: This dataset consists of 100 artistic meshes obtained from [basemesh](https://www.thebasemesh.com/), which were not seen during the training process.\\n\\nAll meshes are normalized into $[-1, 1]^3$. 
The summarized evaluation scores are presented below:\\n\\n| | | MeshAnything | MeshAnythingV2 | Ours |\\n| ----------- | -------------- | ------------ | -------------- | --------- |\\n| AI-Mesh | CD$\\\\downarrow$ | 0.072 | 0.081 | **0.034** |\\n| | HD$\\\\downarrow$ | 0.170 | 0.241 | **0.079** |\\n| Artist-Mesh | CD$\\\\downarrow$ | 0.066 | 0.050 | **0.036** |\\n| | HD$\\\\downarrow$ | 0.195 | 0.164 | **0.112** |\", \"we_observed_the_following\": [\"Our model achieves the best performance on surface metrics, even when using only point clouds as input. In contrast, the MeshAnything series further leverage surface normals.\", \"MeshAnythingV2 performs significantly worse on the more challenging AI-Mesh dataset, often failing to generate complete surfaces in these cases (consistent with the results shown in Figure 5).\", \"As the deadline for editing the PDF is approaching, we will include these experiments in the final version of the paper.\"]}", "{\"summary\": \"The paper introduces EdgeRunner, an Auto-regressive Auto-encoder (ArAE) model for artistic mesh generation. It proposes a mesh tokenization algorithm to address incompleteness and poor generalization.\\n\\nThe model can generate 3D meshes with up to 4,000 faces at a spatial resolution of 512^3. It also compresses variable-length meshes into a fixed-length latent space, enabling the training of latent diffusion models for better generalization. 
\\n\\nThe main contributions include a mesh tokenization algorithm for lossless face compression, an ArAE model for fixed-length latent code generation, and the use of this latent space for training latent diffusion models with better generalization.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"EdgeRunner demonstrates strengths in the mesh tokenization algorithm and the Auto-regressive Auto-encoder framework, which address previous challenges in mesh generation.\\n\\nIt offers a mesh tokenization algorithm for efficient compression into 1D token sequences, enabling mesh generation with up to 4,000 faces at a 512^3 resolution. The model's ability to compress variable-length meshes into a fixed-length latent space facilitates the training of latent diffusion models for enhanced generalization.\", \"weaknesses\": \"A. While the paper presents a compelling case for EdgeRunner's capabilities, a notable weakness is that it does not fundamentally differentiate itself from existing works like MeshGPT and the MeshAnything series in terms of the core generative approach.\\n\\nB. Despite the improvements in tokenization and the auto-regressive framework, the generated 3D geometries may not adhere to the modeling and wiring conventions that human artists typically follow. This could potentially limit the acceptance and practical application of the generated meshes within professional 3D content creation pipelines, where adherence to standard modeling practices is often a requirement.\\n\\nC. The paper also has a limitation in terms of polycount, as the capability of generating meshes with up to 4,000 faces, while an improvement over previous methods, still falls short for many real-world scenarios. The constraint on the number of faces limits the model's ability to capture the complexity and detail required for high-fidelity 3D representations, which further limits the application. \\n\\nD. 
Another limitation of the paper is the potentially unfair comparison made with Unique3D, as the latter employs a multi-view approach to 3D generation. Unique3D's methodology is inherently challenged by inconsistencies or deformities that can arise from generating a 3D model from multiview images, which is a different set of problems compared to EdgeRunner. This discrepancy means that the comparison may not accurately reflect the strengths and weaknesses of each method, as they are optimized for different conditions and face distinct sets of challenges. Consequently, the paper's claims about EdgeRunner's superiority might not be fully substantiated.\", \"questions\": \"I have a few questions for the authors to consider:\\n\\nThe paper presents an improvement in the number of faces that can be generated compared to previous methods, reaching up to 4,000 faces. However, for many practical applications in industries such as gaming, film, etc., this number is still limited. How can the face count be increased further? Are there any technical barriers within the current model that restrict the generation of meshes with even more faces?\\n\\nComparison with Unique3D is not fair. How does the method perform when compared with CLAY? How does EdgeRunner compare in terms of quality and diversity of generated meshes? While the code of CLAY is not publicly available, its geometric rendering effects can be observed on Hugging Face and its corresponding products are available for comparison.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes an Auto-regressive Auto-Encoder (ArAE) model for multi-modality mesh generation supporting point clouds and single-view image inputs. 
Besides, an efficient mesh tokenization inspired by the EdgeBreaker algorithm is designed for more edge sharing during compression.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"(i) The idea is very novel. Adapting the EdgeBreaker algorithm for mesh token compression is super interesting with the key insight \\\"maximize edge sharing between adjacent triangles\\\". It is a kind of Renaissance. I really like this style. This solution avoids the use of some lossy VQ-VAE models and preserves the geometry information during tokenization. Meanwhile, the customized traversal can also avoid the computation of long-range dependencies and ensure the face's orientation consistency in each sub-mesh. In addition, the core idea of using ArAE to handle variable-length mesh into fixed-length latent tokens is awesome. This can support the multi-modality inputs and address the variable-length issue in mesh generation.\\n\\n(ii) The presentation is well-dressed. The writing is clear and easy to follow. The pipeline in Figure 2 is simple and clear to show the workflow of the proposed method. The visual comparison of point cloud conditioned mesh generation in Figure 5 and image conditioned mesh generation in Figure 6 are very clear to demonstrate the advantages of the proposed method over state-of-the-art algorithms.\\n\\n(iii) The performance is very solid. The proposed method not only achieves more visually pleasant generation results with the highest user study score but also yields a much higher compression ratio to boost the model's efficiency. The experiments are sufficient. Especially the user study in Figures 7 and 8. It seems like more faces lead to finer-grained generated meshes. I also like the style of the visual ablation in Figure 8, studying the effect of resolution and image conditioning strategies.\\n\\n(iv) The implementation details are all provided in the appendix. 
I believe other researchers can easily reproduce the results. I trust the authors on reproducibility.\", \"weaknesses\": \"Just some minor issues:\\n\\n(i) It would be better to move the ablation study table from the supplementary material to the main paper, as there are no other quantitative evaluation results. Although the visual comparison is good, I think the main paper should have some numerical comparison.\\n\\n(ii) Maybe Figure 4 and the left part of Figure 3 should be in the same figure according to the description in Section 3.1. I think Figure 4, instead of the right part of Figure 3, is more related to the left part of Figure 3.\\n\\n(iii) For better understanding, in Line 320, the authors should add some description of where the image condition input is introduced, since in Lines 298-302 the ArAE is introduced as a point-cloud-conditioned mesh generator.\\n\\n(iv) A small typo: (low-poly *v.s.* high-poly) $\\\\rightarrow$ (low-poly *vs.* high-poly)\", \"questions\": \"I am curious about the engineering tricks the authors used to save GPU memory and scale up the training datasets.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer 8Uu4\", \"comment\": \"We sincerely appreciate your great efforts in reviewing this paper. Your constructive advice and valuable comments really help improve our paper. Considering the approaching deadline, please let us know if you have follow-up concerns. We sincerely hope you can consider our reply in your assessment, and we can further address unclear explanations and any remaining concerns.\\n\\nOnce more, we appreciate the time and effort you've dedicated to our paper.\"}", "{\"title\": \"Response to Reviewer 8Uu4\", \"comment\": \"Thank you for your valuable time and insightful comments! 
We have tried to address your concerns in the updated manuscript and our rebuttal text:\\n\\n**Q1: Presentation.**\\n\\nThank you for the feedback! We have revised the manuscript to enhance its comprehensiveness:\\n\\n- Missing citations in the introduction have been added to appropriately acknowledge EdgeBreaker.\\n- Due to space limitations in the main paper, we found it challenging to present both the original EdgeBreaker algorithm and our variant. Therefore, we have moved the detailed descriptions to the supplementary material and opted to use an example in the main paper to illustrate our method. Additionally, we have added text in the main paper to direct interested readers to the supplementary material for more details.\\n\\n**Q2: Public availability.**\\n\\nThank you for your interest! We promise to release the code soon for better understanding and reproduction of our method. \\n\\nUnfortunately, the models may not be released at the same time due to dataset licensing issues of Objaverse-XL. However, we have made many test samples available on our project page, which should assist future methods in making comparisons.\"}", "{\"summary\": \"This paper focuses on generating artistic meshes with up to 4000 faces and a resolution of 512\\u00b3. It introduces a novel mesh tokenization algorithm based on EdgeBreaker to facilitate lossless face compression. To produce fixed-length latent codes for arbitrary mesh sequences, the authors propose an encoder alongside an auto-regressive decoder. Additionally, they investigate the incorporation of image conditions to guide the generation of the meshes.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The proposed method can generate meshes with up to 4000 faces, surpassing the capabilities of previous baselines.\\n2. The trained model shows strong generalization on novel inputs.\\n3. 
Extensive experiments highlight the advantages of the proposed method in achieving high-quality mesh generation.\", \"weaknesses\": \"1. The proposed method and the baselines are trained on different datasets (for example, MeshAnything does not have access to Objaverse-XL and reserves 10% of Objaverse for evaluation). As a result, the comparisons can be unfair.\\n2. Is the training sequence unique for each mesh? How did you define the start of the sequence?\\n3. Although the authors report the inference speed, there is no comparison of the inference speed against other methods when generating a similar number of faces.\\n4. The details of the user study are missing. How many users are there in the test? What are the details of the tasks or questions that volunteers are asked to do or answer? Are the 8 test cases randomly selected? It is better to have more test cases rather than only 8.\\n5. To demonstrate the robustness across all test cases, it is better to involve quantitative evaluations like those in *MeshAnything*, where the authors sample points from both the generated mesh and the ground-truth mesh, calculating the CD or ECD.\", \"questions\": \"1. What does the dashed line mean in Figure 2?\\n2. Does the output number of faces strictly adhere to the face count control? For instance, when using a control token of 1000 to 2000, will the model generate only mesh sequences within that range?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer 3xAw\", \"comment\": \"Thank you for your valuable time and insightful comments! We have tried to address your concerns in the updated manuscript and our rebuttal text:\\n\\n**Q1: Lack of discussion about training progress such as loss curves.**\\n\\nThank you for the reminder! Our model is initially trained on approximately 112K data samples from Objaverse and Objaverse-XL. 
Subsequently, it is fine-tuned using 44K higher-quality samples from Objaverse. During the first training stage, the loss decreases from around 6 to 0.37, while the second stage further reduces the loss to 0.32. Notably, our data augmentation increases the training loss initially; however, the final performance proves to be more robust upon convergence.\\n\\n**Q2: Is it possible to perform unconditional mesh generation?**\\n\\nYes, we conducted early-stage experiments with unconditional generation. Instead of prepending conditional tokens to the sequence, we trained an auto-regressive model using only mesh tokens. However, we found that unconditional generation has limited practical applications. Unlike language models, which are inherently suited for completion tasks, mesh sequences produced by EdgeBreaker are not easily prepared as input. Consequently, we chose to focus on conditional generation from point clouds or single-view images.\"}", "{\"title\": \"Response to Reviewer vyBt\", \"comment\": \"We sincerely appreciate your great efforts in reviewing this paper. Your constructive advice and valuable comments really help improve our paper. Considering the approaching deadline, please let us know if you have follow-up concerns. We sincerely hope you can consider our reply in your assessment, and we can further address unclear explanations and any remaining concerns.\\n\\nOnce more, we appreciate the time and effort you've dedicated to our paper.\"}", "{\"title\": \"Response to Reviewer kGAF\", \"comment\": \"We sincerely appreciate your great efforts in reviewing this paper. Your constructive advice and valuable comments really help improve our paper. Considering the approaching deadline, please let us know if you have follow-up concerns. 
We sincerely hope you can consider our reply in your assessment, and we can further address unclear explanations and any remaining concerns.\\n\\nOnce more, we appreciate the time and effort you've dedicated to our paper.\"}", "{\"comment\": \"Thank you, authors. The authors have addressed my concerns. I retain my rating.\"}", "{\"title\": \"Thank you for responding to my review\", \"comment\": \"The rebuttal has addressed my queries and I choose to keep my score.\"}", "{\"title\": \"Response to Reviewer Nd3B\", \"comment\": \"Thank you for your valuable time and insightful comments! We have tried to address your concerns in the updated manuscript and our rebuttal text:\\n\\n**Q1: Presentation.**\\n\\nThank you for the suggestion! We have updated the manuscript as follows: \\n\\n- Moved the user study table into the main paper for more quantitative comparison. \\n- Clarified the discussion on image-conditioned generation when introducing face number control to avoid confusion. \\n- Corrected the typo in \\\"vs.\\\"\\n\\n**Q2: Engineering tricks to scale up training.**\\n\\nThank you for your interest! We employ standard engineering techniques, including flash attention, gradient checkpointing, and `bfloat16` mixed-precision training to optimize GPU memory usage. With these optimizations, each batch requires approximately 20 GB of GPU memory, accommodating up to 4,000 faces (roughly 20,000 tokens per sequence). \\n\\nWe promise to release the code soon to facilitate better understanding and reproducibility of our method.\"}", "{\"title\": \"Response to Reviewer vyBt\", \"comment\": \"Thank you for your valuable time and insightful comments! We have tried to address your concerns in the updated manuscript and our rebuttal text:\\n\\n**Q1: Lack of details on the point cloud sampler.**\\n\\nThank you for the reminder! We simply sample 8192 random points from the surface of the input mesh. 
\\n\\nSince the primary use case for the point-conditioned model is retopology of an existing dense mesh, we introduce random surface perturbations by adding Gaussian noise. This enhances robustness to noisy or irregular surfaces, which are commonly encountered in meshes generated by the Marching Cubes algorithm. \\n\\nHowever, we do not apply augmentations related to point count or missing regions, making the model less effective in such scenarios. Designing augmentations based on the actual data distribution could enable the model to learn to predict with fewer points or handle missing regions.\\n\\n**Q2: Comparison with MeshGPT and VQ-VAE-based tokenizers.**\\n\\nIt is challenging to make a fair comparison with MeshGPT due to differences in training settings. For example, MeshGPT is trained on single-category data from ShapeNet (e.g., tables, chairs), while our model is trained on general objects from Objaverse. Additionally, MeshGPT quantizes vertices at a resolution of $128^3$, which results in significant information loss, while we use a finer resolution of $512^3$. \\n\\nPrevious studies have also explored the performance of VQ-VAE. For example, MeshAnything continues to use VQ-VAE but reports that avoiding compression (retaining 3 tokens per vertex) yields better results compared to MeshGPT's default setting of 2 tokens per vertex. Both MeshXL and MeshAnythingV2 have abandoned VQ-VAE entirely, as they found compression to negatively affect performance.\\n\\n**Q3: Results on image-to-mesh generation with varied image poses.**\\n\\nThank you for the suggestion! During training, we align the generated mesh with the pose of the input image, following practices from prior image-to-3D methods.\\n\\nMost test images, however, are 2D paintings collected from the web, making it challenging to alter the pose of the same object. 
Nonetheless, our project page includes 52 image-conditioned samples, showcasing more challenging poses, such as side or top views.\"}", "{\"title\": \"Response to Reviewer 7iBA\", \"comment\": \"Thank you for your valuable time and insightful comments! We have tried to address your concerns in the updated manuscript and our rebuttal text:\\n\\n**Q1: Unfair comparisons due to different training datasets.**\\n\\nWe acknowledge the difficulty in ensuring that the datasets used are identical. MeshAnything does not release the specific subset list of Objaverse and additionally incorporates data from ShapeNet. Moreover, our model's capability to support longer sequences enables us to utilize a greater number of samples from Objaverse, which plays a critical role in enhancing performance.\\n\\n**Q2: Is the training sequence unique for each mesh? How did you define the start of the sequence?**\\n\\nYes, similar to previous works, we sort the faces in an empirical $y\\\\text{-}z\\\\text{-}x$ order and always begin with the first half-edge of the first face. The EdgeBreaker algorithm is deterministic when the starting half-edge is fixed, which ensures that the sequence is unique for each mesh.\\n\\n**Q3: Comparison of inference speed.**\\n\\nThanks for the advice! Since MeshAnything and MeshAnythingV2 do not support face count control, we report only the average generation speed:\\n\\n| | MeshAnything | MeshAnythingV2 | Ours |\\n| ----------- | ------------ | -------------- | ------ |\\n| Tokens/s | 108.53 | 112.55 | 103.49 |\\n| Triangles/s | 12.06 | 28.03 | 24.44 |\\n\\nDue to the larger number of parameters in our model, the generation speed is slightly slower compared to MeshAnythingV2. However, our model demonstrates greater robustness and delivers better performance.\\n\\n**Q4: Details on user study.**\\n\\nWe have revised the manuscript to include additional details about the user study. 
Volunteers were asked to evaluate the results based on three criteria: geometry consistency with the input point cloud, visual appearance of the triangle faces, and overall mesh quality. Each volunteer was presented with mixed results from randomly selected methods. Additionally, we have provided more samples\\u201468 point-cloud-conditioned and 52 image-conditioned meshes\\u2014generated by our method on our project page for better comparisons.\\n\\n**Q5: Lack of quantitative evaluations like CD or ECD.**\\n\\nThanks for the advice! We believe that surface metrics may not effectively capture the aesthetic quality of the mesh, which is the primary objective of auto-regressive mesh generation. Additionally, on our more challenging test dataset, baseline methods often fail noticeably and are unable to produce complete mesh surfaces, further diminishing the relevance of surface metrics in evaluating performance.\\n\\n**Q6: Meaning of the dashed line in Figure 2.**\\n\\nThe dashed line indicates that our image-conditioned diffusion model is trained separately. Initially, we train the point-conditioned model (ArAE), as depicted above the dashed line, and subsequently use the learned fixed-length latent space to train the diffusion model, as shown below the dashed line.\\n\\n**Q7: Adherence of face count control.**\\n\\nSince our face count control is learned implicitly, the model cannot be strictly constrained to adhere to the specified range. We employ four learnable tokens to control the face count: unconditional, $(0, 1000)$, $[1000, 2000)$, $[2000, 4000)$. Our observations indicate that the control token $[1000, 2000)$ provides the most robust face count control and generation. In contrast, the other tokens exhibit less robustness and may also be influenced by the complexity of the input shape.\"}", "{\"title\": \"Response to Reviewer Nd3B\", \"comment\": \"We sincerely appreciate your great efforts in reviewing this paper. 
Your constructive advice and valuable comments really help improve our paper. Considering the approaching deadline, please let us know if you have follow-up concerns. We sincerely hope you can consider our reply in your assessment, and we can further address unclear explanations and any remaining concerns.\\n\\nOnce more, we appreciate the time and effort you've dedicated to our paper.\"}", "{\"title\": \"Response to authors\", \"comment\": \"Thank you, authors, for updating the manuscript and for the response.\\n\\nI noticed that the third point in the weaknesses I raised has been entirely ignored in the reply. \\n\\nAdditionally, the explanation about dataset licensing issues remains unclear. Many other open-source projects do not seem to encounter problems when using Objaverse-XL. Could the authors please clarify this matter in more detail?\\nFurthermore, I am skeptical of the argument that the test samples provided on the project page can adequately support comparisons. Such an approach could potentially involve cherry-picked examples, leading to unfair comparisons. A more rigorous and transparent evaluation would be necessary to ensure fairness.\\n\\nAddressing these points thoroughly would not only strengthen the manuscript but also provide greater transparency and confidence in the results.\"}", "{\"title\": \"Response to Reviewer 8Uu4\", \"comment\": \"Thanks for the reply!\\n\\nWe apologize for the missing point in the presentation. To make it easier for readers to find information, we aim to keep all implementation details in a single location. However, due to page limits of the main paper, we are unable to include the full content without omitting critical information. Instead, we will include additional sentences in the final version to direct interested readers to the supplementary materials.\\n\\nRegarding the dataset issue, this paper is not solely an open-source project. 
To avoid anonymity violations and to the best of our knowledge, there are very few precedents of major corporations with formal legal departments releasing models trained on datasets with contentious licenses such as Objaverse-XL. Nevertheless, we are actively working to make the checkpoints available as soon as possible. Additionally, we have provided all necessary details, including final training losses, to ensure reproducibility.\\n\\nLastly, we agree that a more extensive quantitative experiment is important. We conducted a new quantitative comparison experiment to evaluate the Chamfer Distance (CD) and Hausdorff Distance (HD) between the generated meshes and the input point clouds. Due to the lack of widely recognized evaluation datasets, we tried to construct two test datasets from different sources:\\n\\n- **AI-Mesh**: This dataset comprises 30 meshes generated by other image-to-3D models, extracted using the Marching Cubes algorithm.\\n- **Artist-Mesh**: This dataset consists of 100 artistic meshes obtained from [basemesh](https://www.thebasemesh.com/), which were not seen during the training process.\\n\\nAll meshes are normalized into $[-1, 1]^3$. The summarized evaluation scores are presented below:\\n\\n| | | MeshAnything | MeshAnythingV2 | Ours |\\n| ----------- | -------------- | ------------ | -------------- | --------- |\\n| AI-Mesh | CD$\\\\downarrow$ | 0.072 | 0.081 | **0.034** |\\n| | HD$\\\\downarrow$ | 0.170 | 0.241 | **0.079** |\\n| Artist-Mesh | CD$\\\\downarrow$ | 0.066 | 0.050 | **0.036** |\\n| | HD$\\\\downarrow$ | 0.195 | 0.164 | **0.112** |\", \"we_observed_the_following\": [\"Our model achieves the best performance on surface metrics, even when using only point clouds as input. 
In contrast, the MeshAnything series further leverage surface normals.\", \"MeshAnythingV2 performs significantly worse on the more challenging AI-Mesh dataset, often failing to generate complete surfaces in these cases (consistent with the results shown in Figure 5).\", \"As the deadline for editing the PDF is approaching, we will include these experiments in the final version of the paper.\"]}", "{\"title\": \"Response to Reviewer 3xAw\", \"comment\": \"We sincerely appreciate your great efforts in reviewing this paper. Your constructive advice and valuable comments really help improve our paper. Considering the approaching deadline, please let us know if you have follow-up concerns. We sincerely hope you can consider our reply in your assessment, and we can further address unclear explanations and remaining concerns if any.\\n\\nOnce more, we appreciate the time and effort you've dedicated to our paper.\"}", "{\"metareview\": \"The paper introduces EdgeRunner, an auto-regressive auto-encoder with a novel mesh tokenization algorithm for high-quality artistic mesh generation, demonstrating significant advancements in mesh compression and fixed-length latent code learning. Reviewers praised the paper's technical contributions, including the innovative use of EdgeBreaker-inspired tokenization, strong experimental results, and the ability to generalize to point clouds and single-view image inputs, producing visually appealing meshes. 
However, concerns were raised about limited quantitative evaluations, lack of comparisons to alternative methods like CLAY, the relatively low face count (4,000 faces), and the need for clearer explanations of training settings and dataset usage in the main text.\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal period, reviewers raised concerns about the lack of quantitative evaluations (e.g., Chamfer Distance), fairness of comparisons with alternative methods, insufficient experimental details, and limited face count for real-world applications. The authors addressed these by conducting new experiments to evaluate Chamfer and Hausdorff Distances on two test datasets, clarifying dataset usage, inference speed, and robustness, while also revising the manuscript to include more details on user studies and training processes. Despite some lingering concerns about dataset licensing and comparisons, reviewers acknowledged the improvements and found most issues satisfactorily resolved.\"}" ] }
7zwIEbSTDy
PPT: Patch Order Do Matters In Time Series Pretext Task
[ "Jaeho Kim", "Kwangryeol Park", "Sukmin Yun", "Seulki Lee" ]
Recently, patch-based models have been widely discussed in time series analysis. However, existing pretext tasks for patch-based learning, such as masking, may not capture essential time and channel-wise patch interdependencies in time series data, presumed to result in subpar model performance. In this work, we introduce *Patch order-aware Pretext Task (PPT)*, a new self-supervised patch order learning pretext task for time series classification. PPT exploits the intrinsic sequential order information among patches across time and channel dimensions of time series data, where model training is aided by channel-wise patch permutations. The permutation disrupts patch order consistency across time and channel dimensions with controlled intensity to provide supervisory signals for learning time series order characteristics. To this end, we propose two patch order-aware learning methods: patch order consistency learning, which quantifies patch order correctness, and contrastive learning, which distinguishes weakly permuted patch sequences from strongly permuted ones. With patch order learning, we observe enhanced model performance, e.g., improving up to 7% accuracy for the supervised cardiogram task and outperforming mask-based learning by 5% in the self-supervised human activity recognition task. We also propose ACF-CoS, an evaluation metric that measures the *importance of orderness* for time series datasets, which enables pre-examination of the efficacy of PPT in model training.
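The channel-wise patch permutation with controlled intensity described in the abstract above could be sketched roughly as follows. This is an illustrative sketch only — the function and parameter names (`permute_patches`, `ratio`) are made up here, and the paper's actual weak/strong permutation scheme may differ:

```python
import numpy as np

def permute_patches(x, patch_len, ratio, rng):
    """Shuffle a controlled fraction of patches independently per channel.

    x: (channels, length) array; ratio: fraction of patches to permute
    per channel (small ratio -> weak permutation, large -> strong).
    """
    d, t = x.shape
    n = t // patch_len
    patches = x[:, : n * patch_len].reshape(d, n, patch_len).copy()
    k = max(2, int(round(ratio * n)))
    for c in range(d):
        chosen = rng.choice(n, size=k, replace=False)
        # Re-place the chosen patches in a shuffled order within the channel
        patches[c, chosen] = patches[c, rng.permutation(chosen)]
    return patches.reshape(d, -1)

rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 8 * np.pi, 128))[None, :].repeat(3, axis=0)
weak = permute_patches(x, patch_len=16, ratio=0.25, rng=rng)   # mild disruption
strong = permute_patches(x, patch_len=16, ratio=1.0, rng=rng)  # heavy disruption
```

Since only patch positions move, each channel keeps exactly the same set of values; only their order — and hence the time- and channel-wise alignment the pretext task supervises — changes.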
[ "Time Series Classification", "Self-Supervised Learning", "Pretext Task" ]
Accept (Poster)
https://openreview.net/pdf?id=7zwIEbSTDy
https://openreview.net/forum?id=7zwIEbSTDy
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yqOT50mOFw", "whC1o1Ue4I", "vaqJGi3s5m", "vYC7DNNCq2", "v32kFoaGEm", "twvkMW51hf", "tvj97gkisE", "q3xt7BobpP", "otZo9Vi5Ps", "orwANoZIbT", "nMFV0qE9YC", "mUixAU6sq9", "mP0ZmIaz61", "h2NVqHwGNF", "gVANAz7U0k", "fT7Lf92NOM", "evFxysgKe0", "dtRgo1Z5I7", "bg5eGmOZ2W", "a8Xwot4eK7", "Wug71j2yiu", "VPIyA0f1PJ", "TGEgUUraPj", "TEdfPzi1WR", "SMM3UhcHaV", "N2apKwmrZP", "LyQknbolKW", "Gusv4W17IG", "EUJ6gT21wM", "DsmLWH9QFR", "8tofriEqSz", "814MxXIKjb", "7fSMFjFYF4", "7KNHYAPPY7", "7EsUtGWjLO", "6T4hARDHSK", "4OjjKNEWZw", "0TGN6AoVCP", "0PrYUck7Wv" ], "note_type": [ "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review" ], "note_created": [ 1731990223844, 1737523494420, 1731990181435, 1731989378564, 1732080366921, 1732669556824, 1731989437600, 1731989584804, 1731989244329, 1731989334076, 1732723438128, 1732759275780, 1731989278246, 1732682311075, 1730599375748, 1731989929171, 1732610039368, 1730646209334, 1731989521627, 1731989858568, 1732087157985, 1731989490902, 1732004126225, 1732669479381, 1732495916613, 1731989773095, 1731990268261, 1731989653076, 1732669410699, 1731989181980, 1731989971640, 1734664147981, 1733186812454, 1731990094954, 1730650325807, 1731990045943, 1731989709403, 1731992853957, 1730043735042 ], "note_signatures": [ [ 
"ICLR.cc/2025/Conference/Submission2262/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2262/Authors" ], [ "ICLR.cc/2025/Conference/Submission2262/Authors" ], [ "ICLR.cc/2025/Conference/Submission2262/Reviewer_i3b6" ], [ "ICLR.cc/2025/Conference/Submission2262/Authors" ], [ "ICLR.cc/2025/Conference/Submission2262/Authors" ], [ "ICLR.cc/2025/Conference/Submission2262/Authors" ], [ "ICLR.cc/2025/Conference/Submission2262/Authors" ], [ "ICLR.cc/2025/Conference/Submission2262/Authors" ], [ "ICLR.cc/2025/Conference/Submission2262/Reviewer_GFNC" ], [ "ICLR.cc/2025/Conference/Submission2262/Authors" ], [ "ICLR.cc/2025/Conference/Submission2262/Authors" ], [ "ICLR.cc/2025/Conference/Submission2262/Reviewer_Z8Ek" ], [ "ICLR.cc/2025/Conference/Submission2262/Reviewer_Z8Ek" ], [ "ICLR.cc/2025/Conference/Submission2262/Authors" ], [ "ICLR.cc/2025/Conference/Submission2262/Area_Chair_sTyh" ], [ "ICLR.cc/2025/Conference/Submission2262/Reviewer_o6fk" ], [ "ICLR.cc/2025/Conference/Submission2262/Authors" ], [ "ICLR.cc/2025/Conference/Submission2262/Authors" ], [ "ICLR.cc/2025/Conference/Submission2262/Authors" ], [ "ICLR.cc/2025/Conference/Submission2262/Authors" ], [ "ICLR.cc/2025/Conference/Submission2262/Authors" ], [ "ICLR.cc/2025/Conference/Submission2262/Authors" ], [ "ICLR.cc/2025/Conference/Submission2262/Authors" ], [ "ICLR.cc/2025/Conference/Submission2262/Authors" ], [ "ICLR.cc/2025/Conference/Submission2262/Authors" ], [ "ICLR.cc/2025/Conference/Submission2262/Authors" ], [ "ICLR.cc/2025/Conference/Submission2262/Authors" ], [ "ICLR.cc/2025/Conference/Submission2262/Authors" ], [ "ICLR.cc/2025/Conference/Submission2262/Authors" ], [ "ICLR.cc/2025/Conference/Submission2262/Area_Chair_sTyh" ], [ "ICLR.cc/2025/Conference/Submission2262/Authors" ], [ "ICLR.cc/2025/Conference/Submission2262/Authors" ], [ "ICLR.cc/2025/Conference/Submission2262/Reviewer_i3b6" ], [ "ICLR.cc/2025/Conference/Submission2262/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission2262/Authors" ], [ "ICLR.cc/2025/Conference/Submission2262/Reviewer_i3b6" ], [ "ICLR.cc/2025/Conference/Submission2262/Reviewer_GFNC" ] ], "structured_content_str": [ "{\"comment\": \"**Q4. The gains are not convincing since many other baselines outperform the proposed method in several metrics.**\\n\\nThank you for this insightful feedback. We acknowledge that some of the results of PPT have been outperformed by other baselines in metrics like Recall for Self-Supervised Linear probing and accuracy in Semi-Supervised Learning. However, we would like to cordially request the reviewer to take a closer look at the performance gain obtained by incorporating PPT into PatchTST and PITS.\\n\\nBoth PatchTST and PITS have utilized mask-and-reconstruct and contrastive learning pretext tasks in their original implementations. By simply switching the pretext task to PPT, we observe a strong performance gain. For instance, in the EMO task, the F1 Score improves from 45% to 54% (+9%) for PatchTST, and Accuracy improves from 69% to 75% (+6%) for PITS. For the GL Task, the accuracy improves from 88% to 92% (+4%) for PatchTST and 87% to 92% (+5%) for PITS. In semi-supervised training, the gap is even larger, where we observe that PITS improves from 72% to 83% (+11%) in F1 Score in the 1% limited training data scenario.\\n\\nThese improvements demonstrate the effectiveness of PPT as a pretext task in enhancing the performance of existing time series representation learning methods in tasks that exhibit order information such as human activity recognition, ECG, etc., (We also contribute ACF-COS, a novel metric that can pre-assess the utility of PPT).\\n\\nWhile we acknowledge that our work may not be the state-of-the-art (SOTA) in all metrics across all tasks, we believe that PPT provides a fresh perspective in the direction of representation learning for time series analysis. 
To the best of our knowledge, we are the first to self-supervise models based on the order information of patches in time series data. Given the increasing adoption of patch-based strategies in time series models, we believe that our contribution extends beyond metric performance and offers valuable insights to the time series community.\\n\\nIn summary, while PPT may not outperform all baselines in every metric, the significant performance gains obtained by incorporating PPT into PatchTST and PITS demonstrate its effectiveness as a pretext task for time series representation learning.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"**Q3. Limited evaluations. Experiments are all one-to-one dataset. The authors do not consider one-to-many, or other settings to test whether the representations are generalizable to cross-dataset settings.**\\n\\nWe thank the reviewer for the valuable comments and the suggestion for further experiments. While we acknowledge the importance of assessing the transferability of representations in cross-datasets, we believe that it is not the primary focus of our current work.\\n\\nOur research is focused on introducing a novel time series specific pretext task, PPT, that leverages the order information of patches in time series data to enhance the performance of existing time series representation learning methods. The main objective we wanted to convey throughout our paper was the effectiveness of PPT in capturing order-aware and meaningful representations within a single dataset, specifically for tasks where order information is crucial.\\n\\nWe also respectfully disagree that our paper has limited evaluations. We have conducted an extensive number of comparisons against several SOTA baselines in self-supervised linear probing, semi-supervised training, and supervised training, demonstrating the possibility and effectiveness of our newly proposed pretext task. 
We have also conducted extensive analysis on our patch shuffling strategies, and importance visualization by patches. We also conducted multiple experiments to validate our ACF-COS metric. We have also conducted additional experiments with UniTS, as the reviewer suggested. \\n\\nWhile cross-dataset evaluations can offer additional insights into the generalizability of the learned representations, we argue that they are not essential to validate the effectiveness of PPT in capturing order information. The primary goal of our work is to introduce a novel pretext task that can be integrated into existing patch-based time series representation learning methods to enhance their performance, rather than to develop a universal representation that can be transferred across datasets.\\n\\nMoreover, conducting cross-dataset evaluations would require careful consideration of the compatibility between the source and target datasets, with additional technical concerns such as time length, sampling frequency, etc. We believe that this is currently out of our paper\\u2019s scope, and we hope to explore this in our future research. \\n\\nIn summary, while we appreciate the reviewer's suggestion to conduct cross-dataset evaluations, we believe that **our paper focuses on demonstrating the effectiveness of PPT within a single dataset, which is a crucial first step in validating the proposed pretext task.** We have conducted extensive one-to-one dataset experiments to evaluate the performance of PPT in comparison to other methods, and conducted experiments on UniTS and clustering. We believe that these experiments provide a comprehensive assessment of PPT\\u2019s ability to capture meaningful representations and improve performance in time series classification tasks. Nevertheless, we acknowledge the importance of exploring the generalizability of the learned representations and plan to address this aspect in our future research.\"}", "{\"comment\": \"**Q3. 
Can the authors provide the additional speed and memory consumption required by PPT.**\\n\\nA3. We appreciate the reviewer's question regarding the additional speed and memory consumption required by PPT. We have conducted further experiments on the PTB task (Channel size of 15, Batch Size of 32, and 30 Patches for each channel) for self-supervised linear probing and have compared our method against mask-based and contrastive-based approaches. The detailed results of training time and memory consumption are reported in **Appendix F.2.**\\n\\nIn summary, PPT employs additional auto-regressive models for constructing context vectors in the training process, slightly increasing the speed and memory consumption compared to previous pretext task methods. However, in the inference stage, these auto-regressive models can be discarded, and only the backbone model is used. In terms of time complexity, mask-based approaches took 70.3 seconds for the whole training, contrastive-based approaches took 92.9 seconds, and PPT took 139.5 seconds. This increased training time is due to the additional computations required for auto-regressive models constructing context vectors. Regarding memory complexity, the mask-based approach had a total of 79.7K parameters, the contrastive-based approach had 79.3K parameters, and PPT had 212K parameters. The higher number of parameters in PPT is primarily attributed to the use of auto-regressive models. \\n\\nDespite the increased computational overhead, we believe that the improved performance achieved by PPT, as demonstrated in our experimental results, justifies the use of this technique. We acknowledge that the current implementation of PPT has room for optimization, and we propose that exploring more efficient model designs for constructing context vectors (e.g., reducing representation dimensions, and re-using the auto-regressive models) would mitigate the computation burden.\"}", "{\"comment\": \"Thank you for your responses. 
From the experiment results, I think MiniRocket is an extremely lightweight (can run on CPU), extremely efficient (nearly 4+ times faster than PatchTST+PPT), and very competitive time series classification algorithm. In my opinion, currently it is difficult to find a method that can compete with MiniRocket in terms of both performance and efficiency for time series classification task, which limits the application value of many so-called SOTA deep learning methods. However, given that PPT reveals time series characteristics from a novel perspective to enhance the performance of deep models, I tend to keep my score. I suggest that PPT can be expanded to more time series tasks, learn more general time series representations, and improve the actual application value in the real world.\"}", "{\"comment\": \"Dear Reviewer GFNC,\\n\\n\\nWe appreciate the reviewer for the thorough reviews provided for our paper. As we approach the final days of the discussion period, we are writing regarding our work PPT. **We have devoted considerable time and effort to address the reviewer\\u2019s valuable feedback, conducting multiple additional experiments and providing comprehensive explanations** to ensure our novel contributions.\\n\\nIn response to the reviewer\\u2019s suggestion, we have:\\n1) Implemented comparisons with the latest state-of-the-art method (UniTS; NeurIPS 24)\\n2) Performed clustering experiments to evaluate representation performance\\n3) Addressed potential misunderstandings\\n\\n\\n\\n\\nOur work introduces a novel contribution to patch-based time series representation learning, where we believe we are the first to introduce order-aware self-supervision through our carefully designed patch-based pretext task. The results show that our approach is effective for tasks exhibiting both the temporal and channel order information. 
We have also provided additional metrics \\u201cACF-CoS\\u201d to identify such datasets that exhibit order information, enabling us to pre-assess them prior to using our pretext task. We also note that PPT is compatible with many of the recent patch-based architectures in time series including PatchTST (Transformer), PITS (Linear), and even recent time series language models like GPT4TS.\\n\\n\\n\\n\\nWe want to emphasize that our paper's primary **contribution lies in contributing to the time series community that order-based pretext tasks can be effectively applied to time series data, with promising gains**. Rather than positioning ourselves as achieving state-of-the-art performance, our goal is to open new research directions in this understudied area.\\n\\n\\n\\n\\nWe thank the reviewer for providing helpful suggestions for improving our paper, and **we are eagerly awaiting the response to ensure that we can address any remaining concerns in our final rebuttal.**\\n\\n\\n\\n\\nBest regards,\\n\\n\\nAuthors\"}", "{\"comment\": \"**Q4. \\\"Decrease\\\" in Tab. 4 caption should be \\\"Increase\\\"**\\n\\nA4. We sincerely appreciate the reviewer\\u2019s detailed review of our manuscript. This was indeed a typo that we have now corrected:\", \"original\": \"\\\"Here, a decrease in ACF-COS leads to an improved likelihood of PPT being beneficial.\\\"\", \"revised\": \"\\\"Here, an **increase** in ACF-COS leads to an improved likelihood of PPT being beneficial.\\\"\\n\\nThank you again for your valuable feedback that has helped improve the quality of our paper! We truly appreciate the time and effort spent by the reviewer. If there are additional details or explanations that we can provide, please kindly let us know what we can do more to update our scores.\"}", "{\"comment\": \"**Q2. 
Does the ACF-COS metric demonstrate robust and broad applicability in measuring the orderness of time series data? Specifically, can ACF-COS accurately differentiate between various types of time series tasks, such as high-noise sequences, non-order-dependent tasks, and strongly order-dependent tasks? Additionally, how can we ensure the stability of ACF-COS across different patch sizes, sampling frequencies, or data distributions to reliably evaluate the suitability of PPT for diverse time series datasets?**\\n\\nThank you for providing us the opportunity to further explain our proposed metric ACF-COS. As the reviewer mentioned, PPT was originally designed for time series classification tasks that exhibit order information. PPT exploits the time and channel-wise patch order dependency and supervises this information for model training. However, not all time series exhibit order information, and some tasks may have class labels independent of these temporal or channel-order dependencies. In such tasks, PPT may not be suitable as a pretext task.\\n\\nTo pre-assess such tasks prior to deploying PPT as a pretext task, we proposed ACF-COS, where a high ACF-COS score indicates that order is more important, and a low ACF-COS score indicates that order is less important in the task. ACF-COS is calculated by constructing the autocorrelation vectors of the original and patch-permuted sets of the temporal sequence. A high cosine similarity between the two indicates that permuting the patches barely changes the autocorrelation structure, i.e., order carries little information, while a low cosine similarity indicates that the permutation disrupted the order information, i.e., order is important for the task.\\n\\nFrom our extensive experiments conducted on 24 different classification datasets, we showed that ACF-COS is robust in measuring the orderness of time series. 
These datasets consist of a wide range of tasks where order is not important (White Noise; ACF-COS=0.001), less important (GestureMidAirD1; ACF-COS=0.186), mid important (Cricket; ACF-COS=0.418) and heavily important (Step Function; ACF-COS=0.902). The ACF-COS well identifies the order characteristics in diverse time series tasks (both real-world and synthetic). We also fitted two least square methods indicating the correlation between the ACF-COS score and the performance gain obtained from PPT is positive with a significant p-value. \\n\\nTo mitigate the effect of patch sizes in permuting the sequence, we have reported the results on 3 different patch sizes for each time series dataset and averaged their results for a robust metric. As such, ACF-COS is stable and reliable in evaluating the suitability of PPT for time series tasks.\\n\\nIn summary, our work proposes ways to supervise order information for time series classification tasks while also providing a pre-assessment tool for the effectiveness of PPT, making it a comprehensive contribution to the field. We acknowledge that further research is needed to explore the stability of ACF-COS across different sampling frequencies and data distributions, and we plan to address these aspects in our future work.\"}", "{\"comment\": \"**W1. Consider Rocket and MiniRocket as baselines in our work.**\\n\\nA. We thank the reviewer for providing insightful suggestions and highly agree with the reviewer\\u2019s opinion that incorporating additional baselines could make our experiment more comprehensive. As such, we have performed MiniRocket (successor of Rocket) and UniTS (the most recent SOTA work accepted to NeurIPS 2024 as suggested by reviewer GFNC) experiments with self-supervised linear probing on the PTB tasks, where we report the results below. MiniRocket shows competitive performance compared to SOTA methods, but is still outperformed in key metrics. 
\\n\\n| Model | Accuracy | F1 score | AUROC | AUPRC | \\n|-------|----------|-----------|--------|--------|\\nMiniRocket | 85.52\\u00b12.17 | 90.19\\u00b11.67 | 88.55\\u00b12.42 | 93.74\\u00b12.94 | \\n| UniTS* | 84.20\\u00b11.01 | 89.57\\u00b10.60 | 88.98\\u00b11.35 | 94.81\\u00b10.84 | \\n| PITS (+PPT) | **86.48\\u00b10.40** | **91.24\\u00b10.26** | **91.83\\u00b11.36** | **96.26\\u00b10.85** |\"}", "{\"comment\": \"**Q2. Considering the need to store additional augmented samples and the need for additional calculation.**\\n\\nA2. We thank the reviewer for raising this point. Here, we respectfully highlight a possible misunderstanding regarding the need to store additional augmented samples for PPT training. PPT **does not require explicit storage** of the augmented samples; instead, it only stores the shuffled patch indexes. During each run, these indexes are called and used to re-order the patches accordingly (In practice, we utilize *torch.gather()* to re-order the patches).\\nThis approach offers two key advantages. First, we can reuse the same shuffled indexes for various samples, enabling us to create diverse augmented shuffled sets. Second, it drastically reduces the storage costs in directly storing raw augmented samples and minimizes the overhead of shuffling patches on the fly for every batch of samples, making PPT an effective technique.\"}", "{\"title\": \"Response to Rebuttals\", \"comment\": \"I thank the authors for their rebuttals and discussion points. The rebuttals unfortunately have not convinced me about the limited evaluation setups (no one-to-many considered which makes it unfair since other prior works do have those), limited gains, lack of pretraining with multiple datasets, and comparisons with SoTA baselines. In fact, it has been shown in the UniTS paper that UniTS outperforms PatchTST, and other baselines, and while your work shows PatchTST can be enhanced in some settings with order aware pretext task, it's not clear what is best. 
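The index-based re-ordering that the Q2 response above attributes to `torch.gather()` could look roughly like this. This is a minimal sketch under the stated idea of storing only shuffled patch indexes rather than augmented samples; the tensor shapes and variable names are assumptions, not the authors' code:

```python
import torch

# Assumed layout: batch B, channels D, patches N, patch length P
B, D, N, P = 4, 3, 8, 16
patches = torch.randn(B, D, N, P)

# Store only the shuffled patch indexes; the same index set can be
# reused across samples to build many augmented (permuted) views.
perm = torch.randperm(N)
index = perm.view(1, 1, N, 1).expand(B, D, N, P)

# Re-order patches on the fly instead of materializing augmented data
shuffled = torch.gather(patches, dim=2, index=index)

assert torch.equal(shuffled[:, :, 0], patches[:, :, perm[0]])
```

Because `gather` only reads through the index tensor, no copy of the augmented dataset is ever stored — each permuted view costs one small index tensor plus an on-the-fly gather per batch.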
Overall, in my opinion, this paper does not move the state of the art too far and is incremental in its current form. Therefore, I choose to respectfully keep my current score. Thanks.\"}", "{\"comment\": \"Dear **Reviewer Z8Ek**\\n\\nThank you for the constructive review and for helping us improve our work! We are happy to see that we have addressed the reviewer's concerns and have kept the rating towards acceptance. \\n\\nThank you!\\n\\nAuthors.\"}", "{\"comment\": \"**Q1. Time Dimension patch order is intuitive, but channel order is not easy to understand.**\\n\\nA1. We thank the reviewer for bringing this to our attention. While the time dimension patch order focuses on the temporal order dependency within each channel, the channel order captures cross-channel patterns that occur simultaneously at each time point (patch index). The channel order captures how different sensors (channels) appear together across channel dimensions at the same time point, allowing the model to learn cross-channel relationships. For instance, as shown in **Figure 1: Motivation**, we observe peaks occurring in all sensors (accelerometer and force sensors) at the same time point. However, when we permute the top two patches of the accelerometer sensor, these peaks do not co-align at the same time points as they did before, effectively disrupting the original channel order. 
As such, PPT utilizes this disruption in order information to self-supervise the model.\\n\\nTo further enhance our explanation, we have updated the main manuscript by adding the following explanation in lines 189-192:\\n\\n```markdown\\nWhile time order $\\\\boldP_{(:,d)}$ captures the temporal evolution within individual channels, the channel order sequence $\\\\boldP_{(i,:)}$ represents a snapshot of patterns that appear together across all channels at a single time point, enabling the model to learn both temporal dynamics and concurrent cross-channel relationships.\\n```\\nPlease kindly let us know if the explanation can be further improved! Thank you once again.\"}", "{\"comment\": \"Dear authors,\\n\\nThank you for your revision. These address my concerns. However, I think the current rating is appropriate, so it will remain unchanged.\"}", "{\"summary\": \"This paper introduces a new pretext task designed for patch-based time series models, focusing on the order of time series patches. The Patch order-aware Pretext Task (PPT) is aimed at improving time series classification performance by considering the sequential and channel-wise order of data patches. This task leverages controlled patch permutations to produce supervisory signals for training models to learn inherent order dependencies. PPT incorporates both patch order consistency learning and contrastive learning to handle weakly and strongly permuted patch sequences, achieving better model performance in various time series tasks.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": [\"Novelty in time series pretext task: PPT introduces a novel pretext task that leverages patch order consistency learning and contrastive learning to improve time series classification performance. 
This is a significant contribution to the field of time series representation learning.\", \"Flexibility and adaptability: By introducing both consistency and contrastive learning, PPT can be incorporated into various task settings, either self-supervised or supervised, making it a versatile method for time series classification.\", \"New metric for evaluating the importance of orderness: The paper also introduces ACF-CoS, which measures the importance of orderness in time series data, providing a new metric for evaluating the effectiveness of PPT and potentially other time series models.\", \"Strong empirical results: The paper demonstrates the effectiveness of PPT in improving time series classification performance across various datasets, showing significant improvements over existing methods.\", \"Clear and well-structured presentation: The paper is well-written and structured, especially the explanation of the PPT task and its illustration with examples. The clarity of the presentation enhances the understanding of the proposed method.\"], \"weaknesses\": [\"Unclear details on consistency loss: The paper do not provide more detail into how the context vectors are generated other than stating that they are generated by an auto-regressive model. More details into the architecture and training of this model would be helpful for better understanding the consistency loss.\", \"Lack of discussion on hyperparameter $\\\\lambda$: The paper does not provide a detailed study or discussion on the hyperparameters $\\\\lambda$, which are to be \\\"learnable parameters\\\". In the paper they cited that these hyperparameters are calculated from the prediction of a separate model, but more details on how this is done would be beneficial. It would also be helpful to provide ablation studies on the effect of such dynamic $\\\\lambda$ on the performance of the model compared to fixed $\\\\lambda$.\"], \"questions\": \"1. 
Could you provide more details on how the context vectors are generated by the auto-regressive model for the consistency loss? How is the architecture of this model designed, and how is it trained?\\n2. Could you provide more details on how the hyperparameters $\\\\lambda$ are adjusted during training? What are the final results of the learned $\\\\lambda$ values at the end of training, and how do they affect the performance of the model?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewer for providing constructive feedback and insights to improve our work. As the reviewer mentioned, we have thoroughly detailed the strengths and weaknesses of our order-based supervisions for time series throughout our paper. We have also proposed a novel metric ACF-COS, which helps us overcome our weaknesses by pre-assessing the order-awareness in time series tasks. In response to the reviewer's feedback, we have made the following improvements in our work:\\n\\n1. We conducted extensive baseline self-supervised comparisons with UniTS on three different datasets, showing that PPT outperforms UniTS. \\n\\n2. We conducted additional time series clustering task, where we show that PPT outperforms in NMI, and ARI metrics where we compare the clustering performance with respect to the ground truths. Accordingly, we have updated our manuscript in **Appendix I. Time Series Clustering** to incorporate our results.\\n\\nHere, we have prepared a detailed response to the reviewer's questions. We carefully address each of the comments below.\"}", "{\"title\": \"Encouragement to Actively Participate in the Discussion Phase\", \"comment\": \"Dear Reviewers,\\n\\nThank you for your valuable contributions to the review process so far. As we enter the discussion phase, I encourage you to actively engage with the authors and your fellow reviewers. 
This is a critical opportunity to clarify any open questions, address potential misunderstandings, and ensure that all perspectives are thoroughly considered.\\n\\nYour thoughtful input during this stage is greatly appreciated and is essential for maintaining the rigor and fairness of the review process.\\n\\nThank you for your efforts and dedication.\"}", "{\"summary\": \"The paper introduces a novel Patch Order-aware Pretext Task (PPT) for time series classification, focusing on the importance of patch order in time series data. Traditional patch-based models often overlook the critical order dependencies in time and channel dimensions, which can lead to suboptimal performance. PPT addresses this gap by generating supervisory signals through controlled channel-wise patch permutations, allowing models to learn time series' intrinsic sequential characteristics. The main contributions of the paper are as follows: (1) Patch Order-aware Pretext Task (PPT): This is the first pretext task specifically designed to enhance order-awareness in patch-based time series models. PPT leverages controlled channel-wise permutations to provide structural supervisory signals in both time and channel dimensions. (2) Two Learning Methods: Patch Order Consistency Learning evaluates the correctness of patch order by distinguishing between original and strongly permuted sequences; Contrastive Learning differentiates between weakly and strongly permuted sequences, helping the model capture finer order distinctions. (3) ACF-COS Metric: The paper proposes ACF-COS (Autocorrelation Function with Cosine Similarity), a metric to quantify the \\\"orderness\\\" in time series data, which serves as a pre-evaluation tool to determine if PPT would be beneficial for a particular dataset. 
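An editorial aside on the ACF-CoS metric just described: the paper defines the actual computation, but the general idea of an autocorrelation-plus-cosine-similarity orderness check can be sketched as below. The function names, lag range, and averaging over random shuffles are assumptions for illustration, not the authors' formulation.

```python
import numpy as np

def acf(x, max_lag):
    """Sample autocorrelation of a 1-D series up to max_lag."""
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / denom for k in range(1, max_lag + 1)])

def acf_cos(x, max_lag=10, n_perm=20, seed=0):
    """Mean cosine similarity between the ACF of the original series and the
    ACFs of shuffled copies; values near 1 mean shuffling barely changes the
    autocorrelation structure of this series."""
    rng = np.random.default_rng(seed)
    a = acf(x, max_lag)
    sims = []
    for _ in range(n_perm):
        b = acf(rng.permutation(x), max_lag)
        sims.append(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    return float(np.mean(sims))

t = np.linspace(0, 8 * np.pi, 256)
ordered = np.sin(t)                                 # strongly order-dependent signal
noise = np.random.default_rng(1).normal(size=256)   # order-free signal
```

For a periodic signal like `ordered`, shuffling typically destroys the autocorrelation structure, which is the kind of signal an orderness measure is meant to flag.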
(4) Performance Gains: PPT demonstrates significant performance improvements on various time series tasks, especially in scenarios where order-awareness is crucial, such as ECG and human activity recognition (HAR) tasks, with performance boosts of up to 7% and 5% respectively. By integrating order-awareness through PPT, the approach enhances the ability of time series models to utilize structural dependencies across patches, setting a new direction for patch-based time series analysis.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"**Introduction of the Patch Order-aware Pretext Task (PPT)**: This paper presents the first order-aware pretext task specifically designed for patch-based time series models. Unlike traditional tasks such as masking and reconstruction, PPT captures essential order dependencies across time and channel dimensions, a distinctive feature of time series data. By leveraging this natural inductive bias of sequence ordering, PPT provides effective self-supervision signals that help the model better understand structural characteristics in time series classification tasks.\\n\\n**Implementation of Order Consistency and Contrastive Learning for Pretraining**: To achieve order-aware learning within PPT, the authors designed two learning strategies: order consistency learning and contrastive learning. Order consistency learning distinguishes between the original sequence and strongly perturbed sequences, validating the model's understanding of patch order correctness. Contrastive learning differentiates between weakly and strongly perturbed sequences, allowing the model to capture finer distinctions in order dependencies. 
Together, these learning approaches enhance the model's sensitivity to sequence order, helping it capture the critical ordering information embedded within time series data and improving performance on order-dependent tasks.\\n\\n**Significant Performance Improvement**: PPT demonstrates notable improvements across multiple time series tasks, particularly in scenarios where order information is critical. Experimental results show that PPT enhances accuracy by up to 7% in ECG classification and by 5% in human activity recognition (HAR), underscoring its effectiveness. By integrating PPT in both self-supervised and supervised settings, models are able to better leverage the structural dependencies in time series data, achieving superior classification performance. This advancement not only showcases PPT's value in practical applications but also highlights the potential of order-aware tasks in time series analysis.\", \"weaknesses\": \"**Limited Applicability to Non-Order-Dependent Tasks**: PPT is specifically designed to exploit order dependencies in time series data, which limits its applicability to tasks where sequence order is critical. For datasets or tasks that do not rely heavily on the sequential order of data (e.g., tasks with significant noise or random sequences), PPT may add unnecessary complexity without meaningful performance gains. The paper itself notes that PPT does not perform well in such scenarios, and a robust evaluation metric is required to assess its suitability beforehand.\\n\\n**Dependence on Hyperparameter Tuning for ACF-COS and Permutation Strength**: The effectiveness of PPT depends on the selection of appropriate hyperparameters, especially for the ACF-COS metric and permutation intensity (weak vs. strong permutations). However, the paper provides limited guidance on systematically tuning these parameters for different datasets, which could hinder reproducibility and practical implementation. 
Users may need to perform extensive trial-and-error testing to optimize PPT for specific datasets, which is resource-intensive and may not always yield consistent results.\\n\\n**Limited Success in Time Series Forecasting**: Although PPT shows promise in classification tasks, it is less effective for time series forecasting applications, where trend continuity is crucial. The patch permutation strategy disrupts these trends, making it challenging for models to capture long-term dependencies needed for accurate forecasting. The paper does not provide a modified approach for forecasting tasks, which restricts PPT\\u2019s utility and impact in a broader range of time series applications.\", \"questions\": \"1. **Is there a method that can be adapted for both classification and forecasting tasks in time series?** Specifically, is it possible to design a pretext task that effectively utilizes order information for classification while preserving trend continuity to support forecasting tasks? If so, can this approach balance prediction accuracy with the advantages of sequence dependency?\\n\\n2. **Does the ACF-COS metric demonstrate robust and broad applicability in measuring the orderness of time series data?** Specifically, can ACF-COS accurately differentiate between various types of time series tasks, such as high-noise sequences, non-order-dependent tasks, and strongly order-dependent tasks? Additionally, how can we ensure the stability of ACF-COS across different patch sizes, sampling frequencies, or data distributions to reliably evaluate the suitability of PPT for diverse time series datasets?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**Q1. Is there a method that can be adapted for both classification and forecasting tasks in time series? 
Specifically, is it possible to design a pretext task that effectively utilizes order information for classification while preserving trend continuity to support forecasting tasks? If so, can this approach balance prediction accuracy with the advantages of sequence dependency?**\\n\\nThank you for this insightful question! While our current manuscript focuses explicitly on time series classification tasks (please kindly refer to our Keywords in the Submission details), our future research direction is to design pretext tasks that can be adapted for both the classification and forecasting setups. We found that PPT is highly effective for classification tasks where order information is crucial, but it results in subpar model performance for forecasting tasks, as noted in our limitations. This is because shuffling the patch order disrupts the \\\"trend\\\" factor, which is an essential criterion for forecasting.\\n\\nAs the reviewer has suggested, a possible direction is to jointly utilize the mask-based pretext task and our order-aware pretext task, ensuring that the trend continuity is preserved for forecasting tasks. We believe that this approach may strike a balance between improving the prediction of forecasting while supervising order information in patches.\\n\\nIn our future work, we plan to explore this direction by developing a hybrid pretext task that combines the strengths of both mask-based and order-aware approaches. By doing so, we aim to create a more versatile pretext task that can be effectively applied to both classification and forecasting tasks, thereby expanding the scope and impact of our research.\\n\\nWe appreciate the reviewer's valuable suggestion and look forward to investigating this promising direction in our future research efforts.\"}", "{\"comment\": \"**Q3. What are the final results of $\\\\lambda$ and how do they affect model performance?**\\n\\n\\nA3. We thank the reviewer for this insightful question. 
To answer this, we conducted multiple fixed-$\\\\lambda$ experiments and compared them with the results from the learnable $\\\\lambda$ setup. Briefly, we show that the learned $\\\\lambda$s are the best performing compared to the fixed $\\\\lambda$ setup.\\n\\nWe conducted a grid search between $\\\\lambda_{consistency} = [0.1, 0.2, 0.5, 1.0, 2.0]$ and $\\\\lambda_{margin} = [0.1, 0.2, 0.5, 1.0, 2.0]$, resulting in 25 different combinations. We performed our experiments on the PITS (Linear) model and PatchTST (Transformer) with the PTB dataset. The following table shows the F1 Score (average of 5 random seeds) for the top 3 and bottom 1 performing combinations for PITS:\\n\\n| Rank | Model | Consistency | Margin | F1 |\\n|------|-------|-------------|--------|------|\\n| 1 | PITS | Learnable | Learnable | 91.24 \\u00b1 0.72 |\\n| 2 | PITS | 0.1 | 1.0 | 91.20 \\u00b1 0.66 |\\n| 3 | PITS | 0.1 | 0.1 | 91.00 \\u00b1 0.79 |\\n| ... | ... | ... | ... | ... |\\n| 25 | PITS | 0.1 | 0.5 | 89.23 \\u00b1 1.38 |\\n\\nSimilarly, for PatchTST:\\n\\n| Rank | Model | Consistency | Margin | F1 |\\n|------|-------|-------------|--------|------|\\n| 1 | PatchTST | Learnable | Learnable | 88.67 \\u00b1 0.95 |\\n| 2 | PatchTST | 1.0 | 0.1 | 88.57 \\u00b1 0.78 |\\n| 3 | PatchTST | 0.5 | 0.1 | 88.39 \\u00b1 0.81 |\\n| ... | ... | ... | ... | ... 
|\\n| 25 | PatchTST | 0.1 | 0.1 | 85.61 \\u00b1 1.22 |\\n\\nWe show that the learnable hyperparameters are comparable to the best-performing fixed-hyperparameters that are manually searched, showcasing that the learnable hyperparameter setup can significantly reduce hyperparameter tuning while leading to optimal performance.\\n\\nWe report the full results of the fixed grid search, and also visualize how the hyperparameters are adjusted during the model training for our setup in **Appendix G.4.** We have also updated the main manuscript (**Lines 515-516**) to incorporate this information as follows:\\n\\n```markdown\\n**Learnable and Fixed Loss Coefficients** We conducted experiments with fixed hyperparameters $\\\\lambda_1$ and $\\\\lambda_2$ to compare against the learnable strategy adopted in our work. The results and analysis of these experiments can be found in Appendix G.\\n```\\n\\nWe thank the reviewer for the detailed discussion of our works and suggesting experiments that have highly helped us improve our work. If there are additional explanations that are needed, please do not hesitate to let us know, so that we can further improve our scores. Thank you for your services.\"}", "{\"title\": \"Thank you for the thorough review\", \"comment\": \"Thank you for the thorough review and insightful discussions throughout this review process. We greatly **appreciate the reviewer\\u2019s decision to maintain the rating towards acceptance, recognizing the novel perspective that PPT brings to time series analysis**. As the reviewer has suggested, we will look into expanding PPT in other applications in our future work. We are grateful for the enlightening discussions we have had and the valuable feedback you have provided.\"}", "{\"comment\": \"We thank the reviewer for the thoughtful review of our work. 
We appreciate that the reviewer has thoroughly gone through our paper, highlighting the novelty of the two learning methods proposed in this work, pinpointing the performance gain obtained through the use of PPT, and the utility of ACF-COS metrics. Here, we have prepared detailed responses to the reviewer\\u2019s feedback and questions. Please let us know if there are additional explanations that we can provide to update our scores!\"}", "{\"comment\": \"**I really appreciate your efforts during the rebuttal period and most of my concerns have been addressed. In the comparison with MiniRocket, although PPT slightly outperforms in terms of key metrics (classification performance), I would prefer to see a comparison with MiniRocket in terms of running time (efficiency).**\\n\\nWe greatly appreciate the reviewer\\u2019s recognition of our efforts during the rebuttal period and are pleased to know that most of the concerns have been addressed. Regarding the comparison between MiniRocket and PPT in terms of the \\u201crunning time efficiency\\u201d, we acknowledge that MiniRocket is indeed a highly efficient algorithm due to its use of deterministic convolution kernels [1].\\n\\nHowever, It is important to note that PPT is designed to be incorporated into time series patch-based self-supervised representation learning, where much of the current time series research has been focusing on [2,3,4]. We point out that PPT can be adapted to many of the patch-based architectures, such as PatchTST and PITS, which is trained end-to-end, with mini-batch updates, while MiniRocket is unable to provide such functionality. This fundamental difference in architecture and training process explains the longer training time of PPT compared to MiniRocket.\\n\\nTo provide a quantitative comparison, we measured the training times for both MiniRocket and PatchTST (+PPT) on the PTB task. 
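As a hedged illustration of how such wall-clock figures are typically obtained (the authors' actual harness is not shown; `time_runs` and the dummy workload are assumptions):

```python
import time
import statistics

def time_runs(fit, n_runs=5):
    """Wall-clock a training routine over several runs; returns (mean, stdev) in seconds."""
    durations = []
    for _ in range(n_runs):
        start = time.perf_counter()
        fit()  # stand-in for e.g. fitting a model on the task's training split
        durations.append(time.perf_counter() - start)
    return statistics.mean(durations), statistics.stdev(durations)

mean_s, std_s = time_runs(lambda: sum(range(100_000)))  # dummy workload
```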
MiniRocket took 29.83\u00b19.16 seconds using an Intel(R) Xeon(R) Gold 6338 CPU @ 2.00GHz, while PatchTST (+PPT) required 139.5\u00b110.76 seconds using a single NVIDIA RTX A6000 GPU.\n\nWhile we acknowledge the longer training time of PPT, we would like to emphasize that the primary goal of PPT is to introduce a novel pretext task that enhances the performance of patch-based deep architectures using order information between patches, and the trade-off between efficiency and performance is a compromise in this context.\n\nWe acknowledge that there is room for further optimization of PPT, such as introducing more efficient auto-regressive models for patch sequence understanding or finding ways to construct context vectors without these auto-regressive models, which could potentially reduce the training time.\n\nIn summary, while MiniRocket is indeed a highly efficient time series algorithm, PPT is designed for general patch-based deep architectures and achieves improved classification performance compared to existing time series pretext tasks, which is the main focus of our work. We thank the reviewer for this feedback and the opportunity to clarify the efficiency aspect of our work.\n\n[1] Angus Dempster, et al. ROCKET: Exceptionally fast and accurate time series classification using random convolutional kernels. arXiv preprint arXiv:1910.13051.\n\n[2] Lee, Seunghan, Taeyoung Park, and Kibok Lee. \\\"Learning to Embed Time Series Patches Independently.\\\" The Twelfth International Conference on Learning Representations.\n\n[3] Cao, Defu, et al. \\\"TEMPO: Prompt-based Generative Pre-trained Transformer for Time Series Forecasting.\\\" The Twelfth International Conference on Learning Representations.\n\n[4] Jin, Ming, et al. 

\\\"Time-LLM: Time Series Forecasting by Reprogramming Large Language Models.\\\" The Twelfth International Conference on Learning Representations.\"}", "{\"comment\": \"Dear Reviewer Z8EK,\\n\\n\\nWe thank the reviewer for providing constructive feedback and highlighting our work as **\\\"a significant contribution to the field of time series representation learning\\\"**. We are delighted and motivated by your appraisals.\\n\\n\\nAs we now approach the end of the discussion period, we kindly ask if there are any uncertainties left that we could address, as we would be more than happy to provide additional clarifications and explanations if needed.\\n\\n\\nWe thank you once again.\\n\\n\\nBest regards,\\n\\nAuthors\"}", "{\"title\": \"General Response\", \"comment\": \"We sincerely thank all reviewers for their thorough evaluation of our work.\\n\\nWe appreciate the positive feedback regarding our \\bwork PPT, which was described as **\\u201cnovel, well-written, and easy to understand, a significant contribution to the field of time series representation learning\\u201d with \\u201csufficient\\u201d experimentation and \\u201cstrong empirical results.\\u201d** PPT is indeed a novel self-supervised learning algorithm designed to enhance existing patch-based methodology in time series. We are also encouraged that reviewers have recognized the potential impact of our ACF-COS metric in benefiting other time series models.\\n\\nBased on the reviews provided, we have **conducted 3 new experiments and included an additional baseline**, where we have accordingly updated our manuscript. In detail, **we performed linear probing experiments with UniTS, hyperparameter ablations with a fixed coefficient setup to compare it against our learnable coefficient setup, and a clustering comparison with several SOTA works**. 
We have updated our manuscripts to incorporate additional details as suggested by the reviewers.\\n\\nThank you once again for the constructive reviews and commentary. We believe our revisions comprehensively address the raised concerns. We remain available to provide any additional clarification. As some reviewers have not yet responded, we would greatly value their feedback to further improve our work.\\n\\nThank you very much!\"}", "{\"comment\": \"**Q2. Provide more details on how the Hyperparameters $\\\\lambda$ are adjusted during training.**\\n\\nA2. We are happy to have the opportunity to further explain how the hyperparameters $\\\\lambda$ are adjusted during training. This strategy was first proposed by Kendall et al. [1] and Liebel and K\\u00f6rner [2] and has been consequently adopted in the time series community [3, 4].\\n\\nIt has been shown that simply aggregating losses from multiple tasks can lead to negative transfer, as training multi-task models requires a balance between task-specific losses. A naive solution would be to perform a grid search over all possible combinations of weights, but the search space grows exponentially, and the weights are fixed throughout the whole training process, not being able to adapt to the training dynamics [4].\\n\\nAs such, the learnable loss function [1, 2] is based on the uncertainty-based method that maximizes the Gaussian likelihood by considering each task's homoscedastic uncertainty. The learnable parameters are jointly optimized using the task-specific losses, with the parameters being regularized so as not to become too small during training.\\n\\nIn our experiment, we adopt this learnable loss strategy to automatically balance the consistency loss and margin loss. Specifically, we introduced learnable parameters $\\\\lambda_1$ and $\\\\lambda_2$ for the consistency loss and margin loss, respectively. 
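As a hedged sketch (not the authors' exact code), the uncertainty-based weighting of [1, 2] is commonly implemented with learnable log-variance parameters that scale each loss and are regularized by an additive term:

```python
import torch

class UncertaintyWeightedLoss(torch.nn.Module):
    """Balance two losses with learnable log-variances (after Kendall et al., 2018):
    total = exp(-s1) * L1 + exp(-s2) * L2 + s1 + s2,
    where the additive s terms keep the weights from collapsing toward zero.
    The exact parameterization used in the paper may differ."""
    def __init__(self):
        super().__init__()
        self.log_vars = torch.nn.Parameter(torch.zeros(2))  # s1, s2; initial weights exp(0) = 1.0

    def forward(self, loss_consistency, loss_margin):
        w = torch.exp(-self.log_vars)
        return w[0] * loss_consistency + w[1] * loss_margin + self.log_vars.sum()

weigher = UncertaintyWeightedLoss()
total = weigher(torch.tensor(0.7), torch.tensor(1.3))
# with s1 = s2 = 0 the weights are exp(0) = 1.0, so total starts at L1 + L2
```

Because `log_vars` is a `Parameter`, it receives gradients from `total.backward()` and is updated by the same optimizer step as the model weights.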
These parameters are initialized with a value of 1.0 and are optimized alongside the model parameters during training.\\n\\nDuring training, the model parameters and the learnable loss parameters are updated simultaneously using backpropagation. This allows the model to adaptively adjust the importance of each loss term based on the training dynamics and the uncertainty of each task. Based on the reviewer's suggestion, we have visualized the training dynamics of the learnable lambdas in **Appendix G.4**. We observe that while the parameters were initialized with a value of 1.0, they adjusted their values throughout the training and converged to 0.39 (Consistency) and 0.60 (Margin) for PITS and 0.18 (Consistency) and 0.23 (Margin) for PatchTST, showcasing that the magnitude of the learned hyperparameters may differ for each model. \\n\\n[1] Kendall, Alex, Yarin Gal, and Roberto Cipolla. \\\"Multi-task learning using uncertainty to weigh losses for scene geometry and semantics.\\\" Proceedings of the IEEE conference on computer vision and pattern recognition. 2018.\\n\\n[2] Liebel, Lukas, and Marco K\\u00f6rner. \\\"Auxiliary tasks in multi-task learning.\\\" arXiv preprint arXiv:1805.06334 (2018).\\n\\n[3] Dong, Jiaxiang, et al. \\\"Simmtm: A simple pre-training framework for masked time-series modeling.\\\" Advances in Neural Information Processing Systems 36 (2024).\\n\\n[4] Kim, Jaeho, et al. \\\"Multitask Deep Learning for Human Activity, Speed, and Body Weight Estimation Using Commercial Smart Insoles.\\\" IEEE Internet of Things Journal 10.18 (2023): 16121-16133.\"}", "{\"comment\": \"**Q5. It's not clear whether this method even works for diverse dataset pretraining settings as in UniTS (https://arxiv.org/abs/2403.00131). Since this method is mainly for pretext tasks, this is an important point to raise given we're in the era of foundation models.**\\n\\nThank you for raising this important point. 
While we acknowledge the growing importance of foundation models and diverse dataset pretraining setups for time series, as demonstrated by works like UniTS, our work has a different focus and scope.\n\nThe primary objective of our paper PPT is to introduce a novel pretext task for time series that leverages the order information of patches in time series data. Our pretext task can be incorporated into existing patch-based time series models like PatchTST and PITS, and we have demonstrated its effectiveness in various experimental setups (supervised learning, self-supervised learning, etc.).\n\nUnlike UniTS, which aims to develop a unified framework for time series foundation models, **our work is not intended to be a comprehensive solution for diverse dataset pretraining**. Instead, our focus is on improving the performance of existing patch-based methods in time series applications where order information is important, by introducing a new pretext task that captures this order information. \n\nWe believe that our contribution is valuable in advancing time series representation learning, particularly in tasks where order information is crucial. While foundation models and diverse dataset pretraining are undoubtedly important areas of research, our work targets a different aspect of time series analysis and contributes to this field in a complementary manner.\"}
Updated our main manuscript (Line 255-258) to further incorporate additional details on how the context vectors are generated, and have added Section **F.3: Autoregressive models** in the Appendix for further explanations. \\n\\n2. Performed and reported (**Appendix G.4 Learnable and Fixed Lambda**) analysis on the learned hyperparameters $\\\\lambda$ and compared it against fixed hyperparameter setups (25 combinations). We also updated our main manuscript (**Lines 515-516**) to incorporate this information. Briefly, the learnable hyperparameter approach achieves comparable performance to the optimal fixed hyperparameter setup, effectively eliminating the need for extensive parameter searches. \\n\\nHere, we provide detailed answers to each of the questions.\"}", "{\"comment\": \"Dear Reviewer o6fk,\\n\\nWe thank the reviewer for the positive evaluation of our work. We have provided detailed explanations addressing the reviewer\\u2019s questions. As we now approach the end of the discussion period, we cordially request that the reviewer review our responses. We hope we have adequately addressed the reviewer\\u2019s concerns and would appreciate the consideration of revising our scores if our answers are satisfactory. **Please don\\u2019t hesitate to ask if any uncertainties remain, as we are committed to improving our paper further.**\\n\\nBest regards,\\n\\nAuthors\"}", "{\"title\": \"Overall Changes Summary.\", \"comment\": \"Thank you for your thorough and constructive feedback on our work! We are grateful that the reviewer has recognized both the novelty and clarity of our paper, as well as acknowledged the soundness of our motivation for using patch order supervision as a pretext task in time series classification and finding our experiments sufficient.\\nBased on the reviewer\\u2019s suggestion, we have \\n1. Conducted additional baseline comparisons,\\n2. 
Updated the main manuscript (**lines 189-192**) to incorporate additional details of channel order learning to make our explanation more intuitive.\\n3. Reported the speed and memory consumption needed in our work in **Section F.2: Time and Memory Cost of PPT** in the Appendix\\n4. Modified the typo in our table caption.\\n\\nBelow, we have provided detailed responses. Please don't hesitate to seek any additional clarification needed.\"}", "{\"comment\": \"**W1. Lack of SoTA baselines, cite UniTS, iTransformer, Pyraformer, Autoformer.**\\n\\nA1-1. We thank the reviewer for bringing our attention to works that are relevant to ours and helping us to strengthen our related works. Our response is twofold.\\n\\nFirst, we respectfully disagree that our work lacks SoTA baselines. We have made extensive comparisons against the most recent and relevant SSL SoTA works for time series classification, such as PITS (ICLR 2024), TS-GAC (AAAI 2024), SimMTM (NeurIPS 2023), PatchTST (ICLR 2023), and Tf-C (NeurIPS 2022). We have also shown that our pretext task can be incorporated into both linear-based (PITS) and transformer-based (PatchTST) SoTA models, showcasing their effectiveness in various tasks (e.g., Self-Supervised, Semi-Supervised, and Supervised).\\n\\nSecond, as the reviewer suggested, we have cited iTransformer and Autoformer which are the SoTA works in long-term time series forecasting (LTSF) problems (Line 520).\"}", "{\"metareview\": \"(a) Summary of Scientific Claims and Findings\\nThe paper introduces Patch Order-aware Pretext Task (PPT), a novel self-supervised learning methodology tailored for time series classification. 
The key contributions include:\n\n**Patch order awareness**: PPT supervises models using patch permutations, exploiting both time and channel-wise order dependencies in time series data.\n\n**Two learning methods**: Patch Order Consistency Learning evaluates patch order correctness, while Contrastive Learning differentiates between weakly and strongly permuted sequences.\n\n**ACF-CoS metric**: The introduction of a dataset-specific metric adds significant value by enabling pre-evaluation of PPT\u2019s effectiveness.\n\n**Performance gains**: PPT improves classification accuracy by up to 7% on ECG tasks and 5% on human activity recognition tasks, outperforming masking-based pretext tasks.\n\n(b) Strengths of the Paper\n\n**Novel and well-motivated methodology**: The focus on patch order awareness provides a unique perspective for time series representation learning, addressing a gap in current self-supervised methods.\n\n**Strong empirical results**: Demonstrates consistent performance improvements across various tasks and datasets, with thorough experiments validating the efficacy of PPT.\n\n**Clear writing and presentation**: The paper is well-structured and communicates complex ideas effectively, supported by comprehensive illustrations and examples.\n\n**Thorough rebuttal and revisions**: The authors addressed reviewer concerns with additional experiments, new baselines (e.g., MiniRocket, UniTS), clustering evaluations, and expanded explanations.\n\n(c) Weaknesses of the Paper\n\n**Limited applicability beyond classification**: PPT\u2019s focus on classification tasks limits its generalizability to forecasting or anomaly detection, where trend continuity or non-order dependencies are more critical.\n\n**Computational overhead**: The method incurs higher training time and memory usage compared to simpler baselines like MiniRocket, which may hinder scalability.\n\n**Hyperparameter sensitivity**: The learnable parameters for loss coefficients require further exploration to understand their behavior across diverse datasets.\n\n**Lack of cross-dataset evaluations**: While single-dataset performance is robust, the paper does not explore the transferability of learned representations to different datasets.\n\n(d) Reasons for Acceptance\n\n**Novel contribution**: PPT introduces a novel pretext task and metric specifically tailored to time series, filling a critical gap in self-supervised learning for this domain.\n\n**Practical impact**: The method significantly improves classification performance on time series tasks, demonstrating its utility for real-world applications.\n\n**Comprehensive validation**: The paper provides extensive experimental validation, addressing reviewer concerns with additional results and insights during the rebuttal phase.\n\n**Potential for broader impact**: The proposed ACF-CoS metric and patch-order-aware framework offer tools and methodologies that can be extended to other time series problems.\n\nDespite minor weaknesses such as limited scope beyond classification and computational overhead, the paper\u2019s strengths, novel contributions, and robust empirical results justify its acceptance.\", \"additional_comments_on_reviewer_discussion\": \"Points raised by reviewers and author responses:\n\n**Concern**: Reviewers requested further validation of the proposed ACF-CoS metric and its practical utility.\n\n**Author response**: The authors included experiments correlating ACF-CoS values with PPT\u2019s effectiveness on various datasets, demonstrating its predictive utility.\n\n**Evaluation**: This addition reinforced the utility of the metric and addressed the reviewers\u2019 concerns convincingly.\n\nThe authors provided a thorough and thoughtful rebuttal, addressing key concerns with additional experiments, including new baselines like MiniRocket and UniTS, expanded analyses of the ACF-CoS metric, and detailed justifications for 
computational trade-offs. They also improved the paper\u2019s clarity and presentation, enhancing its overall readability. While limitations such as restricted applicability beyond classification and the lack of cross-dataset evaluations remain, the paper\u2019s novel contributions, robust empirical validation, and practical utility support its acceptance.\"}", "{\"comment\": \"Dear **Reviewer o6fk,**\\n\\nAs the deadline is approaching, we kindly request the reviewer to have a look at our responses. We would appreciate it if the reviewer could revise the scores based on our answers, and ask any questions that could help clarify any remaining uncertainties.\\n\\nThank you.\\n\\nAuthors.\"}", "{\"comment\": \"**Q2. The authors do not consider tasks beyond classification. I'm not convinced that the method is truly better than others when considering the representations generated, for example no comparisons with SoTA is shown on anomaly detection or clustering tasks.**\\n\\nWe appreciate the reviewer's feedback and suggestion to evaluate our method on tasks beyond classification. Addressing this feedback, we conducted clustering experiments, as in the works of TF-C, to assess the quality of the representations generated by our method.\\n\\nWe fitted K-means clustering on top of the pre-trained representations of the GL HAR task, which consists of 7 classes. The experiments were performed with 5 different random seeds to ensure robustness. We report the Silhouette Score, Normalized Mutual Information (NMI), and Adjusted Rand Index (ARI) scores to evaluate the clustering performance. We also compared our work to SOTA methods such as SimCLR, TS2Vec, TF-C, TimeMAE, UniTS, and other pretext-task-based methods such as mask-and-reconstruct and contrastive learning. \\n\\nIn the table below, we show that PPT achieves the best performance in NMI and ARI scores, demonstrating its effectiveness in generating meaningful representations in clustering tasks. 
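For concreteness, the evaluation protocol above can be sketched as follows. This is a minimal illustration with a randomly generated stand-in for the pre-trained representations (assuming scikit-learn is available), not the authors' exact pipeline:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import (adjusted_rand_score, normalized_mutual_info_score,
                             silhouette_score)

rng = np.random.default_rng(0)
Z = rng.normal(size=(700, 64))           # stand-in for frozen-encoder embeddings
y_true = rng.integers(0, 7, size=700)    # stand-in for the 7 GL HAR class labels

scores = {"silhouette": [], "nmi": [], "ari": []}
for seed in range(5):                    # 5 random seeds, as in the protocol above
    y_pred = KMeans(n_clusters=7, n_init=10, random_state=seed).fit_predict(Z)
    scores["silhouette"].append(silhouette_score(Z, y_pred))
    scores["nmi"].append(normalized_mutual_info_score(y_true, y_pred))
    scores["ari"].append(adjusted_rand_score(y_true, y_pred))

summary = {k: (float(np.mean(v)), float(np.std(v))) for k, v in scores.items()}
```

Each metric is then reported as mean and standard deviation over the seeds.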
Here, NMI and ARI measure the agreement between the predicted clusters and the ground truth labels. A higher NMI and ARI score suggests that the clustering assignment is close to the ground truth. While we acknowledge that the mask-based pretext task used in PatchTST outperforms PPT in terms of the Silhouette Score, it is important to note that the Silhouette Score is an internal evaluation metric that measures the compactness of clusters without requiring ground truth labels. A higher Silhouette Score indicates that the clusters are well-separated and compact, but it does not necessarily imply that the clustering results align with the true class labels.\\n\\n| Model | Silhouette Score | NMI | ARI |\\n|-------|------------------|-----|-----|\\n| SimCLR | 0.128\\u00b10.037 | 0.472\\u00b10.025 | 0.289\\u00b10.021 |\\n| TS2Vec | 0.128\\u00b10.037 | 0.607\\u00b10.029 | 0.405\\u00b10.024 |\\n| TF-C | 0.143\\u00b10.020 | 0.585\\u00b10.035 | 0.429\\u00b10.026 |\\n| TimeMAE | 0.141\\u00b10.023 | 0.556\\u00b10.017 | 0.429\\u00b10.022 |\\n| UniTS | 0.308\\u00b10.057 | 0.610\\u00b10.105 | 0.475\\u00b10.119 |\\n| PatchTST (Mask) | **0.338\\u00b10.007** | 0.548\\u00b10.016 | 0.384\\u00b10.018 |\\n| PatchTST (CL) | 0.253\\u00b10.009 | 0.444\\u00b10.007 | 0.275\\u00b10.021 |\\n| PatchTST (PPT) | 0.263\\u00b10.023 | **0.644\\u00b10.048** | **0.481\\u00b10.050** |\\n\\nThe result that PPT outperforms other SOTA methods in terms of NMI and ARI scores highlights its ability to capture the internal temporal structure and order relationships within the time series, leading to clustering results that align well with the true class labels. In summary, we show that PPT demonstrates its effectiveness in generating meaningful and informative representations for clustering tasks.\"}", "{\"summary\": \"The authors believe that patch order awareness is very important in time series modeling and introduce PPT (Patch order-aware Pretext Task) for time series classification. 
The authors also develop the ACF-CoS metric for quantifying the importance of patch orderness and identifying the scenarios where PPT is applicable. Experimental results demonstrate that PPT shows advantages in self-supervised, fully-supervised, and semi-supervised learning with sparse labels.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"Work is novel and motivation regarding patch order is reasonable for time series analysis.\", \"Well written, clear structure, easy to understand\", \"Experiments are sufficient, and the analysis based on ACF-CoS is interesting\"], \"weaknesses\": \"Rocket[1] and MiniRocket[2] are important baselines for time series classification tasks. I think considering them can make the experiment more comprehensive (including classification performance and algorithm efficiency).\\n\\n[1] Angus Dempster, et al. ROCKET: Exceptionally fast and accurate time series classification using random convolutional kernels. arXiv preprint arXiv:1910.13051 \\n\\n[2] Angus Dempster, et al. MINIROCKET: A Very Fast (Almost) Deterministic Transform for Time Series Classification. arXiv preprint arXiv:2012.08791\", \"questions\": [\"Time dimension patch order is intuitive, but channel dimension order is not easy to understand and needs further discussion.\", \"Although the authors provide the time consumed by permuting a batch of instances, considering the need to store additional augmented samples and to perform additional computation, it would be better if the authors could report the additional runtime and memory consumption required by PPT.\", \"Confusing content: In Tab. 4 caption: \\u201ca decrease in ACF-CoS leads to an improved likelihood of PPT being beneficial\\u201d. Should \\u201ca decrease\\u201d be modified to \\u201can increase\\u201d? 
In my understanding, there is a positive correlation between the ACF-CoS score and PPT's performance.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**A1-2.** We thank the reviewer for bringing our attention to UniTS, a work relevant to PPT that was accepted to NeurIPS 2024. We would first like to explain why we were not able to compare our work to UniTS initially and then discuss the experimental comparison with UniTS.\\n\\nFirstly, we were unable to compare our work to UniTS at the time of submission, as the NeurIPS 2024 acceptance date (Sep 26) and the ICLR submission date (Sep 27) were very close. However, we have cited and made extensive comparisons to UniTS in our updated manuscript. We have compared our work with UniTS on self-supervised linear probing and clustering tasks.\\n\\nFor the hyperparameters of UniTS, we conducted a grid search on the learning rate between [5e-5, 5e-4, 1e-4, 1e-3] and d_model between [32, 64]. We set gradient accumulation to 1 as our work focuses on a one-to-one setup. We report the results with a learning rate of 1e-4 (EMOPain, PTB) and 1e-3 (GL) and d_model=64, as they performed the best.\\n\\nBriefly, while UniTS shows strong and competitive performance, it did not outperform PPT in three of the benchmarks we used. This might be attributed to UniTS's novel contribution towards time series multi-task learning, which focuses on learning generalized representations from multiple tasks such as forecasting, classification, and anomaly detection. Unlike UniTS, PPT focuses on providing order-aware supervision for time series classification tasks that exhibit order dependency across time and channels, with the goal of learning a generalized representation within a single time series classification task. 
\\n\\nAs we have noted in our limitation section, PPT did not perform well for forecasting tasks, as permuting the orders of patches for forecasting disturbs the trend pattern, a key factor in time series forecasting. However, PPT is highly effective for tasks that show order information in time series classification. We intend to explore the integration of PPT with multi-task learning frameworks like UniTS to learn more generalized, task-agnostic time series representations in future work.\", \"we_have_also_updated_the_main_manuscript_to_include_the_comparisons_with_units_and_added_the_following_tables_to_showcase_the_results\": \"**Linear Probing Experiment**\\n| Dataset | Model | Accuracy | F1 score | AUROC | AUPRC | \\n|---------|-------|----------|-----------|--------|--------|\\n| EMO | UniTS | 78.37\\u00b10.13 | 29.51\\u00b10.51 | 65.95\\u00b12.95 | 44.84\\u00b12.39 | \\n| EMO | PatchTST (+PPT) | **81.92\\u00b10.58** | **54.19\\u00b12.33** | **84.74\\u00b11.55** | **62.51\\u00b13.09** | \\n\\n| Dataset | Model | Accuracy | F1 score | AUROC | AUPRC | \\n|---------|-------|----------|-----------|--------|--------|\\n| PTB | UniTS* | 84.20\\u00b11.01 | 89.57\\u00b10.60 | 88.98\\u00b11.35 | 94.81\\u00b10.84 |\\n| PTB | PITS (+PPT) | **86.48\\u00b10.40** | **91.24\\u00b10.26** | **91.83\\u00b11.36** | **96.26\\u00b10.85** | \\n\\n| Dataset | Model | Accuracy | F1 score | AUROC | AUPRC | \\n|---------|-------|----------|-----------|--------|--------|\\n| GL | UniTS* | 84.45\\u00b10.98 | 83.93\\u00b11.21 | 98.27\\u00b10.18 | 90.03\\u00b10.93 | \\n| GL | PatchTST* (+PPT) | **92.33\\u00b10.48** | **93.67\\u00b10.45** | **99.28\\u00b10.10** | **96.83\\u00b10.44** | \\n\\nIn summary, while UniTS demonstrates strong performance, PPT outperforms it on the time series classification datasets. We believe that the order-aware supervision provided by PPT is **particularly beneficial for time series classification tasks with order dependencies**. 
We have updated our manuscript to include these comparisons and insights.\"}", "{\"comment\": \"**Q1. Could you provide more details on how the context vectors are generated by the auto-regressive model for the consistency loss? How is the architecture of this model designed, and how is it trained?**\\n\\nA1. Thank you for your question, which helped us realize that the current manuscript lacked details on the auto-regressive model used for generating context vectors in the consistency loss. We appreciate the opportunity to provide more information about this important component of our approach.\\n\\nThe consistency loss is employed on the sequence of patches in both time and channel order. To generate the context vectors, we use a single-layer unidirectional LSTM that takes the patch sequences as input. The final cell state of the LSTM serves as the context vector, effectively summarizing the sequence of patches.\\n\\nFor consistency learning, these context vectors are assigned pseudo labels: 1 if the sequence of patches is from the original sequence, and 0 if the sequence of patches is from a strongly shuffled sequence. Initially, the auto-regressive model is unable to recognize the correct sequence. However, through end-to-end training alongside the backbone patch model, the model learns to discriminate between the original and strongly shuffled patch sequences.\\n\\nThe consistency learning is applied along both the time and channel dimensions. For the time dimension, the module learns to recognize the correct order of patches that should occur temporally. For the channel dimension, the module learns the cross-channel relationships of patches. 
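As a rough sketch of the context-vector mechanism described above, here is a hand-rolled single-layer unidirectional LSTM in NumPy whose final cell state summarizes the patch sequence; the shapes and random weights are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def lstm_context(patches, Wx, Wh, b):
    """Run one unidirectional LSTM pass over a patch sequence and return
    the final cell state as the (order-sensitive) context vector."""
    H = Wh.shape[1]
    h, c = np.zeros(H), np.zeros(H)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for x in patches:                      # patches: (num_patches, patch_dim)
        z = Wx @ x + Wh @ h + b            # gates packed as [i, f, g, o]
        i, f = sig(z[:H]), sig(z[H:2 * H])
        g, o = np.tanh(z[2 * H:3 * H]), sig(z[3 * H:])
        c = f * c + i * g
        h = o * np.tanh(c)
    return c

rng = np.random.default_rng(0)
P, D, H = 12, 32, 16                       # patches, patch dim, hidden dim
Wx = rng.normal(scale=0.1, size=(4 * H, D))
Wh = rng.normal(scale=0.1, size=(4 * H, H))
b = np.zeros(4 * H)
seq = rng.normal(size=(P, D))              # "original" patch sequence
shuffled = seq[::-1].copy()                # an out-of-order view (reversal)
ctx_orig = lstm_context(seq, Wx, Wh, b)
ctx_shuf = lstm_context(shuffled, Wx, Wh, b)
```

Because the final cell state depends on patch order, a small discriminator head on top of it can be trained against the pseudo labels (1 for the original order, 0 for a strongly shuffled one); the reversal here is only an illustrative stand-in for the strong shuffle.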
As shown in **Fig1:Motivation**, we can observe distinct patterns co-occurring along the channel dimension, and learning this cross-channel relationship has been shown to enhance model performance.\\nBased on the review, we were able to update our main manuscript by adding the following explanations (**Lines 255-258**)\\n\\n```markdown\\nHere, we employ single-layer uni-directional LSTMs that process the patch sequences. The final cell state of each LSTM serves as the context vector, effectively summarizing the patch sequences. These LSTM modules are trained end-to-end with the backbone model (details in Appendix F)\\n```\"}", "{\"comment\": \"I really appreciate your efforts during the rebuttal period and most of my concerns have been addressed. In the comparison with MiniRocket, although PPT slightly outperforms in terms of key metrics (classification performance), I would prefer to see a comparison with MiniRocket in terms of running time (efficiency).\"}", "{\"summary\": \"The paper proposes a self-supervised patch-order learning pretext task for time series classification. Experiments are shown to quantify the benefits of patch order learning. An evaluation metric to measure the importance of orderness is also proposed.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper offers a perspective on ordering of patches in self-supervised TS learning, and presents experiments on some datasets to show benefits and drawbacks of the approach.\", \"weaknesses\": \"1. Lack of SoTA baselines, compare against and cite:\\n\\n- S. Gao et al, UNITS: A Unified Multi-Task Time Series Model, https://arxiv.org/abs/2403.00131, NeurIPS 2024\\n- Yong Liu et al, iTransformer: Inverted transformers are effective for time series forecasting, ICLR 2024\\n- Shizhan Liu et al. Pyraformer: Low-complexity pyramidal attention for long-range time series modeling and forecasting, ICLR 2021\\n- Haixu Wu et al. 
Autoformer: Decomposition transformers with auto-correlation for long-term series forecasting, NeurIPS 2021\\n\\nIn particular, UniTS has been shown to outperform PatchTST and other baselines in many setups.\\n\\n2. The authors do not consider tasks beyond classification. I'm not convinced that the method is truly better than others when considering the representations generated, for example no comparisons with SoTA is shown on anomaly detection or clustering tasks. See for example, TF-C Appendix G, which is a baseline method authors compare against.\\n\\n- Zhang et al, Self-Supervised Contrastive Pre-Training for Time Series via Time-Frequency Consistency, NeurIPS 2022\\n\\n3. Limited evaluations. Experiments are all one-to-one dataset. The authors do not consider one-to-many, or other settings to test whether the representations are generalizable to cross-dataset settings etc. \\n\\n4. The gains are not convincing since many other baselines outperform the proposed method in several metrics.The improvement over GPT4TS is marginal.\\n\\n5. It's not clear whether this method even works for diverse dataset pretraining settings as in UniTS (https://arxiv.org/abs/2403.00131). Since this method is mainly for pretext tasks, this is an important point to raise given we're in the era of foundation models.\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
7zsWni0qzC
Interpretable Neural ODEs for Gene Regulatory Network Discovery under Perturbations
[ "Zaikang Lin", "Sei Chang", "Aaron Zweig", "Elham Azizi", "David A. Knowles" ]
Modern high-throughput biological datasets with thousands of perturbations provide the opportunity for large-scale discovery of causal graphs that represent the regulatory interactions between genes. Numerous methods have been proposed to infer a directed acyclic graph (DAG) corresponding to the underlying gene regulatory network (GRN) that captures causal gene relationships. However, existing models have restrictive assumptions (e.g. linearity, acyclicity), limited scalability, and/or fail to address the dynamic nature of biological processes such as cellular differentiation. We propose PerturbODE, a novel framework that incorporates biologically informative neural ordinary differential equations (neural ODEs) to model cell state trajectories under perturbations and derive the causal GRN from the neural ODE's parameters. We demonstrate PerturbODE's efficacy in trajectory prediction and GRN inference across simulated and real over-expression datasets.
[ "Gene Regulatory Network", "Dynamical Systems", "Single-cell RNA-sequencing", "Neural ODE", "Causal model", "Causal discovery" ]
Reject
https://openreview.net/pdf?id=7zsWni0qzC
https://openreview.net/forum?id=7zsWni0qzC
ICLR.cc/2025/Conference
2025
{ "note_id": [ "qtlFRME3Du", "qse6kknwME", "l5yLZNoWKT", "jxR6BqLKc4", "bGsrDMoz6T", "YIn8xCNURW", "UieGEO5qRb", "R50EQ6nAOi", "Ifw2pqes2r", "I8FrCkV56p", "Cb9O0sZTWx", "AmiznBHmyS", "AhTiOuUEvt", "3E4VRZuWpj" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "decision" ], "note_created": [ 1734692392694, 1732787364586, 1732790954682, 1730684744760, 1732793061596, 1732791859715, 1730709907682, 1732904710850, 1730440627723, 1732818886255, 1732791006922, 1732844482724, 1729805155926, 1737523920224 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8602/Area_Chair_XU1c" ], [ "ICLR.cc/2025/Conference/Submission8602/Authors" ], [ "ICLR.cc/2025/Conference/Submission8602/Authors" ], [ "ICLR.cc/2025/Conference/Submission8602/Reviewer_HJyF" ], [ "ICLR.cc/2025/Conference/Submission8602/Authors" ], [ "ICLR.cc/2025/Conference/Submission8602/Authors" ], [ "ICLR.cc/2025/Conference/Submission8602/Reviewer_Js4p" ], [ "ICLR.cc/2025/Conference/Submission8602/Reviewer_ZGrZ" ], [ "ICLR.cc/2025/Conference/Submission8602/Reviewer_kwHy" ], [ "ICLR.cc/2025/Conference/Submission8602/Reviewer_HJyF" ], [ "ICLR.cc/2025/Conference/Submission8602/Authors" ], [ "ICLR.cc/2025/Conference/Submission8602/Reviewer_kwHy" ], [ "ICLR.cc/2025/Conference/Submission8602/Reviewer_ZGrZ" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"metareview\": \"The paper introduces PerturbODE, a framework using neural ODEs to model cell state trajectories and infer causal GRNs, addressing limitations like scalability, linearity, and acyclicity in existing methods. 
Reviewers praised its novelty and biological relevance but raised concerns about validation, benchmarking, and biological interpretability, particularly regarding the large number of predicted edges and limited evidence for modeling cyclic dependencies. Despite improvements in the rebuttal, further validation and broader evaluations are needed.\", \"additional_comments_on_reviewer_discussion\": \"Most reviewers engaged in the discussion and acknowledged the rebuttal. While they recognize the potential of the work, they believe it requires further revisions to reach a publishable standard.\"}", "{\"comment\": [\"In response to weaknesses:\", \"Figure 5 has been updated.\", \"Identifiability is a good question; although our empirical results suggest this model is identifiable as it can generalize to unseen perturbations, there are few proven results about identifying an ODE from interventional data, especially non-linear ODEs. The paper [1] shows some barriers to proving results even for simple systems, and suggests the practical tactic of regularizing weight parameters, which we follow in our model.\", \"Could you please elaborate on the suggested extension towards entropy-regularized OT?\", \"It is a great suggestion to include an ablation study. I have included an ablation study in Appendix 6.6, evaluating recall and p-value for PerturbODE trained on the TF Atlas against various L1 penalty coefficients. Additionally, I evaluated recall and p-value with varying numbers of perturbations included in training.\", \"The typo has been fixed.\"], \"in_response_to_questions\": [\"Regarding the other 95% of the modules, I have now included a gene enrichment analysis in section 4.2.3 for the 6 highlighted modules and an analysis for all modules in appendix 6.12. Thank you for your suggestion. Most modules show some enrichment for some pathways. 
I also plotted a histogram of genes selected by 10 randomly selected modules in appendix 6.12.\", \"It is indeed more realistic to have a different perturbation strength for each gene. I have included a version of our model with a tunable perturbation strength for each gene, denoted PerturbODE*. It outperforms other versions of our model in terms of GRN inference. Thank you for your input! Nevertheless, for prediction of the response to unseen interventions, we used the original formulation with a fixed perturbation strength since it would be difficult to determine a perturbation strength for unseen interventions.\", \"The GRN is essentially obtained by the multiplication of two coefficient weight matrices of the MLP with an additional scaling to account for the rate of activation controlled by \\u03b1. The intuition is now added in Appendix 6.10 with a real example of E. coli flagella.\", \"[1] Aliee, Hananeh, Fabian J. Theis, and Niki Kilbertus. \\\"Beyond predictions in neural ODEs: identification and interventions.\\\" arXiv preprint arXiv:2106.12430 (2021).\"]}", "{\"comment\": \"In response to weaknesses:\\n\\n* We were previously not aware of the BoolODE GRN models used in BEELINE. We hope to include experiments against that simulator in a later draft of the work.\\n\\n* Sorry about the confusion; we only suggest that PerturbODE resembles real gene regulatory systems through the adaptation of the network motif (pattern) of negative autoregulation without any priors. Real gene regulatory systems in E. coli and yeast utilize negative autoregulation to stabilize gene expression. However, it would be great to simulate data with ground truth networks with cycles using BoolODE [1, pp. 24-30]. \\n\\n* We agree there are many GRN inference methods and it's difficult to compare against all of them. 
In this work, we focused on comparing PerturbODE against other causal methods that directly encode the GRN as matrices in neural networks and directly as matrices in linear models. \\n\\n* It would be interesting to include comparisons to methods such as GENIE3 for further analysis. Other ODE methods, such as Phoenix and [2], were mostly designed for time series data with a single perturbation or no perturbation. Further, Phoenix processes data as aligned pseudotrajectories. It is a completely different task, as PerturbODE deals with cells that are not aligned. In theory, we could infer pseudotime on the TF Atlas and use pseudotrajectories or pseudotime as time series data, but this would require significant modification of the current architecture. It would be interesting to evaluate PerturbODE on time series data in the future. For trajectory inference, PRESCIENT runs in PCA space, while PerturbODE is designed to run in the direct gene expression space. Other methods such as scGen are foundation models; it would not be a fair comparison since PerturbODE was only trained on the TF Atlas, but it would be interesting to explore. \\n\\n[1] Uri Alon. An Introduction to Systems Biology: Design Principles of Biological Circuits. CRC Press Taylor & Francis Group, A Chapman & Hall Book, 2006.\\n\\n[2] Jackson, C. A., Beheler-Amass, M., Tj\\u00e4rnberg, A., Suresh, I., Shang-mei Hickey, A., Bonneau, R., & Gresham, D. (2023). Simultaneous estimation of gene regulatory network structure and RNA kinetics from single cell gene expression. Center For Genomics and Systems Biology, New York University. \\n\\n\\nResponse to questions is posted in a separate comment due to word limit.\", \"title\": \"In response to weaknesses:\"}", "{\"summary\": \"The authors present PerturbODE, a method based on biologically informed neural ordinary differential equations for modelling perturbation-specific cell trajectories and inferring gene regulatory networks (GRNs) from perturbational (interventional) data. 
They empirically validate their approach through an extensive set of experiments on both synthetic and real systems. Through their empirical experiments, the authors show that PerturbODE can achieve state-of-the-art performance in high-dimensional settings for the task of GRN inference, relative to counterpart baseline methods.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This work addresses the challenging problem of GRN inference and cell trajectory inference - both important problems in computational biology. The authors propose a novel method based on neural ODEs that incorporates biologically informed dynamics to address both problems. PerturbODE has three key strengths that work towards addressing these longstanding problems in the computational biology community:\\n\\n1. GRN inference over high-dimensional systems is a challenging task. The authors demonstrate that their novel method, PerturbODE, is able to learn GRNs for high-dimensional systems. \\n2. A challenging problem in trajectory inference of cells is predicting the response of cellular systems under unseen interventions/perturbations. The authors show that PerturbODE is able to achieve improved performance on this task relative to counterpart baselines.\\n3. PerturbODE is able to model and learn cyclic dependencies, pertinent to GRN inference.\", \"weaknesses\": [\"Although this paper works towards addressing important and challenging problems of GRN inference and cell response prediction, there remain several shortcomings that limit the contributions of this paper (see below).\", \"If one of the claims is that PerturbODE can model cyclic dependencies between variables (genes), why evaluate on systems where the data generative processes adhere from directed acyclic graphs (DAGs)? If I understand correctly, SERGIO is limited to simulating systems given a DAG representation of a GRN. 
In this case, it may be useful to consider other biological system simulation tools in addition to SERGIO, for example, the framework in [1] could be used. At the minimum, this can be done in smaller systems (10-100 genes) to empirically validate the claims that PerturbODE can model cyclic dependencies.\", \"Likewise, for results on GRN inference on the transcription factor (TF) atlas, it is not clear if the ground truth GRNs are DAGs or contain cycles. In 4.2.2, the authors discuss that PerturbODE is able to predict negative feedback loops. However, in Appendix 6.7, the ground truth GRNs which are used for evaluation on the TF atlas (shown in Figures 15, 16, and 17), appear to be acyclic. It is not clear how the assessment of PerturbODE's ability to learn cyclic dependencies in GRNs is accomplished in this section. Further explanation and details are required to convey the significance of this result (and likewise this contribution).\", \"This work focuses on the task of GRN inference from interventional data. However, only a select few baseline methods are considered and compared. There exist many methods specifically tailored for GRN inference. To give a few examples, I refer the authors to [1]. I think it is okay if other GRN inference methods are not included, but it would be beneficial to include some discussion and justification on why the baselines considered in this work are sufficient for fair evaluation.\", \"For experiments in 4.2.1 for predicting response on held-out interventions. The authors compare only to baselines that are not necessarily tailored for this task. There exist state-of-the-art methods that address this problem, for example [2, 3]. To effectively showcase the utility of PerturbODE, it would be beneficial to compare to an existing method designed for predicting response to perturbations in biological systems since the NOTEARS baselines are not designed for this task. 
Without a biological-specific baseline for predicting response to perturbations, it is difficult to assess how PerturbODE performs relative to state-of-the-art methods. Moreover, it would be helpful to include a baseline that does not learn a GRN and is just trained to predict the perturbational response. Such a method should not work in the setting of unseen interventions since it will never see those conditions, but can be informative of whether or not learning the GRN is actually beneficial for this task.\", \"In general, there are still various details missing regarding experimental details, empirical validation of discovered GRNs, and justification of design choices of the method (see questions below). This makes the paper somewhat incomplete and hinders the presentation of the method and results. I think if these items are addressed, it would improve the overall quality of the paper.\"], \"questions\": [\"Lines 245 - 246: Beyond use for benchmarking, what is the justification of classifying negative edges as non-existing edges? Why not take the absolute value over $\\\\mathbf{W}$ and consider all edges as positive edges?\", \"The authors state that the structural hamming distance (SHD) metric would strongly favour predictions of empty graphs due to sparsity of ground truth GRNs (Lines 318 - 320). Do the methods tend to return empty graphs such that this would be the case? If the methods do return empty graphs, would this not be reflected in the other metrics, while still getting a performance metric through SHD that gives some global view on how the methods perform on predicting the GRN graph.\", \"For the experiments in 4.1 and 4.2, why not compute other additional metrics? For instance, given that the data has intervention, the negative log-likelihood of data points under intervention (I-NLL, done in DCDFG) could be computed. 
Possibly area under the ROC (AUROC) could also be computed to give an additional view of the results.\", \"In Figures 2 and 4, what are the mean and std for box plots computed over? Different graphs?\", \"For performance evaluation, the authors threshold $\\\\mathbf{W}$ to construct a binary adjacency matrix so that the evaluation metrics can be computed. This is done by selecting some small value $\\\\epsilon$ and setting edge weights below $\\\\epsilon$ to $0$ and above $\\\\epsilon$ to $1$. The determination of $\\\\epsilon$ depends on a parameter $c$. This seems like a limiting factor in the sense that as $c$ is changed, performance may change\", \"How do you decide the value of $c$? Is this value determined via SERGIO?\", \"Is this treated as a settable parameter? If so, how would the results change as the value of $c$ is changed?\", \"On the SERGIO simulated data the authors claim that PerturbODE gives significantly higher precision, recall, and F1 scores compared to DCDFG, NOTEARS, and NOTEARS-LR, while achieving comparable scores to DCDI, especially in the high dimensional case (400 genes). Compared to DCDI, it appears PerturbODE also yields a significantly higher variance on Recall and F1 score, while performing overall worse on precision. Is there intuition as to why PerturbODE yields such a high variance on Recall? I assume since there is variance on recall, this leads to variance on F1 score as well.\", \"For the task of predicting the response of unseen interventions (in Table 1), is there intuition as to why PerturbODE performs worse on the 2-Wasserstein metric, but significantly better on Pearson correlation? In general, this is a very difficult task. It could help to observe how the numbers for these metrics look for seen interventions (see earlier question regarding Interventional-NLL).\", \"For the result presented in Figure 4, the authors state GRN inference is done for a system of 817 genes. But the ground truth GRNs shown in Appendix 6.7 have far fewer genes. 
Are the GRNs shown in Appendix 6.7 Figures 15, 16, and 17 only the non-zero edges in the ground truth GRNs?\", \"If the objective is to learn a flow map for pushing initial samples over time (eq 3), why not use the recent advances in flow matching [4], which can achieve the same result in a significantly more efficient manner using simulation-free training, compared to Neural ODEs? This could potentially enable further scaling.\"], \"references\": \"[1] Pratapa, Aditya, et al. \\\"Benchmarking algorithms for gene regulatory network inference from single-cell transcriptomic data.\\\" Nature methods 17.2 (2020): 147-154.\\n\\n[2] Lotfollahi, Mohammad, et al. \\\"Predicting cellular responses to complex perturbations in high\\u2010throughput screens.\\\" Molecular systems biology 19.6 (2023): e11517.\\n\\n[3] Roohani, Yusuf, Kexin Huang, and Jure Leskovec. \\\"Predicting transcriptional outcomes of novel multigene perturbations with GEARS.\\\" Nature Biotechnology 42.6 (2024): 927-935.\\n\\n[4] Lipman, Yaron, et al. \\\"Flow Matching for Generative Modeling.\\\" International Conference on Learning Representations. (2023)\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": [\"In response to weaknesses:\", \"It is a good point that there are large discrepancies in the number of edges predicted. We have included the recall scores plotted against different sparsity levels for the experiment on the TF Atlas (see appendix 6.53). In addition, I have now included AUPRC for the results on simulated data. I have also added a statistical inference procedure with stability analysis in appendix 6.11 for further evaluation of edge selection.\", \"Large-scale GRN discovery methods tend to overpredict false positives. 
PerturbODE is intended to be a method that identifies potential gene regulatory networks for biologists, who would conduct more in-depth experiments to manually validate them, preferably with ChIP-seq data.\", \"PerturbODE is designed to train on datasets with hundreds of perturbations with known interventional targets. Uncertainty could arise when discerning which genes are under perturbation given drug treatment and environmental stress. However, it would be interesting to see PerturbODE training on these datasets (drug treatment and environmental stress) with multiple time points in the future.\", \"Thank you for your suggestion! I have now included a gene enrichment analysis in section 4.2.3 and more details in appendix 6.12. I also plotted histograms of genes selected by modules in appendix 6.12.\", \"It is a great suggestion to compare to methods in BEELINE. It is a good future direction that will be included in a later draft of this work.\", \"Other ODE methods, such as [1] and [2], were mostly designed for time series data with a single perturbation or no perturbation. Further, Phoenix processes data as aligned pseudotrajectories. It is a completely different task, as PerturbODE deals with cells that are not aligned. In theory, we could infer pseudotime on TF Atlas and use pseudotrajectories or pseudotime as time series data, but this would require substantial changes to the current architecture. However, it would be interesting to compare PerturbODE on time series data for future evaluation.\"], \"in_response_to_questions\": [\"Log-scaled p-values have been included in the results.\", \"Decay is linear, and it should always be present in the biological system. At equilibrium the decay is balanced out with basal and regulatory expressions. For instance, let W be a positive scalar representing decay, and b be a positive scalar for basal expression. X(t) is a dynamical system describing the evolution of gene expressions. dX/dt = b - WX. 
At equilibrium, b - WX* = 0. Hence, X* = b/W. Decay and basal expression still happen simultaneously, but they cancel each other out at every instant. I hope that I understood your question correctly. If not, feel free to make further comments. (Decay exists mostly to maintain stability. From an ODE perspective, it creates a trapping region that ensures a maximum gene expression level. Since the MLP part of the ODE is bounded, as X grows larger, the linear decay -WX will eventually dominate and decrease expression again.)\", \"It would be interesting to include experiments on datasets with multiple time points in a later draft of this work. Thank you for your suggestion!\", \"[1] Hossain, I., Fanfani, V., Fischer, J., Quackenbush, J., & Burkholz, R. (2024). Biologically informed NeuralODEs for genome-wide regulatory dynamics. bioRxiv. https://doi.org/10.1101/2023.02.24.529835\", \"[2] Jackson, C. A., Beheler-Amass, M., Tj\\u00e4rnberg, A., Suresh, I., Shang-mei Hickey, A., Bonneau, R., & Gresham, D. (2023). Simultaneous estimation of gene regulatory network structure and RNA kinetics from single cell gene expression. Center for Genomics and Systems Biology, New York University.\"]}", "{\"comment\": \"In response to weaknesses:\\n\\n1. PerturbODE doesn't outperform DCDI, but DCDI suffers in scalability. Therefore, DCDFG was developed with additional low-rank structure to improve scalability. I have now given more discussion in section 4.1. Further, it would be interesting to include more experiments with cyclic networks simulated by BoolODE in a future draft of this work, as suggested by another reviewer. \\n\\n2. More discussion about performance variations has been added in section 4.1.\", \"in_response_to_questions\": \"1. Thank you for your question. Model complexity could be varied by changing the number of modules, which is a hyperparameter. See appendix 6.2.2 for the change in validation loss with a varying number of modules used.\\n\\n2. 
PerturbODE's main contribution is a highly scalable and biologically realistic approach to discover gene regulatory networks at large scale from perturbation data. Gene modules are discovered directly from model parameters, which allows efficient computation as well as adherence to biological realism. The gene module structure embedded in a two-layer MLP is inspired by the real network motif of multiple-output feedforward loops known to be prevalent in yeast and E. coli. (see Appendix 6.10)\\n\\n3. PerturbODE is intended to be a method that identifies potential gene regulatory networks for biologists, who would conduct more experiments to validate them with additional experimental ChIP-seq data. We argue the model's capacity to generalize to unseen perturbations does demonstrate its usefulness for application, as the modules it's learning are capturing something real about the biology. \\n\\n4. Ideally, we would conduct perturbation experiments. It could be an interesting future direction with the right resources.\"}", "{\"summary\": \"This paper introduces PerturbODE, a novel framework that uses neural ordinary differential equations (ODEs) to model gene regulatory networks (GRNs) from single-cell perturbation data. PerturbODE addresses limitations of existing causal graph discovery methods by (i) incorporating biologically-informed ODEs to model cell state trajectories under perturbations, (ii) performing estimation in a latent space, and (iii) deriving GRNs from the ODE parameters by aligning the latent and input spaces. The method can capture non-linear and cyclic gene interactions, which is an advantage over many existing methods, and maps cell states to a lower-dimensional \\\"gene module\\\" space, which provides high-level insights into the system. Experiments on simulated and real single-cell RNA-seq datasets show PerturbODE outperforms existing methods in GRN inference and prediction tasks. 
The authors demonstrate PerturbODE\\u2019s ability to recover biologically meaningful network structures like negative autoregulation.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"allows modeling cyclic GRNs, which is not addressed that much in the literature yet. The authors also provide references that this is an important aspect of biological systems\", \"the model is quite intuitive in its formulation\"], \"weaknesses\": [\"It's very hard to read Figure 5\", \"I am wondering how identifiable the model is and what number of perturbations is sufficient for this.\", \"there is no extension towards entropy-regularized OT\", \"an ablation study on the gamma parameter (scaling the sparsity effect) would be interesting to see\"], \"typos\": [\"sometimes it's \\\"non-linear\\\" (l53) and sometimes it's \\\"nonlinear\\\" (e.g. l45)\"], \"questions\": [\"It seems a subset of the modules is recovering relevant biological pathways. However, it only seems to be a very small amount? Can you comment on the other 95% of the modules? Does a gene set enrichment analysis confirm your findings?\", \"Setting delta_j to a fixed value, here 1, would imply that all perturbations have the same strength on their genes. I guess this is not a reasonable assumption, since some perturbations won't have an effect at all?\", \"Can you clarify how you combine A and B exactly to get the GRNs?\", \"I'm happy to adjust my score based on the answers to the Questions & Weaknesses subsections.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I appreciate the efforts of the authors to address my concerns. I think the work has potential, but it needs to go through a round of revision before it is ready for publication. I maintain my current score and look forward to a future version that is more adequate. 
I hope the comments below will help improve the future version of this work.\\n\\n1. I understand that PerturbODE performs better at different levels of sparsity. I think this would have to be in the main text. I really do not think that the recall scores reported in the main text are informative given the large discrepancy in recall scores.\\n2. Not sure if I understood the response to my question number 2. Doesn't the large number of selected genes also mean that your method will also overpredict false positives?\\n3. Drug datasets would indeed be interesting to see. You can focus on datasets where the targets of the drugs are known.\\n4. I thank the authors for adding gene enrichment analysis results. Doing a similar thing for the baselines and comparing them might help understand which method is selecting more biologically relevant genes.\"}", "{\"summary\": \"The paper titled \\u201cInterpretable Neural ODEs for Gene Regulatory Network Discovery under Perturbations\\u201d presents PerturbODE, a neural network designed to model cell state trajectories and derive causal gene regulatory networks (GRNs) from its parameters. The authors propose a model that utilizes neural networks to infer changes in gene expression levels based on their current states. A notable contribution of this work is its ability to model GRNs with negative edges and explicitly specify perturbed genes. 
The study demonstrates PerturbODE\\u2019s effectiveness in deriving the ground truth GRN and highlights its potential to predict cellular responses to novel perturbations.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. PerturbODE offers excellent interpretability, with parameters that reveal clear relationships between genes and the inferred gene modules.\\n\\n2. The innovative approach of PerturbODE effectively captures key biological processes, such as cellular differentiation and negative feedback regulation.\\n\\n3. The authors have conducted comprehensive experiments, comparing PerturbODE with other methods on both simulated and large-scale perturbational scRNA-seq datasets. They also explore its capability to identify negative autoregulation and infer gene modules.\\n\\n4. The paper is well-organized, featuring clear explanations of the methodology.\", \"weaknesses\": \"1. It would be beneficial to include more discussions and experiments addressing instances where PerturbODE does not outperform other models.\\n\\n2. In Section 4.2.1, a more detailed discussion about performance variations could enhance clarity and understanding.\", \"questions\": \"1. Have you considered using a more complex or simpler model? How might this impact performance?\\n\\n2. Could you elaborate on how this work contributes to our understanding and discovery of gene modules?\\n\\n3. In the experimental phase, the paper primarily presents metric-based results, showing that the method outperforms others. However, in solving a specific GRN problem, the focus should be on whether the inferred GRN can identify key genes or transcription factors (TFs) that can then undergo downstream analysis. For a biological application, simply comparing metrics does not sufficiently demonstrate the model\\u2019s effectiveness.\\n\\n4. In GRN inference problems, perturbation experiments for some key genes are also an important downstream analysis. 
Including some perturbation experiments could help validate the accuracy of the inferred GRN.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for answering my many questions and implementing some of my suggestions!\\n\\nIn general, I am happy to see the authors implemented some of my feedback and improved the manuscript. With this, I am happy to raise my score from 3 -> 5. Unfortunately, I remain of the opinion that this work still requires further refinement, further justification of choices pertaining to the methodology, and further comparison with existing methods in the area.\"}", "{\"title\": \"In response to questions:\", \"comment\": [\"In response to questions:\", \"While other models infer conditional densities for each edge, PerturbODE directly infers positive regulation, negative regulation, or no edge. Evaluating edge existence using absolute values might benefit PerturbODE; however, I believe it's incorrect to label an edge as a true positive when the predicted edge is negative but the corresponding ground truth edge is positive.\", \"For SHD, consider a sparse graph with 500 edges and dimension 100x100. Method 1 predicts 300 of the 500 edges correctly but contains another 300 false positives. SHD for Method 1 would be 500, while SHD for the trivial empty graph is also 500, so SHD does not favor Method 1 over the empty graph. On the other hand, precision/recall for Method 1 would be 0.5 and 0.6, whereas precision/recall for an empty graph would be 0 and 0. It would, however, make sense to use SHD if the ground truth network is dense.\", \"For negative log likelihood, DCDFG learns conditional densities on each edge with predefined parameterization for the densities. For evaluation, unseen data is given directly to the model to compute the negative log likelihood (product of all conditional densities of edges). The same method would be impossible for PerturbODE since there is no predefined density. 
Additionally, I have now included AUPRC in the results.\", \"I have now added the mean and std in a table in appendix 6.5.1.\", \"It is indeed a limiting factor that thresholds affect results. Therefore, we have now included AUPRC in the results. Also, c isn\\u2019t a hyperparameter; c determines the threshold. It is chosen after training so that the model selects a reasonable number of edges (no more than 30%). (DCDFG uses a binary search to find the optimal threshold for the largest possible DAG.)\", \"There is considerable variation in recall scores for PerturbODE, especially in the simulated yeast dataset. This is likely due to the high sparsity in the ground truth GRN, which leads to weak signals in the simulated dataset. This results in false negatives. Further, an L1 penalty is enforced on the individual matrix. As multiplication of sparse matrices is not always sparse, the number of predicted edges tends to fluctuate. Denser predictions would have higher recall scores.\", \"For TF perturbations, when PerturbODE doesn't predict well it tends to shoot far off. However, in terms of the median W2 distance PerturbODE outperforms other models, and it does better in general (see appendix 6.5.2).\", \"The ground truth GRNs shown in appendix 6.7 are the known edges with high confidence. There is a limited number of well understood human GRNs as ground truths. It is an inevitable limitation in the field.\", \"Flow matching isn't concerned with the \\\"true\\\" vector field; the paper focuses on a conditional linear vector field that induces the correct final distribution. But we're interested in the path a cell \\\"actually\\\" takes (which is very unlikely to be linear in expression space), and we make biologically plausible assumptions via our parameterization of the ODE. 
For example, the decay term might not be learned in a flow matching objective, but is imperative for a biological model to guarantee cells reach a stationary state.\"]}", "{\"comment\": \"We are very much looking forward to future perturbation experiments mentioned in your paper. In fact, metrics alone often fail to capture the full picture. For example, in the field of computer vision, even if accuracy reaches 99%, it may still lack practical value. This is even more pronounced in biology, where metrics are a very weak form of evaluation. Only real-world applications that prove effective can truly demonstrate a model\\u2019s performance. Therefore, we will maintain our current score.\"}", "{\"summary\": \"The authors propose PerturbODE, a framework based on neural ODEs to infer causal GRNs from scRNA-seq data. PerturbODE aims to address limitations of prior methods regarding scalability, and modeling capacity. Two simulated experiments are performed as well as an experiment using real-life perturbation data (TF atlas).\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The study is well-motivated and the writing is clear. Addressing causality in GRN inference is important as most methods rely on gene co-expression networks.\", \"The proposed method scales well to large datasets, which is essential in an era where large-scale omics datasets are increasingly common.\", \"The model can handle both perfect and imperfect interventions, which can model a wide range of experimental conditions.\"], \"weaknesses\": \"My main concerns with this paper relate to experiments. It is not evident from the validation analysis that the proposed method is superior. Furthermore, the study would benefit from incorporating a larger number of real life datasets and baselines. 
See detailed comments below.\\n\\n- There is a large discrepancy in the number of selected edges between PerturbODE and the baselines for the atlas TF dataset (103,941 for PerturbODE and fewer than 500 for NO-TEARS and NO-TEARS-LR). It is not clear how meaningful the reported recall scores are given such discrepancy. Tuning parameters to select more edges in the baselines or including other baselines from causality literature [1-2] is necessary to provide a more accurate comparison.\\n- The total number of genes used in the TF atlas is 817, for a total of 667,489 possible edges. Selecting 100k from these (around 1/6) will likely inflate recall scores and, furthermore, is not very biologically plausible (you would expect around 3 interactions per gene on average [3]).\\n- Only one real experimental dataset is used (TF atlas) in the benchmark. Incorporating other datasets (e.g., from drug treatment, environmental stress) is necessary to support the claims made in this paper.\\n- The analysis is largely performed on edges and gene modules. More biological validation, e.g., gene enrichment analysis of top regulated genes, would enhance the biological validity of the model.\\n- Comparing against established methods for (non-causal) GRN inference (e.g., from the BEELINE benchmark [4]) would help the reader determine the utility of such causal models.\\n- Other methods have proposed neural ODEs for GRN inference [5]. A comparison with these or explanation why they are not applicable may be necessary.\\n\\n[1] [https://arxiv.org/abs/2301.01849](https://arxiv.org/abs/2301.01849#)\\n\\n[2] https://arxiv.org/abs/2107.10483\\n\\n[3] https://academic.oup.com/nar/article/50/W1/W398/6591524\\n\\n[4] https://www.nature.com/articles/s41592-019-0690-6\\n\\n[5] https://genomebiology.biomedcentral.com/articles/10.1186/s13059-024-03264-0\", \"questions\": [\"Would help to show $-\\\\log_{10}$(p-values) rather than p-values in plots.\", \"How important is the decay component in (1)? 
RNA decay may or may not be plausible for cells 7 days after perturbation.\", \"As I understand it, the TF atlas consists of 2 time points. Can the method work with scRNA-seq datasets with multiple time points?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}" ] }
7zPd1TjRc1
SANIA: Polyak-type Optimization Framework Leads to Scale Invariant Stochastic Algorithms
[ "Farshed Abdukhakimov", "XIANG CHULU", "Dmitry Kamzolov", "Robert M. Gower", "Martin Takáč" ]
Adaptive optimization methods are widely recognized as among the most popular approaches for training Deep Neural Networks (DNNs). Techniques such as Adam, AdaGrad, and AdaHessian utilize a preconditioner that modifies the search direction by incorporating information about the curvature of the objective function. However, despite their adaptive characteristics, these methods still require manual fine-tuning of the step-size. This, in turn, impacts the time required to solve a particular problem. This paper presents an optimization framework named SANIA to tackle these challenges. Beyond eliminating the need for manual step-size hyperparameter settings, SANIA incorporates techniques to address poorly scaled or ill-conditioned problems. We also explore several preconditioning methods, including Hutchinson's method, which approximates the Hessian diagonal of the loss function. We conclude with an extensive empirical examination of the proposed techniques across classification tasks, covering both convex and non-convex contexts.
[ "optimization", "learning rate free", "adaptive optimizers", "polyak step-size" ]
Reject
https://openreview.net/pdf?id=7zPd1TjRc1
https://openreview.net/forum?id=7zPd1TjRc1
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zmXVJPWAIL", "xZw3Fp7SUo", "tiUDWmdCfs", "r0H2UNBQuu", "oJuFIHt79Z", "nETZIBuTEN", "kXSmUlg8qR", "kPEap9d8gH", "in6jeRwCNb", "hRg9q2fBp1", "gDGobntFk4", "cRaJbjLOpT", "bWNH4xYz3W", "YUH5RV1ckz", "V66YabssxT", "TUo1ipLcQK", "SL3sXSV7Yv", "RmwJBBB4Rk", "NshwhsbLIB", "KbNHXbLtn4", "I7CFIx9XcG", "GLrCSk5obB", "FhxDeIvsxR", "EJmAHSG4gH", "7Rga5B7fif", "4P9U2LirUw", "4EXpN5IZzG", "3ICMSvmRY8", "1ypd1gRiVE", "1DQNFtI4aL", "11GUqcc1GS", "0PQJ773L9F" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "meta_review" ], "note_created": [ 1732387953068, 1732387030641, 1733014875935, 1732817710631, 1732387578992, 1732388139304, 1732387632090, 1730513305200, 1733175509974, 1733302286390, 1732387167385, 1732388094217, 1732388028461, 1732817828430, 1733144495396, 1730706344210, 1732817763250, 1730399607876, 1732386966047, 1732386659546, 1732387407450, 1732387231813, 1732386774705, 1737523435164, 1733302472111, 1732387862083, 1732817669455, 1733161556566, 1732387696629, 1731425137660, 1732387759565, 1735022035706 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1095/Authors" ], [ "ICLR.cc/2025/Conference/Submission1095/Authors" ], [ "ICLR.cc/2025/Conference/Submission1095/Reviewer_iCAR" ], [ "ICLR.cc/2025/Conference/Submission1095/Authors" ], [ "ICLR.cc/2025/Conference/Submission1095/Authors" ], [ "ICLR.cc/2025/Conference/Submission1095/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission1095/Authors" ], [ "ICLR.cc/2025/Conference/Submission1095/Reviewer_zkYN" ], [ "ICLR.cc/2025/Conference/Submission1095/Authors" ], [ "ICLR.cc/2025/Conference/Submission1095/Authors" ], [ "ICLR.cc/2025/Conference/Submission1095/Authors" ], [ "ICLR.cc/2025/Conference/Submission1095/Authors" ], [ "ICLR.cc/2025/Conference/Submission1095/Authors" ], [ "ICLR.cc/2025/Conference/Submission1095/Authors" ], [ "ICLR.cc/2025/Conference/Submission1095/Reviewer_jmo1" ], [ "ICLR.cc/2025/Conference/Submission1095/Reviewer_zTra" ], [ "ICLR.cc/2025/Conference/Submission1095/Authors" ], [ "ICLR.cc/2025/Conference/Submission1095/Reviewer_iCAR" ], [ "ICLR.cc/2025/Conference/Submission1095/Authors" ], [ "ICLR.cc/2025/Conference/Submission1095/Authors" ], [ "ICLR.cc/2025/Conference/Submission1095/Authors" ], [ "ICLR.cc/2025/Conference/Submission1095/Authors" ], [ "ICLR.cc/2025/Conference/Submission1095/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1095/Authors" ], [ "ICLR.cc/2025/Conference/Submission1095/Authors" ], [ "ICLR.cc/2025/Conference/Submission1095/Authors" ], [ "ICLR.cc/2025/Conference/Submission1095/Reviewer_zTra" ], [ "ICLR.cc/2025/Conference/Submission1095/Authors" ], [ "ICLR.cc/2025/Conference/Submission1095/Reviewer_jmo1" ], [ "ICLR.cc/2025/Conference/Submission1095/Authors" ], [ "ICLR.cc/2025/Conference/Submission1095/Area_Chair_wsxe" ] ], "structured_content_str": [ "{\"title\": \"Clarification on $f_i^*$\", \"comment\": \"> **Reviewer:** \\\"In Line 209, what is the meaning of 'Otherwise, we replaced step-size parameter \\u03b3t to parameter fi\\u2217?' The authors should explain more clearly.\\\"\\n\\nWe recognize that this statement required clarification. It has been revised, and we trust it is now clearer. 
Please refer to the end of page 4 for the updated wording.\"}", "{\"title\": \"Link between scale invariance and using SANIA\", \"comment\": \"> **Question**: Could the authors provide a formal link between scale invariance and using SANIA? It seems that there is some overlap between the two, but SANIA does not ensure scale invariance by itself. It would be interesting for potential users of SANIA to provide conditions for their algorithm to be scale-invariant.\\n\\n**Response**: It is accurate that SANIA does not guarantee scale invariance by itself and to design an algorithm that enjoys this property one must ensure the following conditions are satisfied:\\n\\n1. *Diagonal Transformations*: The algorithm should behave identically under transformations of the form \\\\( w \\\\to Vw \\\\), where \\\\( V \\\\) is a diagonal, non-degenerate matrix. This ensures it is independent of feature scaling.\\n\\n2. *Invariant Preconditioning*: Any preconditioning applied, such as in adaptive gradient methods, must adapt only to scales (diagonal transformations) and not to rotations. This implies avoiding operations like taking square roots of preconditioners if they break scale invariance.\\n\\n3. *Scale-Invariant Updates*: The step size or learning rate, denoted as \\\\( \\\\gamma_t \\\\), should \\nremain unaffected by scaling transformations. This requires careful design of step-size rules.\\n\\n4. *Consistent Function Values*: The function values and gradients evaluated during optimization must remain consistent after scaling, ensuring identical convergence paths.\\n\\n5. *Bijective Mapping*: There must be a one-to-one mapping between the scaled and original parameter spaces, ensuring equivalent updates in both spaces.\\n\\nThese principles are critical for algorithms like SANIA AdaGrad-SQR and SANIA Adam-SQR, which maintain identical performance across scaled datasets. 
For more proofs and details please refer to Appendix D.1 and D.2.\"}", "{\"comment\": \"Thank you for the thorough response. I have no more questions and will keep my score the same.\"}", "{\"title\": \"Awaiting further discussion\", \"comment\": \"Dear Reviewer zTra,\\n\\nThank you again for your thoughtful feedback on our paper. We have addressed all the questions and concerns you raised in our responses. We wanted to kindly follow up to see if you have any additional questions or require further clarifications. If our responses address your concerns, we would greatly appreciate it if you could consider updating your score to reflect this.\\n\\nThank you for your time and consideration.\"}", "{\"title\": \"KATE and Sophia\", \"comment\": \"We have included updated experiments with KATE, however due to time constraints we are still running experiments with Sophia. Nevertheless, we have found both of them to be highly relevant to our research and included in Lemma 5 in Appendix as special cases of SANIA General Framework\\n\\nThank you again for your valuable feedback and thoughtful suggestions. \\n\\nPlease let us know if some concerns are still unanswered.\"}", "{\"title\": \"Derivation of SPS\", \"comment\": \"> **Reviewer**: \\\"In Page 2, SPS has been derived, why not just cite it?\\\"\\n\\nThe main reason for including the derivation of SPS is to provide intuition for readers who may not be familiar with this method. We believe that presenting this compact and concise derivation enhances the coherence and accessibility of our paper without detracting from the key ideas discussed later. \\n\\nLastly, we have corrected minor grammatical errors and addressed the missed citations you kindly pointed out. 
\\n\\nThank you once again for your valuable feedback.\"}", "{\"title\": \"Rebuttal to Reviewer zkYN\", \"comment\": \"**Dear Reviewer zkYN,**\\n\\nWe sincerely thank you for your thoughtful review of our work and for highlighting both the strengths and weaknesses of our proposed method. We deeply appreciate your acknowledgment of the soundness, ease of realization, and presentation quality of our approach. Your constructive feedback and insightful questions have guided us to identify areas for improvement and further experimentation. Below, we provide detailed responses to the points raised under the weaknesses and questions sections.\"}", "{\"summary\": \"The work tries to incorporate a Polyak-type optimization framework with existing optimizers for machine learning. In the new method, the optimization direction and step size are spontaneously determined by a simple local optimization for each step. The new method is not difficult in realization and appears to work. From the numerical results, the test accuracy of a network trained with the new method is comparable to existing methods (but not better than them).\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The new method can be naturally incorporated with existing methods and the realization is not difficult.\", \"weaknesses\": \"From the numerical results, the new method does not show improvements compared to previous methods.\", \"questions\": \"1. Instability of the Euler method and noise are believed to be helpful for generalization (see Ref [1]). A relatively large step size is helpful for the training process to jump out of bad local minima. In the new framework, when the steps are spontaneously obtained, the instability is also almost inhibited spontaneously. This may be a severe problem for the new framework.\\n\\n2. From the numerical results, it seems that the test accuracy of SANIA is slightly worse than existing methods. 
The author should also compare the results with SGD, since it usually has competitive generalization ability.\\n\\n3. In Line 209, what is the meaning of \\\"Otherwise, we replaced step-size parameter \\u03b3t to parameter fi\\u2217?\\\". The authors should explain more clearly.\\n\\n4. Due to the anisotropic property of the loss landscape, the estimate f* may not be a good guess for the local optimization. How will this affect the performance of the new method?\\n\\n[1] The Implicit Regularization of Dynamical Stability in Stochastic Gradient Descent, Lei Wu, Weijie J. Su, ICML 2023;\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"TO ALL REVIEWERS\", \"comment\": \"Dear Reviewers,\\n\\nWith the discussion period extended, we would like to take this opportunity to further refine and improve our paper based on your insights. We have thoroughly addressed all your questions and concerns but feel the discussion has not yet reached its full potential. \\n\\nCould you kindly advise us on any additional steps we could take for you to consider revising your score?\\n\\nWe deeply appreciate the time and effort you have dedicated to reviewing our work and value your feedback greatly.\\n\\nBest regards,\\nAuthors\"}", "{\"title\": \"Thank You for Your Feedback \\u2013 Kindly Consider a Score Update\", \"comment\": \"Dear Reviewer jmo1,\\n\\nThank you once again for your thorough review and constructive feedback, which have significantly contributed to improving our paper. We are glad to hear that all your concerns have been addressed and that you have no further questions or remarks. \\n\\nGiven the originality, relevance, and correctness of our work, as well as the improvements made, such as clarifying SANIA notations, correcting Figure 1, and including training loss plots, we believe the revised submission provides substantial value to the community. 
\\n\\nIf you feel our revisions fully address your feedback, we kindly ask you to consider updating your score to reflect this. We greatly appreciate your time, effort, and support throughout this process.\"}", "{\"title\": \"Minor errors\", \"comment\": \"Finally, we have updated Figure 1 based on your feedback and added training loss to our plots (Figures 5, 6, and 7). We hope you find the revised version to be an improvement.\\n\\nThank you once again for your insightful comments and suggestions.\"}", "{\"title\": \"Rebuttal to Reviewer iCAR\", \"comment\": \"**Dear Reviewer iCAR,**\\n\\nWe sincerely appreciate your thorough feedback and constructive comments, which we believe will significantly enhance the clarity and readability of our paper. We are deeply encouraged that you found our work novel and interesting. Moving forward, we are committed to addressing your inquiries and concerns in detail. \\n\\nYour observation regarding numerical experiments comparing SANIA to other tuning-free methods is highly relevant and aligns with one of the directions we are actively pursuing in our ongoing research. \\n\\nWe apologize for not providing a clearer definition of the \\u201cinterpolation condition.\\u201d To address this, we have added a concise paragraph that offers additional details and intuition about this assumption on page 3.\"}", "{\"title\": \"Estimate of $f^*$ and local optimization\", \"comment\": \"> **Reviewer:** \\\"Due to the anisotropic property of the loss landscape, the estimate f\\u2217 may not be a good guess for the local optimization. How will this affect the performance of the new method?\\\"\\n\\nCould the reviewer kindly elaborate on this comment?\\n\\nWe are grateful for your feedback and will incorporate detailed responses and additional experiments to address your concerns in the revised manuscript. 
Thank you for your valuable insights and suggestions.\"}", "{\"title\": \"Awaiting further discussion\", \"comment\": \"Dear Reviewer iCAR,\\n\\nThank you again for your thoughtful feedback on our paper. We have addressed all the questions and concerns you raised in our responses. We wanted to kindly follow up to see if you have any additional questions or require further clarifications. If our responses address your concerns, we would greatly appreciate it if you could consider updating your score to reflect this.\\n\\nThank you for your time and consideration.\"}", "{\"comment\": \"Dear Authors,\\n\\nThank you for your answer. I have no more questions or remarks, and I have decided to maintain my rating.\"}", "{\"summary\": \"This paper introduces cubic Newton method with adaptive Polyak step-size, enhancing robustness and efficiency in non-convex tasks. It also proposes scale-invariant variants of AdaGrad and Adam, which improve optimizer performance on poorly scaled data. Extensive experiments validate the effectiveness of these methods across diverse optimization scenarios.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Thorough theoretical analysis.\\n2. Clear writing\", \"weaknesses\": \"1. It would be great to have theoretical/practical comparisons with Sophia [1], which also uses Hutchinson's based method to compute their preconditioner.\\n2. KATE [2] removes square root to ensure scale invariance of Adagrad. It would be great to have theoretical/empirical comparison with KATE. \\n\\nI am willing to improve the score if the above are addressed.\\n\\n[1] Liu, Hong, et al. \\\"Sophia: A scalable stochastic second-order optimizer for language model pre-training.\\\" arXiv preprint arXiv:2305.14342 (2023).\\n[2] Choudhury, Sayantan, et al. 
\\\"Remove that Square Root: A New Efficient Scale-Invariant Version of AdaGrad.\\\" arXiv preprint arXiv:2403.02648 (2024).\", \"questions\": \"What are your thoughts on how learning rate schedules such as cosine decay etc compose with $\\\\lambda_t$ schedule defined in (19)?\\n\\nHow does $\\\\lambda_t$ change during training?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Awaiting further discussion\", \"comment\": \"Dear Reviewer zkYN,\\n\\nThank you again for your thoughtful feedback on our paper. We have addressed all the questions and concerns you raised in our responses. We wanted to kindly follow up to see if you have any additional questions or require further clarifications. If our responses address your concerns, we would greatly appreciate it if you could consider updating your score to reflect this.\\n\\nThank you for your time and consideration.\"}", "{\"summary\": \"This paper presents SANIA, which is a method that doesn't require manual fine-tuning of the learning rate in commonly used stochastic optimization algorithms, leading to faster optimization. The authors present a framework which generalises common stochastic optimization algorithms. The authors also consider affine and scale invariance which seeks to address poorly scaled or ill-conditioned problems. The authors compare performance of SANIA with commonly used algorithms (which have had fine-tuned parameters) for training classifiers for MNIST, FashionMNIST, CIFAR10 and SVHN.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Provides a novel formulation of stochastic optimization algorithms and a novel method that does not require fine-tuning of the learning rate parameter. Investigation into scale invariance is novel. Theorems, equations and ideas are presented clearly. 
Numerical experiments show that the SANIA methodology achieves similar performance to algorithms with fine-tuned learning rates, and improved robustness as the training curves fluctuate less.\", \"weaknesses\": \"Numerical experiments lack a comparison to other options for tuning-free methods.\", \"minor_comments\": \"Occasional incorrect grammar, including after equation (7): \\\"This leads us to Stochastic Polyak step-size method.\\\" should read \\\"This leads us to the Stochastic Polyak step-size method.\\\" Also bad grammar in the statement of Theorem 1. \\\"Another way to derive this formulation is by solving\\\" (4), I think it may be better to reference appendix B.2 to make it clear why this holds. \\\"interpolation condition\\\" isn't very clearly defined in my opinion, it may be better to be more clear. \\\"in practice it displays better convergence to the true Hessian than other\\nsimilar methods like BFGS\\\" - I think this needs a reference.\", \"questions\": \"In Page 2, SPS has been derived, why not just cite it? The difference between g_t and m_t is unclear to me, these seem to often mean the same thing? Why not keep the notation consistent? In (6) you have written \\\\| w \\\\|_{B_t} is a Euclidean norm, would it be better to say \\\" \\\\| \\\\cdot \\\\|_{B_t} is a Euclidean norm\\\"?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Section 2: SANIA notations\", \"comment\": \"> **Reviewer:** \\\" At the end of Section 2, the authors use the notations: \\\"SANIA $I_d$\\\", \\\"SANIA $(V^{-1})^2$\\\" and \\\"SANIA $\\\\mathrm{diag}(H^{-1})$\\\", while writing that there is no preconditioning. Thus, I understand that \\\"SANIA $A$\\\" refers to Eqn. (6) with $D_t = A$. But there is no formal definition of \\\"SANIA $A$\\\". It should appear somewhere. 
\\\"\\n\\nWe apologize for any misunderstanding regarding the notation used to describe different preconditioning matrices for SANIA. In Section 2.4 (Affine and Scale Invariance), we discuss several matrices used for preconditioning in SANIA to illustrate scale invariance (note that choice of $I$ as preconditioning will not lead to invariance). Before detailing them, we would like to remind you that the update rule for these methods is described in Eq. (18): \\n$$\\nw_{t+1} = w_t - \\\\lambda_t B_t^{-1} m_t.\\n$$\\nIn this update rule, the selection of \\\\( B_t \\\\) plays a crucial role in the algorithm, and Section 2.4 proposes choices for \\\\( B_t \\\\) that ensure SANIA remains scale invariant. \\n\\n1. **SANIA $\\\\(diag((V^{-1})^2)\\\\)$**: One option is to utilize the squared inverse of the vector $\\\\( V \\\\)$ used to scale the dataset, which simulates badly scaled data. \\n2. **SANIA $\\\\(diag(H^{-1})\\\\)$**: Another approach is to calculate the diagonal of the Hessian matrix. \\n3. **SANIA $\\\\(\\\\mathit{I}_d\\\\)$**: As a baseline ablation study, we include SANIA $\\\\(\\\\mathit{I}_d\\\\)$, where $\\\\( B_t = \\\\mathit{I}_d \\\\)$, effectively using no preconditioner. \\n\\nSince Section 2.4 does not mention SANIA $\\\\( A \\\\)$, could you please clarify what you are referring to as SANIA $\\\\( A \\\\)$?\"}", "{\"title\": \"REBUTTAL TO ALL REVIEWERS\", \"comment\": \"We would like to sincerely thank all the reviewers for their careful reading of our submission and their thoughtful and constructive feedback. We greatly appreciate the time and effort spent on reviewing our work and for providing suggestions that will help improve our paper.\\n\\nWe are grateful that all reviewers recognized several key strengths of our paper. Reviewer jmo1 appreciated the originality of our work and acknowledged that it is relevant to the community, particularly highlighting the practical application of the generalized Polyak step-size in second-order methods. 
Reviewer zTra commended the thorough theoretical analysis and clear writing, noting the robustness and efficiency of our proposed method across diverse non-convex optimization scenarios. Reviewer zkYN emphasized the ease of integrating our framework with existing optimizers and mentioned the excellent presentation of our ideas. Finally, Reviewer iCAR highlighted the novelty of our formulation, especially the investigation into scale invariance, and acknowledged the robustness of SANIA in comparison to fine-tuned algorithms. These strengths collectively underscore the significance and potential impact of our contribution to the field.\\n\\nIn response to the issues raised, we have carefully addressed all the points mentioned in the reviews, including clarifications, additional discussions, and improvements to the experimental section. We have uploaded a revised version of the paper, which includes these updates and directly addresses all the comments and suggestions. We hope that these changes address your concerns and improve the overall quality of our submission.\\n\\nThank you again for your valuable feedback and suggestions. We are confident that these improvements have strengthened our paper significantly.\"}", "{\"title\": \"Learning rate schedules\", \"comment\": \"> **Question**: What are your thoughts on how learning rate schedules such as cosine decay, etc., compose with the $\\\\lambda_t$ schedule defined in (19)? How does $\\\\lambda_t$ change during training?\\n\\nIt is true that learning rate schedules such as exponential decay, cosine annealing, and other procedures are widely used in practice. However, one problem persists \\u2013 different schedules are often suited to different scenarios. For example: \\n1) **Exponential decay** works best for problems where a steady reduction in the learning rate is advantageous. \\n2) **Reduce-on-plateau** is ideal for training pipelines where the convergence speed varies significantly. 
\\n3) **Cosine annealing** is well-suited for tasks that benefit from smooth reductions in the learning rate, such as fine-tuning. \\n\\nIn contrast, the $\\\\lambda_t$ schedule has an update rule directly influenced by the function landscape, which may alleviate the need for manually selecting a specific learning rate schedule. To provide insights into the behavior of $\\\\lambda_t$, we have included numerical experiments demonstrating the evolution of SANIA\\u2019s step size compared to fine-tuned methods and methods employing learning rate schedules. \\n\\nDue to time constraints, we present these experiments using a synthetic dataset. Nonetheless, we are eager to investigate the behavior of $\\\\lambda_t$ in large-scale problems as part of future work. Please find the aforementioned experiments in Figure 10 of the Appendix in the revised version of our paper.\"}", "{\"title\": \"Rebuttal to Reviewer zTra\", \"comment\": \"**Dear Reviewer zTra,**\\n\\nWe are extremely delighted to learn that you found our work easy to understand, and we sincerely thank you for appreciating our theoretical analysis. We have carefully studied the two papers you kindly shared with us and found both to be highly relevant to our work. We have now referenced and discussed both of these works in the revised version of our paper.\"}", "{\"title\": \"Rebuttal to Reviewer jmo1\", \"comment\": \"**Dear Reviewer jmo1,**\\n\\nWe sincerely thank you for your thorough review, positive comments on the originality and clarity of our paper, and for recognizing its significance to the optimization community. 
We have carefully considered your remarks, concerns, and questions, and we will do our utmost to address them effectively.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Plans to Include Sophia Results in Ongoing Experiments\", \"comment\": \"Dear Reviewer zTra,\\n\\nThank you for your thoughtful feedback and for highlighting the importance of empirical comparisons with Sophia and KATE. \\n\\nWe have included updated experiments with KATE and discussed both works as special cases in our general framework. \\n\\nRegarding Sophia, we are pleased to inform you that we are already running experiments, and the results will be included in future revisions. We greatly appreciate your recognition of the theoretical analysis and practical relevance of our work, and if our ongoing efforts address your concerns, we would be grateful if you could consider an adjustment to your score.\"}", "{\"title\": \"Numerical comparison to SGD\", \"comment\": \"> **Reviewer:** \\\"From the numerical results, it seems that the test accuracy of SANIA is slightly worse than existing methods. The author should also compare the results with SGD, since it usually has competitive generalization ability.\\\"\\n\\nWe agree that SGD is a strong baseline with well-established generalization capabilities. To address this, we have conducted additional experiments comparing SANIA to manually fine-tuned SGD on real datasets. These comparisons have been included in Appendix E (Figure 5). The results indicate that SANIA performs competitively with SGD while eliminating the need for a tuned learning rate schedule, highlighting its practical utility.\\nDue to time constraints, we were unable to include large-scale experiments; however, we plan to explore a broader range of problems with SANIA in future work. 
If we obtain the results in time, we will add large-scale experiments by the end of the discussion deadline and upload the updated findings.\\n\\nThank you for this valuable suggestion.\"}", "{\"title\": \"Awaiting further discussion\", \"comment\": \"Dear Reviewer jmo1,\\n\\nThank you again for your thoughtful feedback on our paper. We have addressed all the questions and concerns you raised in our responses. We wanted to kindly follow up to see if you have any additional questions or require further clarifications. If our responses address your concerns, we would greatly appreciate it if you could consider updating your score to reflect this.\\n\\nThank you for your time and consideration.\"}", "{\"comment\": \"Thank you for the effort put in the rebuttal. I highly recommend comparing empirically with Sophia for future revisions. I decided to maintain my score.\"}", "{\"title\": \"Additional experiments\", \"comment\": \"> **Reviewer:** \\\"From the numerical methods, the new method does not show improvements compared to previous methods.\\\"\\n\\nWe acknowledge that SANIA\\u2019s performance, as presented, is comparable to existing methods and does not show significant improvements. One key difference, however, is that SANIA operates without requiring hyperparameter tuning, unlike traditional optimizers such as Adam or Adagrad, which often require careful tuning to achieve their best performance. This feature of SANIA can potentially help to save substantial amounts of resources. To further illustrate this, we have added additional experiments to Appendix E (Figures 3, 5, 6, 7) comparing SANIA\\u2019s performance against Adam, Adagrad, SGD and KATE under suboptimal tuning conditions and their best-tuned step sizes. 
These results highlight that SANIA\\u2019s performance remains robust across a wider range of scenarios.\"}", "{\"summary\": \"This paper proposes a training algorithm generalizing the \\\"Polyak step-size\\\" to second-order algorithms, such as cubic Newton or quasi-Newton.\\n\\nLet $f$ be a function to minimize and $w\\\\_t$ the current estimate of the argmin.\\nFor an order-1 method, the Polyak step-size comes down to choosing the step-size such that we would attain the minimum $w^*$ in one step if $f$ was affine between $w\\\\_t$ and $w^*$. For order-2 methods, the authors distinguish the metric used for the parameter space, denoted by $B_t$, and the metric related to the curvature of the local approximation of $f$, represented by a matrix $D_t$.\\n\\nThen, the authors propose variations of existing algorithms based on their generalization of Polyak step-size.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"# Originality\\n\\nAs far as I know, the work presented in this paper is original.\\n\\n\\n# Clarity\\n\\nOverall, the paper is easy to follow.\\n\\n\\n# Quality\\n\\nI am very grateful to the authors for providing additional experiments (Appendix E), which show not only test accuracy, but also test loss. 
It would have been even better to provide the training loss, since the proposed algorithms have been designed to improve the optimization process (not the generalization).\\n\\nOverall, the paper seems to be correct.\\n\\n# Significance\\n\\nThis paper provides practical uses of *SP2: A Second Order Stochastic Polyak*, Li et al., 2023. So the present paper is relevant for the community.\", \"weaknesses\": \"# Originality\\n\\nThis paper can be seen as a practical application of *SP2: A Second Order Stochastic Polyak*, Li et al., 2023.\\n\\n# Clarity\\n\\nAt the end of Section 2, the authors use the notations: \\\"SANIA $I_d$\\\", \\\"SANIA $(V^{-1})^2$\\\" and \\\"SANIA $\\\\mathrm{diag}(H^{-1})$\\\", while writing that there is no preconditioning. Thus, I understand that \\\"SANIA $A$\\\" refers to Eqn. (6) with $D_t = A$. But there is no formal definition of \\\"SANIA $A$\\\". It should appear somewhere.\\n\\nJust above Eqn. (9), one can read: $m_t = m_t$, which should be \\\"$m_t = g_t$\\\" (?).\\n\\nSeveral mistakes, Figure 1, first plot: \\n * the legend is difficult to understand, since several curves have the same label;\\n * some curves do not seem to correspond to their label: apparently, light green should be \\\"SANIA $(V^{-1})^2$\\\";\\n * the purple curve is dotted, while it is not dotted in the legend.\", \"questions\": \"Could the authors provide a formal link between scale invariance and using SANIA? It seems that there is some overlap between the two, but SANIA does not ensure scale invariance by itself. It would be interesting for potential users of SANIA to provide conditions for their algorithm to be scale-invariant.\\n\\nAddition to Fig. 
5: how does SANIA compare to concurrent methods in terms of training loss?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Noise and generalization problems\", \"comment\": \"> **Reviewer:** \\\"Instability of the Euler method and noises are believed to be helpful for generalization... This may be a severe problem for the new framework.\\\"\\n\\nIt is true that injecting noise can play a regularizing role in training for stochastic gradient descent methods. Please note that our method can, and does, take large step sizes, which allows it to jump out of local minima. Regarding the reviewer\\u2019s remark about our step size spontaneously inhibiting desired instabilities, we are unsure of the intended meaning. Could the reviewer kindly clarify their comment?\\nIf the concern pertains to whether SANIA\\u2019s step-size determination process prevents the addition of controlled noise or perturbations to the optimization process, the answer is no. In fact, it would be possible to explore how such modifications might enhance generalization performance, and we view this as an interesting avenue for future research.\\n\\nWe thank the reviewer for bringing the additional reference to our attention, and we will include it in the revised version of the paper.\"}", "{\"metareview\": \"This paper studies a general framework for Preconditioned and Second-order Polyak methods. In particular, the paper proposes the first Stochastic Cubic Newton method with Polyak step-size and also introduces new scale-invariant versions of AdaGrad and Adam, which make them invariant to some basis transformations. The paper also provides a few experiments to demonstrate the performance of the algorithm. 
The improvements (if any) are not very significant, but the authors highlight that the key benefit of this work is the limited need for hyperparameter tuning.\\n\\nThe reviews for the paper are very much on the borderline. The reviewers, while acknowledging that the work is interesting, also highlighted the lack of comparison with schedule-free optimization and the somewhat limited empirical evidence. After reading through the paper, I was extremely disappointed with the empirical evidence, and I believe these weaknesses are very important to address before publication. The paper's core claim is about eliminating the need for manual step-size hyperparameter setting; however, the experiments are done on extremely small settings (where hyperparameter tuning is of not much value) and lack proper comparisons. I think this paper would greatly benefit from thorough empirical analysis on larger settings (e.g., ImageNet, ResNet, ViT). I recommend rejection in the current form.\", \"additional_comments_on_reviewer_discussion\": \"The authors addressed some important concerns of the reviewers (especially related to the presentation). The authors also added additional comparisons. However, I think that the empirical analysis of the paper is extremely limited and does not support the core claims of the paper (as discussed in the metareview).\"}" ] }
7zNYY1E2fq
Block-Attention for Efficient Prefilling
[ "Dongyang Ma", "Yan Wang", "Tian Lan" ]
We introduce Block-attention, an attention mechanism designed to address the increased inference latency and cost in Retrieval-Augmented Generation (RAG) scenarios. Traditional approaches often encode the entire context in an auto-regressive manner. Instead, Block-attention divides retrieved documents into discrete blocks, with each block independently calculating key-value (KV) states except for the final block. In RAG scenarios, by defining each passage as a block, Block-attention enables us to reuse the KV states of passages that have been seen before, thereby significantly reducing the latency and the computation overhead during inference. The implementation of Block-attention involves block segmentation, position re-encoding, and fine-tuning the LLM to adapt to the Block-attention mechanism. Experiments on 11 diverse benchmarks, including RAG, ICL, and general domains, demonstrate that after block fine-tuning, the Block-attention model not only achieves performance comparable to that of full-attention models, but can also seamlessly switch between the block and full attention modes without any performance loss. Notably, Block-attention significantly reduces the time to first token (TTFT) and floating point operations (FLOPs) to a very low level. It only takes 45 ms to output the first token for an input sequence with a total length of 32K. Compared to the full-attention models, the TTFT and corresponding FLOPs are reduced by 98.7\\% and 99.8\\%, respectively. Additionally, in Appendix A, we elaborate on how Block-attention is applied in a Game AI scenario and the substantial potential benefits it entails. We strongly encourage researchers in the gaming field not to overlook this section.
[ "LLM", "RAG", "parallel context encoding", "efficient language model" ]
Accept (Poster)
https://openreview.net/pdf?id=7zNYY1E2fq
https://openreview.net/forum?id=7zNYY1E2fq
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xYPE7ez75o", "qGRQSQ4ljR", "pa8EHmyupt", "ln5ECY0vh9", "iT5cvYSge8", "hD6Swf1mvR", "h9EbliL4PT", "gBlUposq8E", "aTnXj58SJo", "a5TFJt6bPQ", "YrDTRrmWUd", "YMy0wSDsLp", "S3JehCshDJ", "N2R8vrMIhK", "M2oDy6nZk8", "7HsodqHGeZ", "6squzkH8tk", "4P31okS9Xs", "0TVXoRqhAi", "0PXGXAx54l" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732079768352, 1732657496836, 1732657276351, 1730615481789, 1737523546091, 1733189903289, 1732080072626, 1732866219605, 1732699063116, 1732092715898, 1732699000190, 1730693281253, 1732079013679, 1730766077359, 1730599454366, 1732092336482, 1734884029619, 1732802333565, 1732078854527, 1732083390309 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2970/Authors" ], [ "ICLR.cc/2025/Conference/Submission2970/Reviewer_MNR9" ], [ "ICLR.cc/2025/Conference/Submission2970/Reviewer_wUEZ" ], [ "ICLR.cc/2025/Conference/Submission2970/Reviewer_1DTJ" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2970/Reviewer_2J6u" ], [ "ICLR.cc/2025/Conference/Submission2970/Authors" ], [ "ICLR.cc/2025/Conference/Submission2970/Authors" ], [ "ICLR.cc/2025/Conference/Submission2970/Authors" ], [ "ICLR.cc/2025/Conference/Submission2970/Authors" ], [ "ICLR.cc/2025/Conference/Submission2970/Authors" ], [ "ICLR.cc/2025/Conference/Submission2970/Reviewer_wUEZ" ], [ "ICLR.cc/2025/Conference/Submission2970/Authors" ], [ "ICLR.cc/2025/Conference/Submission2970/Reviewer_2J6u" ], [ "ICLR.cc/2025/Conference/Submission2970/Reviewer_MNR9" ], [ "ICLR.cc/2025/Conference/Submission2970/Authors" ], [ "ICLR.cc/2025/Conference/Submission2970/Area_Chair_Dybj" ], [ 
"ICLR.cc/2025/Conference/Submission2970/Reviewer_wUEZ" ], [ "ICLR.cc/2025/Conference/Submission2970/Authors" ], [ "ICLR.cc/2025/Conference/Submission2970/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Thanks for your valuable suggestions and concerns. Most of your concerns are highly insightful, indicating that you are an expert in this field. We will provide a detailed explanation and address your concerns in the subsequent comments.\\n\\n**Weakness: Contribution and limitations compared to existing works. & Question 1, 2, 3** \\n\\nWe sincerely apologize for the lack of a clear explanation regarding the differences from PromptCache in our paper, as well as the missing reference: Superposition Prompting: Improving and Accelerating Retrieval-Augmented Generation. Please allow us to utilize this rebuttal opportunity to further clarify the distinctions and comparisons between our work and these two relevant works.\\n\\n**Differences and Comparisons with PromptCache:** In a real RAG scenario, PromptCache can be implemented in two ways. The first is the standard implementation method described in its paper, that is, not handling positional IDs at all. The accuracy of PromptCache re-implemented according to this method is lower than that of block-attention (please refer to the second and third rows in the table below). And the results of PromptCache after fine-tuning are the same as those of block-ft-w/o-pos, which are also lower than those of block-ft models (please refer to the fourth and fifth rows in the table below).\\n\\n| | tqa | 2wiki | nq | hqa |\\n| :--- | :--- | :--- | :--- | :--- |\\n| Llama3-block-w/o-ft | 62.7 | 48.1 | 44.3 | 39.6 |\\n| Llama3-promptCache | 60.8 | 37.1 | 42.4 | 32.9 |\\n| Llama3-Superposition | 57.9 | 35.4 | 37.9 | 33.4 |\\n| Llama3-block-ft | 73.0 | 73.3 | 56.2 | 68.5 |\\n| Llama3-promptCache-ft | 70.8 | 66.8 | 52.9 | 61.4 |\\n\\nAnother implementation of PromptCache is exactly the one used in our paper. 
In this regard, we will reuse cached passages only on the condition that the positional IDs remain undisturbed. For instance, if the position ID of the preceding token of Doc A is 1000, then the cached KV states will be utilized only when the start positional ID associated with the cached Doc A is greater than 1000. When PromptCache is implemented in this manner, we can guarantee that there will be no reversed positional IDs, and its performance is exactly the same as that of block-w/o-ft. However, as we explained in footnote 6, the efficiency of PromptCache implemented in this way is considerably lower than that of the block-attention method. Quantitatively speaking, if the same amount of memory as that used by block-attention is employed, PromptCache can only reduce the TTFT (Time to First Token) by 50%. If it is desired to achieve the same TTFT as that of block-attention, PromptCache needs to consume k times the amount of memory (where k is the number of retrieved documents).\\n\\nFurthermore, following the suggestion of Reviewer 1DTJ, we have uploaded two examples in Appendix A (the authors undertake to refrain from cherry-picking). Through these two instances, you can gain an intuitive understanding of the distinctions between Llama3-block-ft, Llama3-vanilla-sft, Llama3-block-w/o-ft, and PromptCache. The models after applying PromptCache will encounter serious fluency issues due to the disorder of positional IDs. 
These issues won't be accurately reflected in a research paper's automatic evaluation, but obviously, they are unacceptable in real-world scenarios.\\n\\nIn the next version of our work, in order to prevent such confusion, we will update the implementation approach of PromptCache to the first type and incorporate the results of the above-mentioned table into the paper.\\n\\n**Differences and Comparisons with Superposition Prompting:** The main idea behind Superposition prompting is to allocate passages to parallel paths for processing, thus reducing the inference cost. Its potential shortcoming is that each path can only attend to a single passage, whereas some questions might necessitate integrating information from multiple passages to be answered.\\n\\nUpon applying Superposition prompting to Llama3-vanilla-sft, the accuracy turns out to be lower than that of Llama3-block-w/o-ft and Llama3-PromptCache. The detailed results can also be seen in the table above.\\n\\nFurthermore, given that the decrease in the effectiveness of this method stems from the fact that each path can only attend to a single passage, it is quite challenging to enhance its accuracy even through further fine-tuning.\\n\\n**Question 4: the generalizability issue**\\n\\nPlease refer to our general response to all the reviewers, where we have provided a detailed clarification of some queries related to \\\"additional fine-tuning\\\" and \\\"generalizability & avoid overfitting\\\". The model trained in our experiments will surely experience a decrease in generalizability, for we only use 2wiki and TQA as the training data. However, please note that this decrease is caused by our experimental setup rather than the block-attention mechanism.\"}", "{\"comment\": \"Thank you for the response. All my concerns have been addressed, and I recognize the contributions of this paper. 
Therefore, I will raise my score.\"}", "{\"title\": \"Follow up comments\", \"comment\": [\"I want to thank the authors for answering my questions.\", \"Regarding [1], I am a bit confused by your explanation. Comparing Figure 1 in your submission, and Figure 3b in [1], I see no difference in attention patterns and based on that [1] should be also able to attend to the documents in the exact way similar to the proposed Block-Attention approach. After reading [1], I noticed that there the authors proposed an optional hyper-parameter \\\"k\\\" that can control how many documents the method attends to and to my understanding setting it to all the documents (that I suppose is the setting you are considering) should lead to the same config as the proposed method.\", \"Regarding the comparison with PromptCache, thanks for the clarification. As you acknowledged, PromptCache can also achieve the same accuracy as the proposed approach. In this case, the main metric to look into should be efficiency and wall-time. However, I still couldn't find a head to head comparison in terms of wall-time between these approaches. Also as a follow up question, how much more memory PromptCache would require to achieve to the same accuracy and the same efficiency in practice on the datasets you considered?\"]}", "{\"summary\": \"This paper tackles the efficiency challenges in RAG. The key observation is that different paragraphs retrieved in RAG are independent to each other. Therefore, it is possible to perform independent self-attention within each paragraph, and only allow the final question query to attend to all previous passages. This will dramatically improve TTFT in long-context RAG. The authors identify two important steps to recover accuracy after converting traditional self-attention to block attention (proposed in this paper): positional re-encoding and fine-tuning. 
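The attention pattern summarized here (independent self-attention within each retrieved passage, with only the final query block attending globally) can be sketched as a toy mask construction; this is an illustration by analogy, not the paper's actual code:

```python
def block_attention_mask(block_lengths):
    """Causal attention mask under block attention: a token attends only
    within its own block, except tokens in the final block (the user
    query), which attend causally to all preceding tokens."""
    n = sum(block_lengths)
    # block index of each token position
    block_of = [b for b, length in enumerate(block_lengths) for _ in range(length)]
    last_block = len(block_lengths) - 1
    mask = [[False] * n for _ in range(n)]
    for q in range(n):
        for k in range(q + 1):  # causal: keys at or before the query position
            if block_of[q] == last_block or block_of[q] == block_of[k]:
                mask[q][k] = True
    return mask


# Three blocks of two tokens each, e.g. two retrieved passages + the query:
m = block_attention_mask([2, 2, 2])
```

In this toy mask, tokens of the middle block cannot see the first block, while both query tokens (the last block) see everything before them.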
Their final results show no loss of accuracy and significant speedup in long-context RAG applications.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The idea is simple and reasonable: different retrieved passages are not necessarily related to each other so the attention mask can be sparsified.\\n2. Positional re-encoding is an elegant way to solve inconsistencies in positional encodings.\\n3. Finetuning results show no loss of accuracy and significant speedup over self-attention baselines.\", \"weaknesses\": \"1. The method requires fine-tuning, which limits its scalability.\\n2. The evaluation seems to be a bit weak. \\n- For example, the method trains on TQA/2Wiki's training set and evaluates on their validation sets. The results on these two benchmarks are not zero-shot and are not very representative. \\n3. It's unclear what is the overhead associated with positional re-encoding.\\n4. Can you construct some examples where different retrieved passages are related to each other? In this case, will the proposed method fail? For example, you might retrieve several chapters in a textbook, where later chapters are dependent on earlier chapters.\", \"questions\": \"Please respond to my questions in \\\"Weaknesses\\\".\\n\\n===\", \"post_rebuttal\": \"my concerns have been resolved and I will keep my score.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Official Comment by Reviewer 2J6u\", \"comment\": \"Thank you for your comments and I decide to keep my initial rating.\"}", "{\"comment\": \"*Question 5: positional re-encoding for other positional encoding methods*\\n\\nConsidering that the majority of open-source LLMs are implemented based on RoPE, our paper solely presents the position re-encoding based on RoPE. 
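For intuition, the RoPE-based re-encoding might look like the following sketch: because RoPE rotations compose additively in position, a cached key encoded at one position can be moved to a new position by rotating it by the position delta. The function names and the pairwise-rotation convention below are our assumptions, not the paper's implementation:

```python
import math


def rope_rotate(vec, pos, theta=10000.0):
    """Apply a RoPE rotation at absolute position `pos`, treating the
    vector as consecutive 2-D pairs (one common convention)."""
    d = len(vec)
    out = [0.0] * d
    for i in range(0, d, 2):
        angle = pos * theta ** (-i / d)
        c, s = math.cos(angle), math.sin(angle)
        x1, x2 = vec[i], vec[i + 1]
        out[i] = x1 * c - x2 * s
        out[i + 1] = x1 * s + x2 * c
    return out


def reencode_cached_key(k_cached, old_pos, new_pos):
    """Re-encode a cached (already rotated) key from `old_pos` to `new_pos`
    by rotating by the delta: R(new - old) . R(old) = R(new)."""
    return rope_rotate(k_cached, new_pos - old_pos)
```

Rotating the cached key by the delta gives the same result as encoding the raw key at the new position from scratch, which is why a cached block can be shifted cheaply when it reappears at a different offset.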
In the subsequent version of the paper, we will integrate a more in-depth and comprehensive analysis of various positional encoding methods.\\n\\nYour main concerns primarily arise from the distinctions and comparisons between block attention and related works, including PromptCache and Superposition Prompting. Since we have furnished detailed explanations and comparisons in this comment, we sincerely hope that you will consider adjusting your score. Should you have any follow-up questions, we will be more than glad to provide you with the answers.\"}", "{\"comment\": \"Thank you for accepting our clarification. We will carefully analyze the differences and connections between our work and related works in the next version of the paper.\"}", "{\"comment\": \"Thanks for your response!\"}", "{\"comment\": \"**Question 2: Can the authors elaborate on how the performance of Block-Attention varies with input length and retrieval volume? Are there any thresholds beyond which the performance might degrade?**\\n\\nSure, no problem. First, we'd like to introduce our experience of using block attention in real-world applications (not just in this paper). The performance of block-attention will only decline significantly (i.e., be lower than the vanilla-sft model) under one circumstance: when the number of blocks in the inference stage is greater than the maximum number of blocks in the training stage, and the length of the last block is very short, the performance of block-attention will be inferior to that of self-attention. We can intuitively understand it as follows: since the number of tokens with global attention (the last blocks) is too small, the models fail to fully \\\"integrate\\\" the knowledge from different blocks during the prefill stage.
In practice, we have a strategy to address the potential drop caused by this issue: we may set a hyperparameter max_blocks = n, and then only the first n passages will use block-attention, while the subsequent documents switch back to self-attention.\\n\\nIn this paper, since most of the datasets have only 10 associated passages, we didn't conduct this experiment. After you raised this suggestion, we implemented an additional experiment on NaturalQuestion-Open: we changed the number of retrieved documents (only in the inference stage) as well as the number of blocks with global attention, and observed the performance changes of the block-attention model.\\n\\n|Num. of passages|3|5|10|15|20|30|\\n| :--- | :--- | :--- | :--- | :--- | :--- | :--- |\\n|Accuracy (Global block num. = 1)|54.5|55.9|56.2|56.6|54.1|52.4|\\n|Accuracy (Global block num. = 3)|53.9|55.8|56.9|57.2|55.7|54.7|\\n|Accuracy (Global block num. = 5)|54.0|55.7|57.1|57.3|56.8|57.2|\\n\\nWe may easily observe that when the maximum number of blocks in the training stage is 10, the model can extrapolate to at most 15 blocks in the inference stage, and any more than that will lead to a decline in accuracy. However, if we increase the number of blocks with global attention (for example, by switching the last 5 blocks back to self-attention), we can successfully \\\"reverse\\\" this downward trend.\\n\\n**Question 3: Given the reliance on fine-tuning, how do the authors plan to address potential overfitting issues, particularly in diverse RAG settings?**\\n\\nPlease refer to our general response to all the reviewers, where we have provided a detailed clarification of some queries related to \\\"additional fine-tuning\\\" and \\\"generalizability & avoid overfitting\\\". Once we adopt the block-attention mechanism during the SFT stage of the foundation model, the overfitting issues will no longer exist.
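As a side note, the max_blocks fallback strategy mentioned in the answer to Question 2 above amounts to a simple partition; a hypothetical sketch (the hyperparameter name follows the comment, everything else is illustrative):

```python
def partition_passages(num_passages: int, max_blocks: int):
    """Hypothetical max_blocks fallback: only the first `max_blocks`
    passages are encoded as independent blocks; any passages beyond that
    are folded into the trailing self-attention region along with the
    user query."""
    as_blocks = min(num_passages, max_blocks)
    as_self_attention = num_passages - as_blocks
    return as_blocks, as_self_attention


# With max_blocks = 10 (the training-stage maximum), a 15-passage query
# keeps 10 independent blocks and handles the remaining 5 with self-attention.
```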
\\n\\nThus, our plan is not confined to technology alone. We will also keep on encouraging certain open-source LLM teams, such as Llama, Qwen, DeepSeek, and Hunyuan, to incorporate the block-attention mechanism into their future models. Stay tuned!\"}", "{\"comment\": \"Thank you for your comments. I'm delighted to engage in this follow-up discussion with you.\\n\\n**Regarding [1]**, please note that Figure 3.2 in this paper is merely a conceptual diagram, not the actual attention masks. Please directly refer to Section 3.2.1 of the original paper. The author emphasizes:\\n\\n\\u201cWe emphasize that this resulting attention pattern is a construct for visualization only\\u2014in reality, all calls to the LLM use fully dense attention, although on relatively smaller context lengths. Concretely, each of the dashed boxes in Figure 2 is a separate LLM call.\\u201d\\n\\nConsequently, as we previously stated, the main idea of [1] is to split **one multi-passages query** into **multiple single-passage queries**, which differs significantly from our approach. As for their hyper-parameter k, it is used to control the number of minimum retaining paths in each step of the iteratively pruning process. Even if they set k to all documents, what they do is still to decompose the original query into k single-document query in each step. The additional experimental results we've obtained also confirm this point.\\n\\n**Regarding PromptCache**: Sure, no problem. We'll provide the detailed information regarding memory usage. Within the experimental setting described in the manuscript (where each query consists of 10 passages), in order to achieve the TTFT and FLOPs-TFT mentioned in the paper on the Natural_Question_open dataset, the block-attention model is required to cache 36,100 passages. Each passage has an average length of 148 tokens, with an average memory usage of 18.47 MB, resulting in a total memory consumption of **650GB**. 
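The memory figure above can be sanity-checked in a couple of lines, using only the per-passage numbers quoted in the comment:

```python
# 36,100 cached passages, average length 148 tokens, ~18.47 MB of
# cached KV states per passage, as quoted above.
num_passages = 36_100
mb_per_passage = 18.47

total_gb = num_passages * mb_per_passage / 1024
print(f"{total_gb:.0f} GB")  # prints "651 GB", consistent with the quoted ~650 GB
```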
\\n\\nAs for the PromptCache model, if it aims to reach the same level of accuracy and latency as the block-attention model, the total memory consumption will be 650GB \\u00d7 10 = **6.5TB**.\\n\\nIn addition, please note that in real-world applications, with the help of an infra-team and the Prefill-Decoding Disaggregating framework, the actual memory consumption required is 1 to 2 orders of magnitude lower than the aforementioned two values.\\n\\nI hope my reply can address your concerns, and I'm also looking forward to further discussions with you.\"}", "{\"summary\": \"The paper proposes Block Attention for retrieval augmented generation (RAG). The idea is to confine attention within each retrieved document (a.k.a. block) and only let the query attend to all documents. In this way, the KVs for each document can be cached and reused if the document re-appears in multiple queries. The method also proposes to explicitly correct RoPE positional encoding before reusing the cached KVs. After fine-tuning, this technique can retain the accuracy of the baseline while speeding up the inference due to smaller attention span and avoiding the KV recomputation thanks to the block caching.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The method is simple and easy to implement: It only involves limiting the attention computation within each document in RAG scenarios and caching the KVs for each block for document re-use as proposed in PromptCache [Gim et. al, 2024]. 
The newly introduced RoPE rotation correction is also straightforward to implement.\", \"Local attention and prompt caching are sound techniques and suitable for RAG applications.\", \"The paper is clear and easy to follow.\"], \"weaknesses\": \"My primary concern is regarding the paper's contribution compared to the existing works and the need for a better experimental analysis to understand where this method stands among previous methods that share very similar ideas or how it improves them. The authors can find more detailed comments and suggestions below.\\n\\n- Contribution and limitations compared to existing works\\n\\nTo me the main contributions of the paper to improve the efficiency of RAG can be summarized as 1) limiting the attention within each document 2) caching the KVs of the documents to reuse them 3) Adjusting the rotation in the RoPE position encoding based on the position of the reappearing cached document. Points 1 and 2 have been already proposed in prior works. E.g. PromptCache [Gim et. al. 2024] proposed the same techniques to speed up the RAG applications. This makes contribution 3, the main new addition to the existing works. However, according to the experiments reported in the submission, the RoPE position correction does not lead to any improvement over PromptCache (lines 353-356). To my understanding this makes fine-tuning the only source of improvement over PromptCache, which is an expected gain which comes with known challenges of such a fine-tuning.\\n\\nI want to also point the authors to [1] below that is a closely relevant prior art which is currently missing from the draft. [1] also addresses the three aforementioned points above. Namely, it proposes limiting attention to attention blocks, caching the KVs for the documents to re-use them, and addresses the position encoding problems by overlaying the documents as parallel edges without requiring any fine-tuning. 
I would suggest the authors add a discussion on the advantages of the proposed method compared to [1] and also add controlled experimental comparisons to better highlight the contributions.\\n\\n[1] Superposition Prompting: Improving and Accelerating Retrieval-Augmented Generation, Merth et. al.\\n\\n* Experimental Results\\n\\nThe experimental section needs to be revised to clearly highlight the main sources of improvements and contributions between the proposed method and existing prior methods (under same settings, e.g. with or without fine-tuning). Based on the submission, comparisons with similar approaches are missing (see [1] above), additional speed up on top of PromptCache (if any) is not verified, and it is not clear to me if anything other than the fine-tuning is the source of accuracy improvement over PromptCache. The authors can find more detailed comments and questions regarding the experiments in the following section.\", \"questions\": \"- Additional questions and comments:\\n\\n1) To my understanding, the main technical contribution of the method compared to the previous approaches is the proposal to explicitly re-adjust the positional encodings. What is the practical advantage of the proposed RoPE encoding adjustment? Does it lead to any improvement on the final RAG task quality metrics over PromptCache? From the current manuscript I did not find an ablation showing its importance to improve prior arts. \\n\\n2) What is the comparison between the proposed method and PromptCache in terms of efficiency? How much does the proposed method improve PromptCache in terms of wall-time?\\n\\n3) How does the method compare with respect to the Superposition Prompting paper above [1] both in terms of accuracy and efficiency?\\n\\n4) The paper mentions fine-tuning on RAG-specific datasets necessary for the proposed approach. 
Expectedly, fine-tuning can improve the accuracy on the specific tasks at hand, but it is also important to maintain the generalizability of the pre-trained models. This can be difficult especially when the training recipes are not available, or they are instruction fine-tuned on non-available data. How does the performance of the reported models change on common benchmarks such as MMLU and HumanEval after the suggested fine-tuning?\\n\\n5) The positional encoding re-adjustment is only discussed for RoPE. As a suggestion for further improvement, it would be nice to have a more thorough analysis on different positional encoding approaches.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for your valuable suggestions\\uff01\\n\\nWe have elaborated on the issue of \\u201cadditional fine-tuning\\u201d in the general response to all reviewers. We believe that for any new attention mechanism, this cost is inevitable. However, as long as the post-training of some open-sourced foundation model support the block-attention manner, then we no longer need this additional fine-tuning process. \\n\\nWe apologize for forgetting to verify this aspect in our experiments. In real-world applications, it has been observed that the Block-Attention mechanism tends to show a decrease in effectiveness only when applied to very small-size models (those with fewer than 1 billion parameters). Nevertheless, on models with parameter counts varying from **1.5 billion to 130 billion**, it sustains an accuracy that is either comparable to or even better than that of the vanilla-sft models.\"}", "{\"summary\": \"This paper introduces the Block-Attention method, an efficient approach tailored for RAG scenarios that enhances inference efficiency while preserving performance through fine-tuning. 
Block-Attention operates by dividing the input sequence into multiple independent blocks, each calculating its key-value (KV) states separately via self-attention. Only the final block can attend to previous blocks, allowing the user query to reference all prior retrieved documents. Evaluations on four RAG benchmarks reveal that Block-Attention, after fine-tuning, maintains comparable accuracy to traditional self-attention models (e.g., 68.4% vs. 67.9% on Llama3) and even slightly outperforms in some cases (e.g., 62.3% vs. 59.6% on Mistral). The efficiency benefits, measured in TTFT and FLOPs, grow with input length; at 32K input length, Block-Attention's TTFT and FLOPs reach just 1.3% and 0.2%, respectively, of those in standard self-attention models.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Novel and practical idea\", \"Easy to follow\", \"Simplicity: The solution is relatively simple to implement and can be integrated into existing LLM architectures.\", \"Impressive performance: this method reduces Time to First Token (TTFT) by up to 98.7% and FLOPs by up to 99.8%.\"], \"weaknesses\": \"### Need for fine-tuning:\\nThe method requires additional fine-tuning, which might be resource-intensive for larger models. It would be better to explore a training-free approach with the proposed method.\", \"questions\": \"Only tested on relatively small models (7B-8B parameters) - would it work as well on larger models?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents the Block-Attention mechanism aimed at optimizing inference efficiency in retrieval-augmented generation (RAG) scenarios.
The authors assert that by modifying the attention calculation to process tokens in independent blocks, they can significantly enhance both accuracy and computational efficiency compared to traditional self-attention. The proposed method is evaluated against well-known RAG benchmarks, and the authors claim notable improvements in inference speed and resource consumption.\\n\\nWhile the topic is pertinent, the contributions of the paper raise several concerns. The approach, although presented as innovative, relies on established principles of attention mechanisms without sufficiently distinguishing itself from previous work. Additionally, the experimental results, while promising, lack depth in exploring the nuances of the proposed method's effectiveness across varying conditions.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper is generally well-organized, with clear explanations of the Block-Attention mechanism and its proposed advantages.\\n2. The investigation into improving inference efficiency in LLMs through Block-Attention is timely.\\n3. The reported gains in inference speed and reduced computational load are compelling.\", \"weaknesses\": \"1. The concept of segmenting attention into blocks is not entirely novel and has been explored in various forms in the literature. The paper does not convincingly articulate how Block-Attention offers distinct advantages over similar approaches, such as sparse attention methods or hierarchical attention mechanisms.\\n2. The paper presents empirical results without adequately establishing the theoretical underpinnings of the Block-Attention mechanism. For example, the claim that \\\"tokens in all blocks except the last are restricted to attending only to information within their own block\\\" (Section 3.4) needs further elaboration on how this restriction impacts the model's representational capacity compared to full self-attention.\\n3. 
The experimental section would benefit from a broader set of benchmarks and comparisons against state-of-the-art methods beyond just self-attention.\\n4. The authors should discuss how the model performs under different conditions, such as varying levels of noise in the retrieved passages or the diversity of user queries.\", \"questions\": \"1. How do the authors justify the choice of block size in the Block-Attention mechanism, and what implications does this have for different types of queries?\\n2. Can the authors elaborate on how the performance of Block-Attention varies with input length and retrieval volume? Are there any thresholds beyond which the performance might degrade?\\n3. Given the reliance on fine-tuning, how do the authors plan to address potential overfitting issues, particularly in diverse RAG settings?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for your valuable suggestions!\\n\\n**For Weakness 1 (articulating how Block-Attention offers distinct advantages over similar approaches, such as sparse attention methods or hierarchical attention mechanisms):** we will explain in detail the differences between us and sparse attention methods or hierarchical attention mechanisms and update them in the manuscript.\\n\\n**Differences between Block-Attention and Sparse Attention:** Our approach and sparse attention are two different and orthogonal routes. The goal of sparse attention is to achieve efficient RAG by compressing the amount of computation in attention. However, block-attention does not conduct compression operations on the amount of attention computation (independently encoding different blocks will actually reduce the pre-filling cost to some extent, but this is just a side effect, not our main purpose).
Instead, our main goal is to make the LLM reuse as many previous computation results as possible to achieve efficient RAG. The challenge of sparse attention is whether the compression operations can avoid losing important information, while the challenge faced by block-attention is how to reuse as many previous computation results as possible.\\n\\nAs for the **hierarchical attention mechanism**, we have discussed the main differences with a representative work, \\\"Parallel context windows for large language models\\\" (lines 349-355): the goal of most hierarchical attention mechanisms is still to extend the context window to handle long contexts that are longer than the model's window size. When the input length does not exceed the context window, adopting this mechanism will significantly reduce the model's accuracy.\\n\\n**Weakness 2: Lack of theoretical analysis**\\n\\nWe sincerely apologize for the insufficient theoretical analysis. The block-attention mechanism stems from some intuitive observations within real-world applications. Hence, we placed greater emphasis on empirical analysis in the paper. In accordance with your suggestion, we will incorporate more extensive elaboration on the block-attention mechanism in the next version of the paper.\\n\\n**Weakness 3: Comparison against state-of-the-art methods beyond just self-attention**\\n\\nIn fact, we have already made comparisons with two state-of-the-art methods, namely PCW and PromptCache. The results of PCW are even lower than those of the weak baseline, and the results of PromptCache are exactly the same as our weak baseline. To avoid redundant content in the table, instead of putting them in the table, we described them in words in lines 349 to 355.\\n\\nIn addition, following the suggestion of Reviewer wUEZ, we have supplemented the comparison with two related works.
Please refer to our **response to Reviewer wUEZ** for the detailed results.\\n\\n**Question 1: How do the authors justify the choice of block size in the Block-Attention mechanism, and what implications does this have for different types of queries?**\\n\\nActually, we didn't configure a hyperparameter like \\\"block size\\\" for the model. The quantity of blocks is determined by the number of semantically independent parts in the prompt. For instance, in the RAG scenario introduced in this paper, each retrieved passage will be segmented into an individual block. If 100 passages are retrieved, then the prompt will be partitioned into 102 blocks (1 system block + 100 passages + 1 user query block). \\n\\nMoreover, we can also give a brief introduction to the principles of block division when handling other types of prompts (in non-RAG scenarios). In the coding scenario (such as Copilot), each class and function will be segmented into an independent block. In the few-shot learning scenario, the examples within the prompt will be placed into an independent block. There are also some more general rules. For instance, we can utilize symbols like \\\"```\\\" and \\\"\\\\n\\\\n\\\" as block delimiters. Thus, in the training set of our LLM, the blocks are already predefined by these pre-defined rules along with manual labeling.\\n\\nDuring the Inference phase, the block division becomes even more straightforward. Apart from the aforementioned rules, those long text blocks that frequently reappear in different prompts will be defined as an independent block. We can readily obtain these high-frequency blocks through log analysis.\"}", "{\"metareview\": \"a) The paper proposes Block Attention for retrieval augmented generation (RAG) with multiple documents (blocks). 
In this work, each document is assumed to be independent from the others, therefore, it is possible to reduce computation by performing independent self-attention within each block, and only allow the final question query to attend to all previous blocks. In this way, the KVs for each document can be cached and reused. The method introduces an explicitly correction of the positional encoding and fine-tuning, to retain the performance of the baseline model, but much faster inference.\\n\\nb) The paper is well presented and easy to follow. The method is simple and easy to implement. Positional re-encoding is an elegant way to solve inconsistencies in positional encodings. The reported gains in inference speed and reduced computational load are compelling.\\n\\nc) The concept of separating attention into blocks for RAG is not novel. It is already considered in works such as PromptCache and Superposition Prompting. In particular PromptCache seems very similar to the proposed idea, but missing the adjustment of the positional encoding and the fine-tuning. \\n\\nd) The final decision for this paper was not easy. The paper has many similarities with PromptCache. Therefore the proposed idea is not novel and this should be clearer in the main paper. The main contributions are re-positional encoding and fine-tuning, which bring the method to the performance of the full self-attention, but with reduced computation. However, fine-tuning requires data and has the risk of over-specializing the model, losing generalization. Authors show results for in-domain and out-domain performance and the model seems to perform comparable to full self-attention also on out-domain. In addition, authors point out that block-attention could be used on the post-training phase, which would avoid the need of fine-tuning, although they did not provide results on that probably due to the training cost and time, but I do not see any reason why it should not work. 
\\nConsidering all of that, I believe that the proposed idea of re-positional encoding for improving PromptCache can by itself be quite important for RAG and deserves publication, as it can produce retrieval performance similar to full attention but with a fraction of the computational cost. That said, the authors should do a good job of adapting the paper to the reviewers' comments and this discussion, and make clearer what the actual contributions are compared with previous work.\", \"additional_comments_on_reviewer_discussion\": \"Rev. 2J6u considered the main drawback to be the need for fine-tuning of the model. After the authors' rebuttal, the rev. maintained their score of 6.\\n\\nRev. wUEZ pointed out the strong similarities of the paper to previous work. After a detailed discussion with the authors, the comparison with related work is clearer. The main contribution of this work is the re-positional encoding and fine-tuning, which, when applied jointly, can bring an important improvement on retrieval, with performance comparable to full self-attention in a fraction of the time. The final score of the rev. is 5.\\n\\nRev. 1DTJ had an initial score of 5 because of some doubts and unclear parts in the paper. The authors did a good job in answering and the rev. increased their score to 6.\\n\\nRev. MNR9 had a similar evaluation of the paper, with doubts that were clarified with the rebuttal. Thus, their final score was increased to 6.\\n\\nAfter reading the reviews, my final decision was not clear. So, as suggested by the SAC, I also reviewed the paper. The paper is well written and the proposed idea of re-positional encoding for the blocks with fine-tuning is quite important for improved retrieval results. The main drawback is the fine-tuning, but it could be avoided if the method is integrated in the post-training of a model, and therefore I would assign a score of 6.\\n\\nOverall, I agree with rev. wUEZ's analysis, but as expressed by rev.
MNR9 in the internal discussion, I still consider that the proposed contribution is enough for acceptance, as I believe that it could be impactful for improved RAG systems. However, the authors should improve their presentation of the contributions with a correct analysis of related work in the final version of the paper.\"}", "{\"title\": \"Post-rebuttal response\", \"comment\": \"Regarding the discussion on comparison with PromptCache, I acknowledge that the main improvement of the proposed modification to the PromptCache algorithm is on the memory side, and in terms of accuracy and latency, PromptCache can perform the same with or without the proposed positional ID adjustments. I suggest the authors clarify this in the revised version of the paper and change the messaging to convey this summary.\\n\\nRe comparison with [1], their *Question* gets to attend to each *Block*, and then the *Response* gets to attend to all the *Blocks* and the *Question* itself; as a result, they still implement Figure 3.2 (although through multiple LLM calls), and in the end their response is generated based on multiple *Blocks* if they are all relevant to the question and the response. So your statement that \\\"some questions might necessitate integrating information from multiple passages to be answered\\\" and they can't do that is to me incorrect. The only difference that I can see here is that in your approach the *Question* attends to all the *Block*s at the same time, while for them the *Question* attends to each *Block* separately and the aggregation is postponed to the *Response* generation phase. However, this lets them significantly parallelize the computation and reduce the latency compared to the proposed Block-Attention. A careful study is required to understand the benefits and shortcomings of each of these approaches, both in terms of accuracy and latency.\\n\\nI want to thank the authors for answering my questions.
In summary, my main concern regarding contributions and limitations compared to the existing works mostly remains. However, the authors' responses clarified the main advantage over the PromptCache baseline and I increased my score accordingly. I encourage the authors to modify the paper to highlight and focus on their main advantage over PromptCache (memory saving) and add discussions to delineate the differences and contributions in terms of accuracy, memory, and latency relative to the existing works.\"}", "{\"title\": \"A general comment on all reviewers' concerns about \\\"Needs additional fine-tuning\\\" and \\\"generalizability & avoid overfitting\\\"\", \"comment\": \"Thank you to all the reviewers for your valuable suggestions. I'm very glad to have the opportunity to engage in such an insightful discussion with you all on OpenReview. In this general response addressed to everyone, I'd like to briefly introduce the background of Block-Attention and the key points focused on in this research paper. I believe it can effectively address your concerns regarding the issues of \\\"Needs additional fine-tuning\\\" and \\\"generalizability & avoid overfitting\\\".\\n\\nFirstly, any new attention mechanism requires fine-tuning, and Block-Attention is no exception. By way of analogy, transitioning from ALiBi to RoPE also requires retraining the entire model from scratch. In real-world applications, the best approach to using Block-Attention is integrating it in the post-training stage of the foundation model. Specifically, for the LLM during this stage, we simultaneously adopt the self-attention mechanism and the block-attention mechanism to represent the training data. Regarding the division of blocks, some are manually divided during the annotation process, and others are divided according to rules.
For example, in the RAG domain, each passage should be divided into one block; in the code domain, each function and class should be divided into one block; for few-shot tasks, each example can be divided into an independent block. The resultant model can adapt to both attention mechanisms simultaneously, and its results on common benchmarks are almost identical to those of the model without Block-Attention, without any performance loss. More importantly, after obtaining such a foundation model, we no longer need additional fine-tuning since the model has already completely adapted to the Block-Attention manner.\\n\\nHowever, since there are currently no open-sourced foundation models that support block-attention, if we want to use the block-attention mechanism in downstream tasks to improve inference efficiency, we must conduct an additional fine-tuning. The adaptation cost in this aspect is inevitable. The difference only lies in whether this cost is borne during the training process of the foundation model or during the fine-tuning stage of the downstream tasks. Fortunately, this cost is not high. We don't need to change the attention-mechanism in the pre-training stage. It can be achieved simply by adjusting the attention masks in the downstream fine-tuning.\\n\\nIn this paper, due to confidentiality reasons, we are unable to release our internal SFT and Preference Optimization data. Therefore, in this submission, for the sake of re-implementation considerations, we focus on the RAG (Retrieval-Augmented Generation) scenario and only use publicly available dataset to train the model to verify the effectiveness of block-attention. The model trained in this way will surely have a decline in generalizability. However, please kindly note that this decline is caused by our experimental settings rather than by block-attention. 
We will also keep on encouraging certain open-source LLM teams, such as Llama, Qwen, DeepSeek, and Hunyuan, to incorporate the block-attention mechanism into their future models. Stay tuned!\"}", "{\"comment\": \"Thanks for your valuable suggestions!\", \"for_weakness_1\": \"The method requires fine-tuning, which limits its scalability. We have elaborated on the issue of \\u201cadditional fine-tuning\\u201d in the general response to all reviewers. We believe that for any new attention mechanism, this cost is inevitable. However, as long as the post-training of some open-sourced foundation model supports the block-attention manner, we will no longer need this additional fine-tuning process.\\n\\n**Weakness 2: The evaluation seems to be a bit weak.**\\n\\nWe quickly conducted an evaluation on NarrativeQA (https://huggingface.co/datasets/deepmind/narrativeqa) and this is the result:\\n\\n| | NarrativeQA |\\n| :--- | :--- |\\n| Llama3-vanilla-sft | 59.8 |\\n| Llama3-block-ft | 60.5 |\\n\\n\\nWe will supplement more comprehensive experiments in the next version of the paper.\\n\\n**Weakness 3: It's unclear what is the overhead associated with positional re-encoding.**\\n\\nThe cost associated with positional re-encoding is so low that it can be disregarded (< 0.001 ms). As a result, we did not conduct an analysis of the corresponding overhead in the paper.\\n\\n**Weakness 4: Examples where different retrieved passages are related to each other?**\\n\\nFirst of all, based on our observations in real-world applications and experiments, the model that has undergone block fine-tuning can easily handle such cases.\\n\\nThen, following your suggestion, we selected a case from the novel Harry Potter and the dataset HPD [1] to demonstrate the effectiveness of the proposed method in this scenario. Specifically, we took a consecutive piece of text from Harry Potter Book 6, Chapter 30 as the knowledge background (about 3,000 tokens) and then randomly divided it into 10 blocks. 
In this case, different retrieved passages are related to each other, but the block-attention model will encode each passage independently. Based on this background, we raised two questions for both the Llama3-vanilla-sft model and the Llama3-block-ft model.\\n\\nDue to space limitations, we have updated the detailed case in Appendix A of the manuscript. By carefully observing the output results of the two models, it can be found that for Question 1: What will Harry Potter's next actions be after the funeral? Both the Llama3-vanilla-sft and the Llama3-block-ft provided good answers. And for the more difficult Question 2 which requires reasoning based on multiple passages: Which characters are currently hated by Harry Potter? Among them, who is the one he hates the most? Unfortunately, the Llama3-vanilla-sft failed to output the correct answer, while the Llama3-block-ft successfully identified the relationships between Harry Potter, Lord Voldemort, and Severus Snape and output the correct answer.\\n\\nThe most crucial reason why the Llama3-block-ft model can handle such cases lies in the full-attention design of the last block. Intuitively, this full-attention block is capable of discerning the semantic relationships (if any) between blocks and properly leveraging them to generate the output. We once attempted to apply block-attention to the last block as well and found that the accuracy would plummet directly to the level of the no-RAG model.\\n\\nWe are confident that these comments will address the majority of your concerns about our study. Should you have any follow-up questions, we will be more than glad to provide you with the answers.\\n\\n\\n[1] Large Language Models Meet Harry Potter: A Bilingual Dataset for Aligning Dialogue Agents with Characters.\"}" ] }
7zJDTnogdG
Electrocardiogram Foundation Model Using Temporally Augmented Patient Contrastive Learning
[ "Gul Rukh Khattak", "Konstantinos Patlatzoglou", "Yixiu Liang", "Libor Pastika", "Boroumand Zeidaabadi", "Joseph Barker", "Mehak Gurnani", "Antonio H. Ribeiro", "Jeffrey Annis", "Antonio Luiz Pinho Ribeiro", "nicholas peters", "Junbo Ge", "Daniel B. Kramer", "Jonathan W. Waks", "Evan Brittain", "Arunashis Sau", "Fu Siong Ng" ]
Electrocardiograms ($ECGs$) capture the electrical activity of the heart, offering rich diagnostic and prognostic insights. Traditionally, electrocardiograms are interpreted by human experts, but deep learning is now encroaching on this domain and combining human-like intelligence with machine precision for a deeper insight. Self-supervised pretraining is essential for maximising the potential of scarce medical data. Applied to $ECGs$, patient-contrastive learning has shown promising results, by utilising the natural variations in the cardiac signals. In this study, we introduce **T**emporally **A**ugmented **P**atient **C**ontrastive **L**earning of **R**epresentations ($TA\text{-}PCLR$), a novel approach that incorporates temporal augmentations into a patient contrastive self-supervised foundation model. Trained on one of the largest diverse cohorts of more than six million unlabelled electrocardiograms from three continents, we demonstrate the efficacy of our approach and show its value as a feature extraction tool for small and medium-sized labeled datasets. We also validate the performance on an open-source external cohort, surpassing other pretraining approaches while outperforming an ensemble of fully supervised deep networks on some labels. Additionally, we conduct a detailed exploration of how the pretraining and labeled electrocardiogram dataset distributions impact supervised task performance.
[ "Contrastive learning", "Self-supervised pre-training", "Electrocardiograms", "Deep learning", "Foundation model." ]
Reject
https://openreview.net/pdf?id=7zJDTnogdG
https://openreview.net/forum?id=7zJDTnogdG
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zFe3CKfARw", "rm0DQomSmz", "loH4dnFgnb", "lncTxmH3Jl", "iLU4n4iRYa", "iBQHkzdIgk", "iAKex2IMuH", "dKnP0A2Q5Z", "cMmGOdx0xZ", "c82bla5jDK", "ZlxkxFrGqj", "ZS20axpsYe", "Y0tSwp5Ffe", "VbCs9BlaMy", "UcDduoIXbF", "QUv8pc1rhk", "EzhvnIq6Ku", "8r7pTcNP4l", "1rLdUqqFE1", "1Ufbt3uVQB", "0XnXsxmcZM" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "meta_review", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1732575342107, 1732575048823, 1730702365903, 1732793022458, 1737524009215, 1732541004200, 1732541228960, 1732564146332, 1732796160842, 1730623902206, 1730706749624, 1734649728523, 1730706464498, 1732547749267, 1730616081780, 1732549279926, 1732717038163, 1732570592891, 1732574551215, 1731086566944, 1732563788265 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9841/Authors" ], [ "ICLR.cc/2025/Conference/Submission9841/Authors" ], [ "ICLR.cc/2025/Conference/Submission9841/Reviewer_7RW1" ], [ "ICLR.cc/2025/Conference/Submission9841/Reviewer_7RW1" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission9841/Authors" ], [ "ICLR.cc/2025/Conference/Submission9841/Authors" ], [ "ICLR.cc/2025/Conference/Submission9841/Authors" ], [ "ICLR.cc/2025/Conference/Submission9841/Authors" ], [ "ICLR.cc/2025/Conference/Submission9841/Reviewer_poSx" ], [ "ICLR.cc/2025/Conference/Submission9841/Reviewer_pv5E" ], [ "ICLR.cc/2025/Conference/Submission9841/Area_Chair_KRSN" ], [ "ICLR.cc/2025/Conference/Submission9841/Reviewer_hUMx" ], [ "ICLR.cc/2025/Conference/Submission9841/Authors" ], [ "ICLR.cc/2025/Conference/Submission9841/Reviewer_T2x4" ], [ "ICLR.cc/2025/Conference/Submission9841/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission9841/Reviewer_poSx" ], [ "ICLR.cc/2025/Conference/Submission9841/Authors" ], [ "ICLR.cc/2025/Conference/Submission9841/Authors" ], [ "ICLR.cc/2025/Conference/Submission9841/Reviewer_QAYi" ], [ "ICLR.cc/2025/Conference/Submission9841/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response 3 to Reviewer QAYi\", \"comment\": \"**Reviewer comment:**\\nMore discussion is warranted as to why performance on the fully supervised benchmarks would become similar after a finetuning dataset of 100k is reached.\\n\\n**Response:**\\nThe performance in this case saturates at around 200k since the test is similar to a linear probe but using an MLP head, to optimize training times in the scope of current work. We wished to highlight the performance in the low-data paradigm for the simple setup but subsequent tests for the external cohort in Table 4 demonstrate superior performance to SOTA supervised learning with fine-tuning the feature-generating backbone. Additional results in Appendix B Table 7 provide a wider comparison.\\n\\n**Reviewer comment:**\\nThe paper could have benefited from additional non-tabular figures, such as a data scaling graph.\\n\\n**Response:**\\nThanks, we have replaced the previous Table 3 with Figure 2.\\n\\n**Reviewer comment:**\\nIs it fitting for this paper to introduce a foundation model given the nature of the experimentation performed, as well as those related points raised under \\\"Weaknesses\\\"?\\n\\n**Response:**\\nWe have responded to the previous comments. A foundation model is defined by generalization achieved through training on large unlabelled data. The greatly superior performance of our model for a range of different labels, achieved through our pretraining approach and large cohort, positions our model as a foundation model that can be exploited for any generic task. 
We have added the additional results requested in the appendix.\\n\\n**Reviewer comment:**\\nI started (and sort of left) the paper wondering: Is TA-PCLR a combination of individual existing methods which form a novel aggregate approach? Even by name, it is clear that TA-PCLR builds off the PCLR method, and yet how is never made totally clear - one would have to read the PCLR paper to learn this. Is it simply adding random temporal cropping to the PCLR method? Was the method/cropping approach inspired by other work in the ECG or time-series space?\\n\\n**Response:**\\nYou are correct that TA-PCLR is a combination of existing patient-contrastive and temporal augmentations that we have cited. We have further clarified the choice of and details about the augmentations in Section 3.3. We selected these particular augmentations as being harmless to the physiological characteristics of an ECG. The temporal augmentations included random cropping and zero masking of 20% of the signal.\\n\\n**Reviewer comment:**\\nConsidering, \\\"The number of ECGs from each patient greatly differs and thus the training epoch is defined as one complete iteration for all unique patients with the positive views randomly sampled at training time. In this way, the training is not biased by patients having more ECGs while still exploiting the available data diversity.\\\" I agree that this maintains better overall population diversity, however, I would be curious to know whether this would lead to biasing the model away from representing the conditions of those people with chronic health issues (i.e., the people who do have those higher ECG counts). What if it were those people who were the most important to represent well?\\n\\n**Response:**\\nThe particular strategy not only balances the training for different subjects but also exploits the high number of ECGs when available. 
Each epoch uses a different pair, thus utilizing all the data diversity for learning the representations. The fact that the pretraining performs best for BIDMC, the dataset with the most ECGs per patient, demonstrates that the training is able to benefit from these multiple ECGs.\"}", "{\"title\": \"Response 2 to Reviewer QAYi\", \"comment\": \"**Reviewer comment:**\\nCertain methodological matters are somewhat ambiguous. It was unclear whether hyperparameter tuning was performed. It is not made abundantly clear that the positive pairs are necessarily augmented instances of the same recording (and not from different recordings of the same patient). It is unclear whether there was any signal preprocessing (such as standardization), and if not, this may be worth stating.\\n\\n**Response:**\\nWe apologize that the text did not clarify this. Detailed hyperparameter tuning was not performed, but the learning rate was optimized within a range of 0.1 to 0.00001. The positive pairs are two different ECGs from the same patient taken at different times. Apart from the bandpass filtering mentioned, no other preprocessing of the data was performed. The experiments conducted in Section 4.1 did, however, involve standardization of the ECG features for the supervised training.\\n \\n**Reviewer comment:**\\nCertain purported claims are left unsubstantiated, for example: \\\"Human perception is limited by low visual accuracy, gaps in theoretical knowledge, and the complexity of the diverse, non-linear, interrelations.\\\"\\n\\n**Response:**\\nThese observations have been made in past literature (https://www.nature.com/articles/s41597-023-02153-8). 
We have added the reference in the paper.\\n\\n**Reviewer comment:**\\nCertain references may be misleading given their context, for example, \\\"Hybrid techniques combining contrastive learning and generative pretraining based on transformer architecture have been implemented to train foundation models for ECG feature extraction (Song et al., 2024; McKeen et al., 2024)\\\" may imply that ECG-FM (McKeen et al., 2024) uses generative pretraining, which is not true.\\n\\n**Response:**\\nThanks, we have corrected the mistake as you rightly pointed out.\\n\\n**Reviewer comment:**\\nCertain definitions are questionable, for example, the statement that, \\\"There are diverse contrastive learning approaches, differing in their definitions of the positive and negative instances and the loss computations\\\" negates how certain contrastive methods do not apply here, such as using a triplet loss. \\n\\n**Response:**\\nWe have rephrased using \\u201cmostly\\u201d.\\n\\n**Reviewer comment:**\\nOr also \\\"In the visual image domain, the positives are augmentations (transformations) of the same image while the negatives are augmentations from others\\\", which is not necessarily true. \\n\\n**Response:**\\nRephrased, inserting \\u201cusually\\u201d.\\n\\n**Reviewer comment:**\\nThis also occurs in the literature background. Simply making less specific claims or allowing for the existence of other alternatives (e.g., stating \\\"may\\\" or \\\"usually\\\") could solve the issue. 
Another is \\\"The feature-generating model is frozen while a single neuron is trained to predict each label\\\", which is misleading since neurons are computational nodes, it is the linear layer parameters being trained.\\n\\n**Response:**\\nThe neuron is the computational node in a linear layer, but here the neuron is mentioned to emphasize that no hidden layers are used and that the linear layer consists of one neuron for each target label.\\n\\n**Reviewer comment:**\\nCertain methods, such as Song et al., 2024 and 3KG, should likely be referenced in the literature background in greater detail. There is some methodology mixed into the results section (Song et al., 2024), which may or may not be necessary, but can hurt the flow.\\n\\n**Response:**\\nThe description of Song et al., 2024, etc. has been removed from the results section. Some details are added in Section 2 concerning these approaches, but space constraints do not allow more detailed descriptions.\\n\\n**Reviewer comment:**\\nThere are typos (\\\"same train/test splits..\\\") and a random section link (\\\"Unique patients1\\\").\\n\\n**Response:**\\nCorrected.\\n\\n**Reviewer comment:**\\nCalling the dataset-specific training and evaluation \\\"the first investigation of its kind\\\" is an ambitious statement, where the abstract had me expecting some kind of domain shift/adaptation methodology to be applied. The analysis seems good, does emphasize their multi-source dataset, and is important to understand how data distributions affect performance (e.g., healthy versus unhealthy subjects); However, the results of this analysis left me unsure as to what explicit value it added to this particular study.\\n\\n**Response:**\\nWe have rephrased the abstract, removing \\\"the first investigation of its kind\\\". The analysis is highly pertinent to the particular domain as, in the ECG classification literature, label-specific comparisons are often performed that may not be meaningful. 
We also wish to stress that our multi-centered dataset improves the generalization of the approach for diverse data distributions.\"}", "{\"summary\": \"This paper presents an ECG-based foundation model using a large-scale dataset of six million ECGs from different institutions. The authors propose to use zero masking and random cropping as augmentation strategies.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"1. The main strength of the paper is the scale of the dataset that they have access to, which combines data from multiple cohorts.\\n2. The aim of building an ECG foundation model is interesting and important. \\n3. The authors evaluate their proposed framework for multiple downstream prediction tasks.\", \"weaknesses\": \"1. The proposed work lacks novelty. The framework does not possess the originality expected by ICLR and is very similar to existing frameworks. In fact, it is unclear to me what the novelty is. The methods section presents the vanilla InfoNCE loss and then discusses the augmentations, preprocessing, architecture and training scheme.\\n2. Comparison to existing SOTA frameworks is very limited. The authors do mention relevant papers but only compare to a few in the results section. There are many notable works in this area that are worth comparing to. \\n3. Did the authors conduct hyperparameter tuning for their proposed framework and baselines? What was the approach? I appreciate that the final values are listed in the appendix however this is not sufficient. \\n4. The authors did not conduct any statistical significance testing nor did they provide confidence intervals to understand whether the performance improvements are actually worthwhile. \\n5. Numbers should be presented with three significant figures.\\n6. 
There are empty rows in Tables 4 and 6, why is that?\", \"questions\": \"Please see above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for addressing my questions and concerns. Unfortunately I will not change my score considering that the authors did not conduct hyperparameter tuning, limited comparisons to SOTA work, and due to lack of novelty.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response 1 to Reviewer T2x4\", \"comment\": \"We highly appreciate your insightful comments and value your interest in indicating weaknesses in our work. You highlighted important research questions that we carefully tried to address. We regret that our work was not satisfactory for your approval previously but we have greatly revised the paper organization and presentation that we would be thankful for you to go through. We would be grateful for your time to read our responses and view the revised manuscript.\\n\\nWe have restructured Table 2 and replaced the previous Table 3 with a graph. We have tried to rephrase where our previous explanation could be confusing or misleading and provided additional results in Appendix A, B, and C that might address most of your concerns and we are hopeful will result in revising your previous assessment. We will now go through each comment and present our response and the corresponding changes in the new version of our paper.\\n\\n**Reviewer comment:**\\nThe paper appears to lack analytical depth. 
Adding an exploratory data analysis (EDA) would provide a more comprehensive understanding of data distribution and representation, which could, in turn, enhance the reliability of the results.\\n\\n**Response:**\\nWe have added some demographics of the datasets in Appendix A Table 6, and we also provide references in Section 3.1 where further details can be obtained.\\n\\n**Reviewer comment:**\\nAdditionally, assessing the representation distribution across different data types could further substantiate the FM\\u2019s robustness. For example, first, using methods like PCA or t-SNE, visualize the embeddings generated by the model in 2D or 3D to examine whether different data types fall within similar distributions or are distinctly separated. This analysis helps assess if the model represents various data types consistently.\\nSecond, analyze whether different data types are well-mixed or independently clustered to understand if the model extracts consistent features across types or demonstrates bias toward certain types.\\nThird, examine specific features the model prioritizes across data types (e.g., certain words in text, patterns in images, frequency ranges in signals) to determine if the model consistently learns representations across data types.\\nLastly, feature correlation analysis assesses the relationships between key features learned by the model, which is especially insightful for multimodal data to understand how the model connects specific features across types.\\n\\n**Response:**\\nWe have added some interpretability analysis in Appendix B for the PTB-XL dataset. We would like to highlight that our model is self-supervised so the learned representations are generic and thus suitable to a foundation model. The tasks are multi-label and composite so the same ECG can manifest different conditions and the same label encompasses several modes. Figure 4 shows how the principle components of the embedding encode different superclasses. 
It can be appreciated that the embedding space places similar ECGs in nearby locations. Some of the composite classes can be observed to form several clusters. The ECG encoding may not be limited to cardiac diseases alone but may capture the whole underlying structure of the distribution. The figure plots t-SNE components for only those features that highly correlate to a given label: sex, age, and normal/abnormal. The different classes fall on opposite sides of PC1, while a regression target like age shows a gradient. We address your third comment with Figure 6, using Grad-CAM to show how the model represents different classes. The STTC class is related to the ST segment of an ECG, so we look at some positive and negative samples. It was interesting to note that the model classifies by giving more importance to the ST segment for negative classes (normal). It is also intuitive that, instead of recognizing several abnormal modes, the model finds it more efficient to recognize the normal ST segment. We have added a feature correlation map in Figure 5 to signify how the learned features relate to each other.\\n\\n**Reviewer comment:**\\nThe model primarily relies on temporal transformations such as zero-masking and cropping, with limited exploration of other data augmentation techniques.\\n\\n**Response:**\\nThe selection of temporal augmentations is based on the fact that the signal in this case is important biological data whose scale, frequency, and underlying patterns carry clinical implications, so augmentations like scaling, warping, rotating, etc. may result in features that are agnostic to important clinical information for a potential downstream task; the model would thus lose the generalization that is an important aspect of a foundation model. We experimented with some variations of the masking and also tried a novel augmentation using raw/filtered ECGs. 
We now include these results in Table 2.\"}", "{\"title\": \"Response 2 to Reviewer T2x4\", \"comment\": \"**Reviewer comment:**\\nMoreover, the training approach employs conventional contrastive learning without attempting novel methods.\\n\\n**Response:**\\nThe particular loss function employed resulted in robust performance and far surpassing a much larger supervised and pretraining approach. In the scope of current work, we limit ourselves to our best-performing representation learning approach although we did experiment with variations of the loss function, but the performance was not comparable. We have previously published work employing triplet loss and VAE.\\n\\n**Reviewer comment:**\\nThe choice to rely solely on masking lacks sufficient justification, making it challenging to fully accept this design decision. Given the reliance on a limited set of augmentation techniques (https://dl.acm.org/doi/10.1145/3511808.3557591, https://arxiv.org/abs/2204.04360), it is difficult to assess their true effectiveness. To strengthen the study and provide clearer evidence of the augmentation strategy\\u2019s impact, it would be beneficial to incorporate and compare additional methods discussed in previous research, as outlined in this paper and this paper. Such a comparative approach would provide deeper insights and enhance the robustness and reliability of the findings regarding the model\\u2019s performance.\\n\\n**Response:**\\nThe ECG signal is a snapshot of a biological process where any augmentation cannot be safely incorporated as the training process discards features encoding the augmentation dimension. The selection of the particular augmentations is based on the fact that such random masking and cropping do not change the signal characteristics and improve the generalization of the approach which is an important consideration for a foundation model. 
The two publications that you suggest (https://dl.acm.org/doi/10.1145/3511808.3557591, https://arxiv.org/abs/2204.04360) also strengthen our conviction that the use of augmentation cannot be applied without an understanding of the tasks. These research endeavors are focussed on data augmentation for a particular supervised task and while some augmentations might improve the performance on one task, it may have adverse effects on others. https://arxiv.org/abs/2204.04360 in particular notes that zero-masking being \\u201clabel preserving\\u201d, provides a more robust performance across a range of labels. We also experimented with random masking across leads, random lead masking, and using unfiltered signals as augmentations but the performance did not change so we retained the simpler configuration. We have revised Section 3.3 which defends and explains our choice of augmentations. We have also added the suggested references with the following explanation:\\n\\n*Section 2:\\n\\u201cPast work has shown that while some augmentations might improve the performance on one task, they may have adverse effects on others (Lee et al., 2022; Raghu et al., 2022).\\u201d\\n*\\n\\n**Reviewer comment:**\\nTo substantiate the proposed approach, it is essential to compare and analyze its performance against various existing learning approaches. .....Comparing the model\\u2019s performance against currently available models, such as those discussed in (https://arxiv.org/abs/2408.05178), would provide a solid benchmark.\\n\\n**Response:**\\nIn the main paper, we only limited comparisons to where the original publications report a metric. A lot of performance gain can be obtained by proper optimisation so we think it is unfair to report our version that may not be as well optimised. Table 5 includes Song et al. (2024) (https://arxiv.org/abs/2407.07110) which is a foundation model trained by a hybrid MAE+ contrastive learning. 
We have also added some performance comparisons with a range of other pretraining methodologies in Appendix B, Table 7, including MAE-based approaches like ST-MEM (https://iclr.cc/media/iclr-2024/Slides/18470.pdf).\\n\\n**Reviewer comment:**\\nAdditionally, applying alternative contrastive learning techniques, such as MoCo, could reveal how different contrastive frameworks... model\\u2019s predictive accuracy, robustness, and generalization across different ECG data subsets.\\n\\n**Response:**\\nWe have tried to address all of your concerns by adding Appendix B, Table 7, and restructuring Table 2 as an ablation study for augmentations. We have also added interpretability analyses, and we hope that you can appreciate the performance edge that the proposed method has and how we have integrated clinical understanding into our model design. In future work, we can further explore whether the loss function can be optimized.\"}", "{\"title\": \"Response 2 to Reviewer hUMx\", \"comment\": \"**Reviewer comment:**\\n\\n3/ Poorly designed experiments\\n\\nThe experimental design was fragmented and confusing. It lacked a systematic framework and experimental design to fairly benchmark the proposed method against the SOA methods over various tasks and datasets.\\n\\n**Response:**\\nThe experimentation can be divided into a proof of concept in Section 4.1 that performs some ablation study using the supervised task labels from the pre-training dataset. Section 4.2 provides additional investigation regarding the impact of racial and health diversity on both the pre-training cohort and labelled datasets, providing valuable insights for future work. 
Section 4.3 is limited to comparisons with SOTA techniques that report performance on PTB-XL, since it would not be fair to compare our implementation of other approaches that may not be sufficiently optimised, but we incorporated an additional Appendix B that compares our approach to a wider range of pretraining applications, including approaches that use auxiliary information like ECG reports to provide zero-shot inference. We believe that integrating such information adds bias that can hurt the generalization that is important for a foundation model. The proposed approach outperforms all techniques.\\n\\n**Reviewer comment:**\\nIn Table 2, the authors compared PCLR with the proposed TA-PCLR. But PCLR was only pre-trained over the MGH and BIDMC datasets, not the combined BCSV dataset. The combined BCSV dataset is the largest. Why wasn't a head-to-head comparison done for PCLR and TA-PCLR on the BCSV dataset?\\n\\n**Response:**\\nWe modified Table 2, as the table mainly represented an ablation study of the approach and the comparison with the pretrained PCLR was only confusing the reader. We added results for other variations of temporal augmentations that we had explored. The table intends to show how each component improves the performance, so for the same configuration TA-PCLR performs better than PCLR. Finally, we took the best configuration and trained it with the larger dataset. We also restructured and updated the explanation in Section 4.1 accordingly. Due to the long training time, model development was only undertaken with BIDMC and only the final optimum model was pretrained with the larger dataset.\\n\\n**Reviewer comment:**\\nIn Table 3, the authors then compared the proposed TA-PCLR against ResNet. Why was ResNet, quite an old model, picked as a baseline model for comparison here? Why wasn't the performance of PCLR included here?\\n\\n**Response:**\\nWe replaced Table 3 with Figure 2 and present the comparison through plots. 
We compare the performance of TA-PCLR with supervised training using a similar model as the backbone architecture. The architecture was based on ResNet for comparison to PCLR, but since the model outperforms both supervised training and pretraining of models with orders of magnitude more parameters, we consider the architecture adequate for expressing ECG features. Table 2 demonstrates the contribution of each design component, which also includes patient-based augmentation (PCLR). The data-size comparison to the supervised approach shows that the performance gain is especially pronounced at low data sizes.\\n\\n**Reviewer comment:**\\nIn Table 4, the authors then compared TA-PCLR with the ensemble model developed by Strodthoff et al. in 2021 and the model in Bickmann et al. (2024) for the classification for both the super classes and sub classes for abnormalities. However, many readings for the result table were missing. What happened?\\n\\nIn Table 5, similarly, many of the result readings were missing. What happened?\\n\\n**Response:**\\nTables 4 and 5 include the performance comparison for the PTB-XL classification only when it is reported in the original publication. That is the reason for the missing values in the tables. We have added \\u2018-\\u2019 to indicate these missing values and also tried to improve the explanation. \\n\\n**Reviewer comment:**\\nIt felt like the team didn't manage to finish the study in time and yet just submitted the manuscript before the deadline?\\n\\n**Response:**\\nWe hope that our explanation of how the different experiments were performed will assure the reviewer of the completeness of the research.\"}", "{\"title\": \"Response\", \"comment\": \"We highly respect your judgment and hope that you had time to go through the updated manuscript, where we tried to incorporate all your suggestions. 
We would be highly grateful if you could read the final manuscript (changes in blue) and the following discussion.\\n\\n**Hyperparameter optimization:**\\nI apologize for misunderstanding \\\"hyperparameter optimization\\\" as referring to an automatic search, which our resources did not allow. I wish to clarify that we experimented with several hyperparameter settings through manual search due to long training times, similar to many past works. We share the best-performing network hyperparameters, as the paper limits did not allow us to go through all settings. For the downstream linear probe, the learning rate was the only hyperparameter to explore, which we optimised within a range of [0.1, 0.00001]. The backbone architecture has been previously optimized for ECG classification in past works. We optimised the learning rate for pretraining, the feature size, the window size, and the MLP architecture for the results in Section 4.1.\\n\\n**Comparison to SOTA approaches**. We have compared our work to a wider range of methodologies in the new **Table 7** (we explain that the comparison in Tables 3 and 4 is limited to those publications that present a result for the particular target). Our results outperform all previous works, whether supervised or using pretraining, even for much more complex models and approaches, including ensembles. This attests to the fact that our contrastive strategy takes clinical understanding into account.\\n\\n**Novelty**. The particular configuration that we use has not been used before. We designed the augmentation by considering the clinical implications of the transformations, and the success of the approach is attested by the performance. Similarly, the experiments on data diversity provide a new perspective that is much needed in the field. 
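For concreteness, the downstream linear-probe sweep described above can be sketched as follows. This is an illustrative toy version only: random features stand in for frozen TA-PCLR embeddings, and the grid values are hypothetical, not our actual training code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for frozen encoder embeddings and binary labels (hypothetical shapes).
X_tr, X_val = rng.normal(size=(512, 320)), rng.normal(size=(128, 320))
w_true = rng.normal(size=320)
y_tr = (X_tr @ w_true > 0).astype(float)
y_val = (X_val @ w_true > 0).astype(float)

def train_probe(lr, steps=200):
    """Logistic-regression probe on frozen embeddings, plain gradient descent."""
    w, b = np.zeros(X_tr.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X_tr @ w + b)))  # sigmoid
        g = p - y_tr                               # dLoss/dlogits
        w -= lr * X_tr.T @ g / len(y_tr)
        b -= lr * g.mean()
    # Validation cross-entropy for this learning rate.
    p_val = 1.0 / (1.0 + np.exp(-(X_val @ w + b)))
    eps = 1e-9
    return -np.mean(y_val * np.log(p_val + eps) + (1 - y_val) * np.log(1 - p_val + eps))

# Manual sweep over the learning-rate range [0.1, 0.00001].
grid = [0.1, 0.01, 0.001, 0.0001, 0.00001]
losses = {lr: train_probe(lr) for lr in grid}
best_lr = min(losses, key=losses.get)
print(best_lr, round(losses[best_lr], 4))
```

The probe itself is a single linear layer, so the learning rate is the only quantity being searched, mirroring the setup described above.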
The ICLR scope also includes \\\"applications to physical sciences (physics, chemistry, biology, etc.)\\\", so we hope that our results and insights are pertinent to it.\"}", "{\"summary\": \"Summary\\nThe study introduces TA-PCLR (Temporally Augmented Patient Contrastive Learning for ECG Representations), a self-supervised learning approach for ECG data. This method combines temporal augmentations and patient-based contrastive learning to improve representation learning in ECGs. Key highlights include:\\n\\n1. Novelty of TA-PCLR: It introduces temporal augmentations, such as zero-masking and random cropping, to enrich representations while preserving clinical relevance.\\n2. Performance Improvements: TA-PCLR outperforms previous methods in tasks such as predicting age, sex, and five-year mortality from ECG data, especially on small and medium-sized datasets where labeled data is scarce.\\n3. Impact of Dataset Demographics: The study explores how population diversity in training datasets influences model performance, showing that a mixed, diseased cohort provides richer ECG feature diversity and enhances model performance.\\n\\nIn independent validation with the PTB-XL dataset, TA-PCLR surpasses other models in diagnostic tasks, establishing itself as a robust foundation model for ECG interpretation. Future directions include refining data augmentations and enhancing interpretability in ECG feature learning.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Effective Representation Learning: TA-PCLR leverages temporal augmentations with patient-based contrastive learning to enhance feature extraction. 
This approach enables more nuanced and clinically relevant ECG representations, which are critical for downstream diagnostic tasks, even in data-scarce environments.\", \"Broad Applicability and Generalization: Training on a diverse dataset of over 6 million ECGs from multiple global cohorts makes TA-PCLR highly generalizable. This diversity allows the model to perform well across varied patient demographics and health conditions, ensuring its robustness across different healthcare settings.\", \"Superior Performance on Limited Data: The model\\u2019s ability to perform better than fully supervised methods on small to medium-sized labeled datasets makes it especially valuable in clinical contexts where labeled data is often scarce. This makes TA-PCLR a powerful tool for institutions with limited data resources.\"], \"weaknesses\": \"The important contribution of this research is that it highlights the importance of geographical and racial diversity in training data when creating machine learning models for electrocardiograms, but on the other hand, the results of this research have only been verified using the PTB-XL dataset.\\nThe scope of this study may be narrower than the general interest of the ICLR main conference.\", \"questions\": \"Do the authors have any specific proposals for the above weaknesses? How can we build the necessary data sets to verify geographical and racial diversity?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces Temporally Augmented Patient Contrastive Learning of Representations (TA-PCLR), a novel approach that incorporates temporal augmentations into a patient contrastive self-supervised foundation model.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. 
This method adds temporally augmented patient contrastive learning that incorporates augmentations along the temporal axis.\\n2. This method constructed a new multi-center dataset for training with over six million ECGs with four cohorts from three different continents.\", \"weaknesses\": \"1. The organization of this paper needs improvement. The color of Fig. 2 is too dark. There are some empty results in Tables 4 and 5, for which a '-' should be added and the reason explained.\\n2. In Table 2, the pretraining cohorts in PCLR and TA-PCLR are different. One is MGH, and another is BCSV. No conclusion can be drawn on the effectiveness of TA-PCLR because different cohorts are used.\\n3. The novelty of this method is a concern. Basically, this method just adds a general contrastive learning strategy to learn ECGs between patients.\", \"questions\": \"When showing the performance comparison between ResNet and TA-PCLR, are there any latest SOTA methods, like attention or transformer-based models, to compare?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper proposes temporally augmented patient contrastive learning for ECG data. The idea is to randomly crop ECG data as a form of augmentation with minimal changes to the underlying clinical pathology implied by the cropped ECG signal. This is then combined with an existing approach for patient-centered contrastive learning to build representations of ECG data. The experiments are run on a large pretraining dataset and the representations are used to make predictions of age, sex, mortality, etc. I think the biggest issue that came out of the reviews and rebuttal period was around novelty -- I think the specific choice of augmentation used here is novel, but most of the reviewers felt this alone was not sufficient. 
There were issues around writing, ablations, and experiments that I think were well responded to and addressed during the rebuttal phase by the authors. One suggestion to expand the contributions of the work is to (a) study whether this form of augmentation works across different training and neural-network architectural choices and (b) update the proposal with improved graphics and visual aids on why the method works.\", \"additional_comments_on_reviewer_discussion\": \"The authors do a good job providing detailed explanations and further clarifications about their work. However, at the conclusion of the discussion period, the reviewer felt that a single proposal around a form of augmentation offered insufficient depth in terms of contribution.\"}", "{\"summary\": \"The study aimed to develop a contrastive learning based foundation model for ECG data to predict patients' age, sex, 5-year mortality, and various cardiac abnormalities. The authors claimed that they have proposed a novel approach by introducing temporal augmentation to the previously proposed contrastive learning algorithm PCLR. Experiments with the major ECG datasets that are publicly available were conducted to benchmark the performance of the proposed method against some of the existing methods.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"Development of foundation models for medical data and signals is obviously an important topic worth studying. Additionally, it is also true that the cost of engaging physicians to provide high quality labels for medical data is indeed much higher than in many other domains. I can see how this can motivate the use of contrastive learning on medical data analysis.\", \"weaknesses\": \"1/ Lack of originality and novelty\\n\\nThe proposed method is built upon the previously published contrastive learning backbone, PCLR (by Diamant, Nathaniel, et al. in 2022). 
The only enhancement was the introduction of some temporal augmentation of the ECG signals. These temporal augmentations included zero-masking and random cropping. These augmentations were nothing new either. Also, none of these augmentations were specifically inspired by any in-depth understanding of the ECG signal and its unique characteristics. \\n\\nA commonly used loss function was used as well.\\n\\nI failed to see the technical originality and contributions here.\\n\\nCan the authors highlight and justify their technical novelty and contributions in this work further?\\n\\n2/ Possibly inflating some information\\nThe authors claimed in the introduction that they trained their foundation model over 6 Mil ECGs. After reading into the details, I learned that the authors actually counted a 12-lead ECG measurement of a single patient as 12 ECG signals. This is a bit unusual. I can't help but wonder whether the team might have presented the dataset size in an inflated manner.\\n\\n3/ Poorly designed experiments\\n\\nThe experimental design was so fragmented and confusing. There was a lack of a systematic framework and experimental design to fairly benchmark the proposed method against the SOTA methods over various tasks and datasets.\\n\\nIn Table 2, the authors compared PCLR with the proposed TA-PCLR. But PCLR was only pre-trained over the MGH and BIDMC datasets but not the combined BCSV dataset. The BCSV combined dataset is the largest. Why wasn't a head-to-head comparison done for PCLR and TA-PCLR on the BCSV dataset?\\n\\nIn Table 3, the authors then compared the proposed TA-PCLR against ResNet. Why was ResNet, quite an old model, picked as a baseline model here? Why wasn't the performance of PCLR included here?\\n\\nIn Table 4, the authors then compared TA-PCLR with the ensemble model developed by Strodthoff et al. in 2021 and the model in Bickmann et al. 
(2024) for the classification for both the super classes and sub classes for abnormalities. However, many readings for the result table were missing. What happened?\\n\\nIn Table 5, similarly, many of the result readings were missing. What happened?\\n\\nIt felt like the team didn't manage to finish the study in time and yet just submitted the manuscript before the deadline?\", \"questions\": \"Please refer to the above section for my questions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer 7RW1\", \"comment\": \"We highly appreciate your time and careful review. Your suggestions allowed us to identify the weaknesses of our research and helped to improve the quality of the work. We regret that you found the work to be lacking in many respects, and we will try our best to address your concerns. We have substantially revised our work, with additional results and rephrasing where the wording could be confusing or misleading. Your main concern is novelty, so we wish to explain why the research would be of great interest to representation learning, and specifically to work involving electrocardiogram data. We demonstrate a remarkable performance improvement over prior research, since the design of the augmentations was inspired by a clinical perspective. The main paper limits the comparison to research which reports performance for the PTB-XL dataset, removing bias due to a lack of optimization. We add Table 7 in Appendix B, providing a wider comparison with a range of past works, and additional metrics and statistics in Table 8. Another main aspect is the use of a large multi-center cohort that allowed us to gain further insights about the impact of the underlying data distribution on the pre-training and subsequent supervised tasks, which can be of great value for the particular field. 
We also incorporate an interpretability study in Appendix C with further discussion about how the model encodes ECG features. We hope that you will consider revising your previous score.\\n\\n**Reviewer Comment:**\\nThe proposed work lacks novelty. The framework does not possess the originality expected by ICLR and is very similar to existing frameworks. In fact, it is unclear to me what the novelty is. The methods section presents the vanilla InfoNCE loss and then discusses the augmentations, preprocessing, architecture and training scheme.\\n\\n**Response:**\\nOur approach uniquely combines components from existing literature from the perspective of clinical implications and employs a large multi-center pretraining cohort to present a foundation model that is essential for this application domain, where labeled data is scarce. We outperform a highly optimised ensemble of supervised networks, as well as more complex pretraining approaches, thus proving the efficiency and strength of the approach. That poses our foundation model as an interesting contribution to the domain of ECG interpretation, where performance is often limited by the small size of labeled datasets. \\n\\n**Reviewer Comment:**\\nComparison to existing SOTA frameworks is very limited. The authors do mention relevant papers but only compare to a few in the results section. There are many notable works in this area that are worth comparing to.\\n\\n**Response:**\\nWe limited the performance comparison in the main paper to research that reports performance for the PTB-XL labels, as the performance greatly depends on the optimisation of hyperparameters for both the pretraining and downstream tasks. We believe that it is not fair to compare with our implementation of other approaches that may not be as thoroughly optimised. To provide more comparability, we have added Table 7 in Appendix B, comparing the performance with a range of other methodologies. 
The proposed approach demonstrates much higher performance compared to other unsupervised pretraining approaches.\\n\\n**Reviewer Comment**:\\nDid the authors conduct hyperparameter tuning for their proposed framework and baselines? What was the approach? I appreciate that the final values are listed in the appendix, however this is not sufficient.\\n\\n**Response:**\\nWe have incorporated further details in all relevant sections. An automatic hyperparameter optimisation was not performed for our work; mainly the learning rate was manually optimised within a range of [0.1, 0.00001] for supervised tasks, while the backbone architecture is similar to past works (https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1009862) for a fair comparison.\\n\\n**Reviewer Comment:**\\nThe authors did not conduct any statistical significance testing nor did they provide confidence intervals to understand whether the performance improvements are actually worthwhile.\\nNumbers should be presented with three significant figures.\\nThere are empty rows in Tables 4 and 6, why is that?\\n\\n**Response:**\\nFor all experiments conducted, the performance is reported as the mean of ten independent runs, similar to the available literature. We provide three significant figures for our experiments and the available values from past work. We have added additional results in Appendix B Table 8 that provide both means and 95% confidence intervals for multiple metrics, for all results reported in Section 4.3.\"}", "{\"summary\": \"This paper proposes a novel Electrocardiogram Foundation Model using Temporally Augmented Patient Contrastive Learning (TA-PCLR), focusing on building a large-scale FM using ECG data. 
The TA-PCLR model combines patient-based contrastive learning with temporal augmentations, and it demonstrates superior results over traditional fully-supervised models.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Compared to other ECG FM studies, a key strength of this paper is the utilization of large-scale data. By handling over six million ECGs, the model incorporates diverse demographic characteristics, creating favorable conditions for improved model generalization.\", \"weaknesses\": \"The paper appears to lack analytical depth. Adding an exploratory data analysis (EDA) would provide a more comprehensive understanding of data distribution and representation, which could, in turn, enhance the reliability of the results. Additionally, assessing the representation distribution across different data types could further substantiate the FM\\u2019s robustness. For example, first, using methods like PCA or t-SNE, visualize the embeddings generated by the model in 2D or 3D to examine whether different data types fall within similar distributions or are distinctly separated. This analysis helps assess if the model represents various data types consistently. Second, analyze whether different data types are well-mixed or independently clustered to understand if the model extracts consistent features across types or demonstrates bias toward certain types. Third, examine specific features the model prioritizes across data types (e.g., certain words in text, patterns in images, frequency ranges in signals) to determine if the model consistently learns representations across data types. 
Lastly, feature correlation analysis assesses the relationships between key features learned by the model, which is especially insightful for multimodal data to understand how the model connects specific features across types.\\n\\nThe model primarily relies on temporal transformations such as zero-masking and cropping, with limited exploration of other data augmentation techniques. Moreover, the training approach employs conventional contrastive learning without attempting novel methods. The choice to rely solely on masking lacks sufficient justification, making it challenging to fully accept this design decision. Given the reliance on a limited set of augmentation techniques (https://dl.acm.org/doi/10.1145/3511808.3557591, https://arxiv.org/abs/2204.04360), it is difficult to assess their true effectiveness. To strengthen the study and provide clearer evidence of the augmentation strategy\\u2019s impact, it would be beneficial to incorporate and compare additional methods discussed in previous research, as outlined in the two papers linked above. Such a comparative approach would provide deeper insights and enhance the robustness and reliability of the findings regarding the model\\u2019s performance.\", \"questions\": \"To substantiate the proposed approach, it is essential to compare and analyze its performance against various existing learning approaches. Currently, the model relies on contrastive learning; however, evaluating its performance relative to other frameworks, such as Masked Autoencoder (MAE), could provide valuable insights. Specifically, understanding how TA-PCLR performs in comparison to MAE or other non-contrastive self-supervised learning methods would offer a clearer picture of its strengths and limitations in feature extraction and representation learning.\\n\\nFrom a technical perspective, additional experiments are proposed to further validate the approach. 
Comparing the model\\u2019s performance against currently available models, such as those discussed in (https://arxiv.org/abs/2408.05178), would provide a solid benchmark. Additionally, applying alternative contrastive learning techniques, such as MoCo, could reveal how different contrastive frameworks impact model performance. It would also be insightful to experiment with exclusively using MAE as a non-contrastive method to better understand the unique contributions of the contrastive approach in this context.\\n\\nBy conducting these comparative experiments with a range of learning frameworks, this study could offer a more comprehensive evaluation, showcasing both the effectiveness of TA-PCLR and the relative benefits of its contrastive learning strategy. Such an analysis would greatly enhance the reliability and depth of the findings regarding the model\\u2019s capabilities in feature extraction and representation learning.\\nA comparative analysis would allow us to assess whether TA-PCLR\\u2019s temporal augmentations and patient-based contrastive learning truly offer a competitive advantage over alternative methods. Such comparisons could reveal potential improvements in the model\\u2019s predictive accuracy, robustness, and generalization across different ECG data subsets.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer poSx\", \"comment\": \"We greatly appreciate your encouraging reviews and discussion about the weaknesses. We believe that the work is pertinent to the interests of ICLR due to its discussion of and insights into representation learning in medical applications. The design of our approach is inspired by a clinical perspective, and we have further improved the overall organization and rephrased where the wording could be confusing or misleading. 
We have also updated Table 2 with a comparison of several variations of the temporal masking that we experimented with. We also replaced Table 3 with Figure 2. Additional comparisons with other approaches are incorporated in Appendix B Table 7, with detailed metrics in Table 8. Some preliminary work on interpretability is also incorporated in Appendix D Figures 4 to 6, which may be of great interest to the ICLR conference. We would be highly grateful for your time and interest in going through the revised paper and reconsidering your previous score.\\n\\n**Reviewer comment:**\\nThe important contribution of this research is that it highlights the importance of geographical and racial diversity in training data when creating machine learning models for electrocardiograms, but on the other hand, the results of this research have only been verified using the PTB-XL dataset. The scope of this study may be narrower than the general interest of the ICLR main conference.\\nDo the authors have any specific proposals for the above weaknesses? How can we build the necessary data sets to verify geographical and racial diversity?\\n\\n**Response:**\\nThe contributions of our work are multi-faceted, based not only on the exploration of the importance of geographical and racial diversity in both the pretraining and labeled datasets but also on the foundation model itself, which is essential for a domain where labeled data is expensive. The PTB-XL dataset is mainly employed for comparison to past literature, but the specific aspect of the diverse dataset is explored in Section 4.2, where we have designed a series of experiments to investigate it. 
The experiments show that the performance is impacted by dataset diversity, but we prove that our multi-centered dataset makes our model robust to the diverse distribution, consistently performing best for all labeled datasets.\"}", "{\"comment\": \"After reading all the reviewers' comments and the author's response, I could clearly understand that the author places importance on a research approach that emphasizes clinical application. This is an advantage from the perspective of those who emphasize application and a weakness from the perspective of those who emphasize theory, but it is difficult to achieve a perfect balance within a single paper. I will maintain the current score.\"}", "{\"title\": \"Response to Reviewer pv5E\", \"comment\": \"We are highly grateful for your valuable comments and suggestions that helped us identify the weaknesses of our work and greatly enhance its quality. We have carefully studied all your comments and incorporated them into our revised manuscript. We have not only improved the paper organization and the overall text but also performed additional experiments. We have reorganized Table 2 as an ablation study and added results from the exploration of different variations of our temporal augmentations, together with novel raw vs filtered (clean) ECG contrasting. Table 3 is converted to a graph format. Additional Appendices A, B, and C are added. Appendix A provides details about the demographics of the datasets used in the research. The comparisons for the external dataset PTB-XL in Section 4.3 are limited only to publications that report metrics for the classification tasks for PTB-XL. Appendix B Table 7 provides a broader comparison with SOTA techniques that demonstrates the superiority of our approach. Finally, Appendix C shares some insights from the interpretability investigation, providing interesting observations. We would be grateful if you could go through our responses and our revision. 
We would be highly appreciative if you would reconsider your previous score.\\n\\n**Reviewer comment:**\\nThe organization of this paper needs improvement. The color of Fig. 2 is too dark. \\n\\n**Response:**\\nWe have updated the colors of Figure 2.\\n\\n**Reviewer comment:**\\nThere are some empty results in Tables 4 and 5, for which a '-' should be added and the reason explained.\\n\\n**Response:**\\nThanks for pointing out the omission. We have added dashes \\u2018-\\u2019 in the empty cells where the metric values are not available in the original publication. The performance greatly depends on the optimisation of hyperparameters for both the pretraining and downstream tasks, so we believe that it is not fair to compare with our implementation of other approaches that may not be as thoroughly optimised. We have, however, added a broader comparison in Table 7 (Appendix B).\\n\\n**Reviewer comment:**\\nIn Table 2, the pretraining cohorts in PCLR and TA-PCLR are different. One is MGH, and another is BCSV. No conclusion can be drawn on the effectiveness of TA-PCLR because different cohorts are used.\\n\\n**Response:**\\nWe have restructured the table and removed the PCLR trained with MGH as it was confusing readers. The initial intent was to show the impact of all components, and PCLR with MGH was used as a baseline. The table now also presents the results for some variations of temporal augmentations that we experimented with.\\n\\n**Reviewer comment:**\\nThe novelty of this method is a concern. Basically, this method just adds a general contrastive learning strategy to learn ECGs between patients.\\n\\n**Response:**\\n\\nWe use the existing augmentations in a novel combination inspired by a clinical perspective, together with a large multi-center pretraining cohort, to outperform other pretraining approaches. We were able to outperform a highly optimised ensemble of supervised networks with a much larger number of parameters, attesting to the efficiency of the approach. 
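To make the augmentation design concrete, a minimal sketch of the two temporal augmentations (random cropping and zero-masking) could look like the following; the lead count, sampling rate, and window lengths here are illustrative stand-ins, not the exact values used in the paper:

```python
import numpy as np

rng = np.random.default_rng(7)

def random_crop(ecg, crop_len):
    """Take a random contiguous window along the temporal axis.

    ecg: array of shape (leads, samples).
    """
    start = rng.integers(0, ecg.shape[1] - crop_len + 1)
    return ecg[:, start:start + crop_len]

def zero_mask(ecg, mask_len):
    """Zero out a random contiguous temporal segment (applied to all leads)."""
    out = ecg.copy()
    start = rng.integers(0, out.shape[1] - mask_len + 1)
    out[:, start:start + mask_len] = 0.0
    return out

# A fake 12-lead, 10-second ECG at 500 Hz (shapes are hypothetical).
ecg = rng.normal(size=(12, 5000))
view = zero_mask(random_crop(ecg, crop_len=2500), mask_len=250)
print(view.shape)  # (12, 2500)
```

Both transformations only move or hide segments along the time axis, which is the clinical motivation discussed above: they leave the underlying pathology of the remaining signal unchanged.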
That poses our foundation model as an interesting contribution to the domain of ECG interpretation, where performance is often limited by the small size of labeled datasets. Additionally, interesting research questions, like the effect of dataset ethnic and health-status diversity on the learned representations and the impact of the labeled data distribution on the performance, are also discussed and interesting insights are shared.\\n\\n**Reviewer comment:**\\nWhen showing the performance comparison between ResNet and TA-PCLR, are there any latest SOTA methods, like attention or transformer-based models, to compare?\\n\\n**Response:**\\nSection 4.1 is presented as a proof of concept, thus a similar network is employed for both the supervised and our pretraining approach. The comparison to ResNet in Table 3 is now changed to a graphical format, and the label ResNet is changed to supervised, to improve clarity. The comparison in Section 4.3 involves SOTA approaches including Strodthoff et al. (2021) (Table 3), a supervised approach involving an ensemble comprising larger ResNets, Inception, LSTM, etc. Song et al. (2024), compared in Table 4, is a transformer-based approach. We have added a further performance comparison in Table 8 of Appendix \\u201cAdditional results for PTB-XL\\u201d, which covers a range of pretraining methodologies with diverse architectures.\"}", "{\"title\": \"Response 1 to Reviewer QAYi\", \"comment\": \"We wish to thank you for your detailed and insightful comments. We tried our best to respond to each comment in the Weaknesses and Questions sections of your review. We have updated our paper in light of your reviews and would be thankful for your time to go through our response and revised manuscript. We hope that you will reconsider your previous score.\\n\\n**Reviewer comment:**\\nThis paper introduces a novel method more than it does a foundation model. 
Although they do pretrain on a large dataset, there is a lack of experimentation which only seems reasonable when releasing a foundation model. Namely, model scaling, considering how they adopt a relatively smaller architecture; They pride themselves on this point, however, this is unfounded due to a lack of confirmation that such a small deep neural network could make the most effective use of the vast pretraining dataset. This would have been vital to confirm. \\n\\n**Response:**\\nYou have pointed out an interesting research question that will be important for future research. In the scope of our current work, we selected the particular ResNet architecture with ten convolutional layers (about six million parameters) based on the fact that this architecture has been extensively explored in the previous literature. Using a similar backbone, we can fairly demonstrate the efficacy of our approach: first the pre-training strategy and then the large unlabelled, diverse cohort. Prior research recommends the rule of ten times more data than model parameters (https://www.sciencedirect.com/science/article/pii/S1755534518300058), so the model size of more than 5 million parameters is still quite large for our dataset of six million. The fact that the model performs equally well or even better than supervised approaches can be an indication that the representations - and hence the size of the model - are adequate to effectively capture ECGs. We present the model as a foundation model that can greatly facilitate the performance of any generic ECG-based supervised task, as labeled datasets are scarce in the medical domain. 
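As a back-of-the-envelope illustration of the sizing argument above (the counts below are the approximate round figures quoted in this discussion, not exact values):

```python
# Rule-of-thumb check: prior work suggests roughly ten times more training
# samples than model parameters. Here the two quantities are of the same
# order, so the backbone is, if anything, large relative to the data.
n_params = 6_000_000  # ~6M-parameter ResNet backbone (approximate)
n_ecgs = 6_000_000    # ~6M pretraining ECGs (approximate)

ratio = n_ecgs / n_params
print(f"data/parameter ratio: {ratio:.1f} (rule of thumb suggests ~10)")
```

By this heuristic, scaling the architecture up further would only widen the gap to the recommended data-to-parameter ratio.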
The long training times and resource constraints prevent us from exploring architecture scalability in the scope of our current work, but we highlight the importance of this investigation in our conclusions in Section 5.\\n\\n**Reviewer comment:**\\nThe literature background refers to just a couple existing foundation models (and only in passing). \\n\\n**Response:**\\nWe add a paragraph about foundation models in Section 2.\\n\\n**Reviewer comment:**\\nThere is no interpretability method. \\n\\n**Response:**\\nInterpretability is an important consideration for medical data, and we add some results regarding interpretability in Appendix C. Figures 3 and 4 show t-SNE visualizations of the model embeddings, and Figure 5 shows the importance of the different ECG segments using Grad-CAM. We also highlight the importance of future research in the conclusions.\\n\\n**Reviewer comment:**\\nThey also report just one metric per task, which greatly reduces study comparability. Considering these points, it seems lacking for a foundation model paper, even though it seems sufficient for simply introducing a novel pretraining method.\\n\\n**Response:**\\nThe macro AUC has been recommended as a threshold-free metric for result comparison and is most frequently employed in past work, thus being essential for comparability. The precision, recall, accuracy, and F1 greatly depend on the threshold used and thus may provide inconsistent information. We add these metrics for the PTB-XL super and sub classes in Appendix B, Table 8, with a threshold of 0.5. We would like to mention that the values would be greatly improved by optimizing the threshold.\\n\\n**Reviewer comment:**\\nConsidering this paper is introducing a new pretraining method, it may have benefited from an ablation study of its various pretraining components beyond comparison to PCLR. 
Or, if the authors believe that this is all that's necessary as an ablation, then explain this in more detail to better communicate their contribution.\\n\\n**Response:**\\nThanks for pointing out the weakness in reporting the results. The main components of the approach can be regarded as patient contrastive augmentation, temporal augmentation, and the large pre-training dataset, and Table 2 represents an ablation study of these components. We have removed results for the PCLR using MGH and restructured the format so the configurations can be clearer for the readers. The table demonstrates that each component is essential to the success of the approach. To further clarify we restructure section 4.1 adding paragraph titles and rephrasing.\"}", "{\"summary\": \"The authors introduce a novel, ECG-specific self-supervised learning approach which incorporates temporal augmentations (random cropping) over patient contrastive methodology. This work is portrayed as a new foundation model for ECG interpretation, where experiments are run on a large, private pretraining dataset of six million unlabeled ECGs and involves a number of downstream tasks (age, sex, mortality, and diagnostic super-classes). They further examine how the pretraining dataset distribution impacts downstream performance by way of dataset-specific training and evaluation.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"Their use of a multi-source pretraining dataset is certainly a strength. Their literature background gives a relatively strong, concise overview of existing pretraining approaches. Their methodology selection is, in large part, explained well and intuitively. Their data scaling was vital in demonstrating TA-PCLR's dominant performance over fully-supervised methods for smaller datasets. 
I have not personally seen the random cropping approach they introduced as their temporal augmentation, which speaks, in part, to its originality.\", \"weaknesses\": \"This paper is introducing a novel method more than it does a foundation model. Although they do pretrain on a large dataset, there is a lack of experimentation which only seems reasonable when releasing a foundation model. Namely, model scaling, considering how they adopt a relatively smaller architecture; They pride themselves on this point, however, this is unfounded due to a lack of confirmation that such a small deep neural network could actually make most effective use of the vast pretraining dataset. This would have been vital to confirm. The literature background refers to just a couple existing foundation models (and only in passing). There is no interpretability method. They also report just one metric per task, which greatly reduces study comparability. Considering these points, it seems lacking for a foundation model paper, even though it seems sufficient for simply introducing a novel pretraining method.\\n\\nConsidering this paper is introducing a new pretraining method, it may have benefited from an ablation study of its various pretraining components beyond a comparison to PCLR. Or, if the authors believe that this is all that's necessary as an ablation, then explaining this in more detail to better communicate their contribution.\\n\\nCertain methodological matters are somewhat ambiguous. It was unclear whether hyperparameter tuning was performed. It is not made abundantly clear that the positive pairs are necessarily augmented instances of the same recording (and not from different recordings of the same patient). 
It is unclear whether there was any signal preprocessing (such as standardization), and if not, this may be worth stating.\\n\\nCertain purported claims are left unsubstantiated, for example: \\\"Human perception is limited by low visual accuracy, gaps in theoretical knowledge, and the complexity of the diverse, non-linear, interrelations.\\\"\\n\\nCertain references may be misleading provided its context, for example, \\\"Hybrid techniques combining contrastive learning and generative pretraining based on transformer architecture have been implemented to train foundation models for ECG feature extraction (Song et al., 2024; McKeen et al., 2024)\\\" may imply that ECG-FM (McKeen et al., 2024) using generative pretraining, which is not true.\\n\\nCertain definitions are questionable, for example, the statement that, \\\"There are diverse contrastive learning approaches, differing in their definitions of the positive and negative instances and the loss computations\\\" negates how certain contrastive methods do not apply here, such as using a triplet loss. Or also \\\"In the visual image domain, the positives are augmentations (transformations) of the same image while the negatives are augmentations from others\\\", which is not necessarily true. This also occurs in the literature background. Simply making less specific claims or by allowing for the existence of other alternatives (e.g., stating \\\"may\\\" or \\\"usually\\\") could solve the issue. Another is \\\"The feature-generating model is frozen while a single neuron is trained to predict each label\\\", which is misleading since neurons are computational nodes, it is the linear layer parameters being trained.\\n\\nCertain methods, such as Song et al., 2024 and 3KG, should likely be referenced in the literature background in greater detail. 
There is some methodology mixed into the results section (Song et al., 2024), which may or may not be necessary, but can hurt the flow.\\n\\nThere are typos (\\\"same train/test splits..\\\") and a random section link (\\\"Unique patients1\\\").\\n\\nCalling the dataset-specific training and evaluation \\\"the first investigation of its kind\\\" is an ambitious statement, where the abstract had me expecting some kind of domain shift/adaptation methodology to be applied. The analysis seems good, does emphasize their multi-source dataset, and is important to understand how data distributions affect performance (e.g., healthy versus unhealthy subjects); However, the results of this analysis left me unsure as to what explicit value it added to this particular study.\\n\\nMore discussion is warranted as to why performance on the fully supervised benchmarks would become similar after a finetuning dataset of 100k is reached.\\n\\nThe paper could have benefited from additional non-tabular figures, such as a data scaling graph.\", \"questions\": \"Is it fitting for this paper to introduce a foundation model given the nature of the experimentation performed, as well as those related points raised under \\\"Weaknesses\\\"?\\n\\nI started (and sort of left) the paper wondering: Is TA-PCLR a combination of individual existing methods which form a novel aggregate approach? Even by name, it is clear that TA-PCLR builds off the PCLR method, and yet how is never made totally clear - one would have to read the PCLR paper to learn this. Is it simply adding random temporal cropping to the PCLR method? Was the method/cropping approach inspired by other work in the ECG or timeseries space?\\n\\nConsidering, \\\"The number of ECGs from each patient greatly differs and thus the training epoch is defined as one complete iteration for all unique patients with the positive views randomly sampled at training time. 
In this way, the training is not biased by patients having more ECGs while still exploiting the available data diversity.\\\" I agree that this maintains better overall population diversity, however I would be curious to know whether this would lead to biasing the model away from representing the conditions of those people with chronic health issues (i.e., the people who do have those higher ECG counts). What if it were those people who were the most important to represent well?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response 1 to Reviewer hUMx\", \"comment\": \"We are highly grateful for your insightful comments and your time and interest. Your indication of weaknesses greatly helped to enhance the quality of our work, and especially the last comment was very amusing. We present a unique combination of existing components that is inspired by a clinical understanding of ECGs. The strength of our work is the high level of performance against orders of magnitude more complex networks. Our unique dataset allowed us to explore the effect of dataset diversity and its impact on downstream tasks, which can be highly relevant to the domain, as label-based comparisons are common in the literature. We demonstrate the superiority of our multi-center dataset for improved robustness and generalization across a range of diverse data distributions. We have rephrased and reorganized the manuscript to more effectively describe the approach. Table 2 is restructured to remove confusion, and results are added for different variations of the temporal augmentations and a novel augmentation of contrasting raw vs. filtered signals that we had experimented with. The previous Table 3 is converted to Figure 2, which compares the proposed approach to a fully supervised network with a similar backbone. 
A new section was added as Appendix B that provides a wider comparison to state-of-the-art implementations, where our approach outperforms much more complex pretraining and even a zero-shot approach trained incorporating ECG reports in Table 7 (adding some notion of supervision). Appendix C explores the interpretability of the learned representations and provides interesting insights. We hope that you will go through our responses and revised manuscript and would reconsider your previous score.\\n**Reviewer comment:**\\n1/ Lack of originality and novelty\\n\\nThe proposed method is built upon the previously published contrastive learning backbone, PCLR (by Diamant, Nathaniel, et al in 2022). The only enhancement was the introduction of some temporal augmentation of the ECG signals. These temporal augmentations included zero-marking and random clipping. These augmentations were nothing new either. Also, none of these augmentations were specifically inspired by any in-depth understanding of the ECG signals and their unique characteristics.\\n\\nA commonly used loss function was used as well.\\n\\nI failed to see the technical originality and contributions here.\\n\\nCan the authors highlight and justify their technical novelty and contributions in this work further?\\n\\n**Response:**\\nWe have combined existing building blocks in a novel configuration that is inspired by an understanding of ECGs. Other works take generic time-series augmentations and apply them to the ECG, which is a biological signal. The scale, axis and frequency carry important information, and thus scaling, rotating, warping, and permutation can alter the underlying cardiac characteristics. Apart from different variations of the masking, we also experiment with a novel augmentation of raw vs. filtered ECGs. We have focused on augmentations that do not alter any physical characteristics and trained the model on a large multi-center dataset added to Table 2. 
We have updated the description in Section 3.3 to more clearly explain the motive for the design decisions.\\n\\nThe remarkable performance of the approach attests to the relevance of our hypothesis and will be very interesting to the representation learning community. The pretraining not only outperforms more complex pretraining but also a highly optimised ensemble of supervised networks with a much larger number of parameters. Our foundation model is an important contribution to the domain of ECG interpretation, where performance is often limited by the small size of labeled datasets. We also investigate the importance of geographical and racial diversity in training data for both the pretraining and supervised tasks, which will be very interesting for the medical domain, where results are often based on labels evaluated for diverse datasets. \\n\\n**Reviewer comment:**\\n2/ Possibly inflating some information. The authors claimed they trained their foundation model over 6 Mil ECGs in the introduction. After reading into the details, I learned that the authors actually counted the 12-lead ECG measurement of a single patient as 12 ECG signals. This is a bit unusual. Can't help but guess whether the team might present the dataset size in an inflated manner?\\n\\n**Response:**\\nThe training is performed for more than six million individual ECGs. The 12 leads are not counted as 12 but as a single ECG, similar to past literature. We only utilize eight leads, as the rest of the leads are only combinations of other leads. We do not know how the paper can be interpreted to relate the data size to leads, but we further clarify by adding the following in Section 3.1:\\n\\n\\u201cWe denote the combined pretraining cohort as BCSV consisting of more than six million individual ECGs, with each ECG comprising eight leads.\\u201d\"}" ] }
7yncrX80CN
LLM-Augmented Retrieval: Enhancing Retrieval Models Through Language Models and Doc-Level Embedding
[ "Mingrui Wu", "Sheng Cao" ]
Recent advancements in embedding-based retrieval, also known as dense retrieval, have shown state-of-the-art results and demonstrated superior performance over traditional sparse or bag-of-words-based methodologies. This paper presents a model-agnostic document-level embedding framework enhanced by large language model (LLM) augmentation. The implementation of this LLM-augmented retrieval framework has significantly enhanced the efficacy of prevalent retriever models, including Bi-encoders (Contriever, DRAGON) and late-interaction models (ColBERTv2). Consequently, this approach has achieved state-of-the-art results on benchmark datasets such as LoTTE and BEIR, underscoring its potential to refine information retrieval processes.
[ "information retrieval", "text retrieval", "artificial intelligence", "large language models", "data augmentation" ]
Reject
https://openreview.net/pdf?id=7yncrX80CN
https://openreview.net/forum?id=7yncrX80CN
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zYNVexZV6M", "y2LMGEBBpv", "xm492Yvj5t", "w7Mxr5LWtF", "u417aJ5HpI", "u1Q4z5HsVd", "tUVIpoGE9y", "r4CsOEcDmO", "paE78DqwiI", "pL4cmLsMt8", "m85Yu2oKia", "lPeSIIBK4w", "h19Pmr6ByD", "efsgThAl6M", "ajofbAZzvA", "ZHOgWmSD4a", "WcGGCKnBvQ", "V8jAJmgeUW", "UQ6t5xLBtV", "U5RD521WV9", "TxPyiQ6Ml7", "ScZ8JNeZya", "RpKAF1c4ks", "RD6SMHNjYs", "PMUCm5fAMp", "PCkCS9RKfB", "M2OtMmtYsf", "Lf5B0yS2SS", "Iozdq1X7zx", "Hxel064AjY", "E6gSyk5U4I", "6RTbIDPf5M", "5T3ODuj9is", "4cVVVKr31q", "3vVxjn9m8I" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1732313081295, 1733310945355, 1733311080788, 1732676064756, 1732679725512, 1732528598465, 1732314464484, 1732314372616, 1732724359786, 1732493672344, 1732313721468, 1730609912411, 1732575396544, 1733311057088, 1732314025068, 1732306269329, 1730692094429, 1732530753029, 1730810942617, 1732579180980, 1732692311455, 1732734165203, 1732690792225, 1737523569355, 1732734333324, 1732314101968, 1732304799765, 1732693441400, 1732312933779, 1734488458353, 1733311070306, 1732686922017, 1732653767248, 1730179430711, 1732660256991 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3320/Authors" ], [ "ICLR.cc/2025/Conference/Submission3320/Authors" ], [ "ICLR.cc/2025/Conference/Submission3320/Authors" ], [ "ICLR.cc/2025/Conference/Submission3320/Reviewer_ZRAX" ], [ 
"ICLR.cc/2025/Conference/Submission3320/Authors" ], [ "ICLR.cc/2025/Conference/Submission3320/Reviewer_nf7F" ], [ "ICLR.cc/2025/Conference/Submission3320/Authors" ], [ "ICLR.cc/2025/Conference/Submission3320/Authors" ], [ "ICLR.cc/2025/Conference/Submission3320/Reviewer_z7Jy" ], [ "ICLR.cc/2025/Conference/Submission3320/Reviewer_z7Jy" ], [ "ICLR.cc/2025/Conference/Submission3320/Authors" ], [ "ICLR.cc/2025/Conference/Submission3320/Reviewer_fyxe" ], [ "ICLR.cc/2025/Conference/Submission3320/Authors" ], [ "ICLR.cc/2025/Conference/Submission3320/Authors" ], [ "ICLR.cc/2025/Conference/Submission3320/Authors" ], [ "ICLR.cc/2025/Conference/Submission3320/Authors" ], [ "ICLR.cc/2025/Conference/Submission3320/Reviewer_z7Jy" ], [ "ICLR.cc/2025/Conference/Submission3320/Reviewer_ZRAX" ], [ "ICLR.cc/2025/Conference/Submission3320/Reviewer_nf7F" ], [ "ICLR.cc/2025/Conference/Submission3320/Reviewer_z7Jy" ], [ "ICLR.cc/2025/Conference/Submission3320/Authors" ], [ "ICLR.cc/2025/Conference/Submission3320/Authors" ], [ "ICLR.cc/2025/Conference/Submission3320/Reviewer_ZRAX" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3320/Authors" ], [ "ICLR.cc/2025/Conference/Submission3320/Authors" ], [ "ICLR.cc/2025/Conference/Submission3320/Authors" ], [ "ICLR.cc/2025/Conference/Submission3320/Reviewer_ZRAX" ], [ "ICLR.cc/2025/Conference/Submission3320/Authors" ], [ "ICLR.cc/2025/Conference/Submission3320/Area_Chair_M5XY" ], [ "ICLR.cc/2025/Conference/Submission3320/Authors" ], [ "ICLR.cc/2025/Conference/Submission3320/Authors" ], [ "ICLR.cc/2025/Conference/Submission3320/Authors" ], [ "ICLR.cc/2025/Conference/Submission3320/Reviewer_ZRAX" ], [ "ICLR.cc/2025/Conference/Submission3320/Authors" ] ], "structured_content_str": [ "{\"title\": \"Rebuttal to Reviewer z7Jy\", \"comment\": \"## Rebuttal\\n**[W1]** Although the proposed approach is simple and effective, I think the major concern for me is the limited discussion in ablations and comparisons, 
which are important for the readers who want to adopt this approach. I'm willing to raise score if the authors can address the concern below:\\n\\n**[A1]** Thanks for the great feedback. We have included more discussions and Augmentation Analysis in Section 4.5. Per your questions:\\n\\n**[W1.1]** As a reader, one may want to know the quality and cost tradeoff of the generated titles and queries. For instance, the paper prompts Llama-70B to make it work; how about using smaller sizes of Llama: 7B, 13B, etc.? And the additional cost of document indexing with LLM document expansion should be reported.\\n\\n**[A1.1]** We have experimented with Llama2-7B, Llama2-70B, Llama3-8B, and Llama3-70B for synthetic query generation. However, these models do not produce significant differences in the generated synthetic queries. Several qualitative examples are provided in Table 11 in the **Appendix**, highlighted in **blue** color. Although we chose 70B in the full experiments, we can also utilize Llama2-7B or Llama3-8B for synthetic query generation to improve cost-effectiveness in practical applications. Moreover, in Table 4, Section 4.5, we present a cost analysis of our method, indicating that, on average, 57% more tokens are generated compared to the original document tokens. This may result in a 57% increase in encoding costs when building the retriever index. Nevertheless, since retriever models are typically small (125 million parameters) and the retriever index is pre-computed once and stored (with no additional storage costs, as the number of retriever indexes remains unchanged in our approach), the overall cost remains limited. \\n\\n&emsp;\\n\\n**[W1.2]** There are no ablations on the three combinations: chunk + query, chunk + title and query + title, which would help readers to choose the combinations with a balanced tradeoff between effectiveness and cost.\\n\\n**[A1.2]** Thanks for the suggestion. 
The effectiveness of these combinations (chunk + query, chunk + title, and query + title) can be assessed by comparing each of our ablation study results (with only a single field: chunk, query, or title) with the results when all three fields are present (the last row) to measure the delta effects (Tables 5\\~7). Furthermore, the chunk component is essential to our framework, particularly for Bi-encoders, as our method is based on the traditional query-chunk embedding similarity system [1,2,3]. The cost associated with titles is negligible, as they typically comprise only a few tokens per document. The primary cost factor is the synthetic query component, where the number of generated tokens averages 57% of the original document tokens (see Section 4.5). However, in our ablation study we find that synthetic queries generally play a pivotal role in enhancing recall performance, so we retain them in our doc-level embedding. It is important to note that since the token generation and retriever index building occur only once throughout the entire process, the overall cost impact is limited.\\n\\n[1] Chen, Tong, Hongwei Wang, Sihao Chen, Wenhao Yu, Kaixin Ma, Xinran Zhao, Dong Yu, and Hongming Zhang. \\\"Dense x retrieval: What retrieval granularity should we use?.\\\" arXiv preprint arXiv:2312.06648 (2023).\\n\\n[2] Finardi, Paulo, Leonardo Avila, Rodrigo Castaldoni, Pedro Gengo, Celio Larcher, Marcos Piau, Pablo Costa, and Vinicius Carid\\u00e1. \\\"The Chronicles of RAG: The Retriever, the Chunk and the Generator.\\\" arXiv preprint arXiv:2401.07883 (2024).\\n\\n[3] Lewis, Patrick, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich K\\u00fcttler et al. 
\"Retrieval-augmented generation for knowledge-intensive nlp tasks.\" Advances in Neural Information Processing Systems 33 (2020): 9459-9474.\"}", "{\"title\": \"Follow up on full BEIR Experiment\", \"comment\": \"As requested by reviewer [**nf7F, ZRAX**], we evaluated on the whole BEIR set using Contriever as the base model and report NDCG@10 along with R@3 and R@10. We can see that the metric improvement by our method is substantial and consistent across each dataset.\\n\\n| | Contriever | | | Contriever* (our method) | | |\\n|---------------|:----------------|:--------|:---------|:--------|:--------|:---------|\\n| | R@3 | R@10 | NDCG@10 | R@3 | R@10 | NDCG@10 |\\n| ArguAna | 0.3030 | 0.5498 | 0.3317 | **0.3172** | **0.6095** | **0.3481** |\\n| FiQA | 0.1895 | 0.2993 | - | **0.3690** | **0.5174** | **0.2866** |\\n| Quora | 0.8653 | 0.9464 | 0.8311 | **0.8687** | **0.9517** | **0.8332** |\\n| Scidocs | 0.1560 | 0.2930 | 0.1700 | **0.2430** | **0.4040** | **0.2460** |\\n| Scifact | 0.5410 | 0.6934 | 0.5310 | **0.6005** | **0.7259** | **0.5796** |\\n| Climate-FEVER | 0.0612 | 0.1199 | 0.0725 | **0.2593** | **0.4541** | **0.2784** |\\n| MS MARCO | 0.6744 | 0.7907 | 0.6314 | **0.7907** | **0.8837** | **0.7210** |\\n| DBPedia | 0.4825 | 0.6425 | 0.4498 | **0.6125** | **0.7750** | **0.5774** |\\n| Touche-2020 | 0.5918 | 0.6939 | 0.5110 | **0.8163** | **0.8776** | **0.6987** |\\n| NFCorpus | 0.3065 | 0.5139 | 0.3233 | **0.5263** | **0.6409** | **0.4808** |\\n| Trec-COVID | 0.5200 | 0.7600 | 0.5083 | **0.8800** | **0.9600** | **0.8294** |\\n| CQADupStack | 0.1639 | 0.2402 | 0.1643 | **0.2948** | **0.6412** | **0.2940** |\"}", "{\"comment\": \"Please refer to the table in the main section for results on BEIR and nDCG@10.\"}", "{\"comment\": \"Thank you for the detailed response and the effort you\u2019ve put into addressing these points.\\n\\n> 1. The synthetic queries are generated only once and are pre-computed into the retriever index, which is stored offline. 
The size of the index remains consistent with other standard dense retrieval methods. Consequently, our approach does not increase latency during inference time.\\n\\nWhile it does not increase latency during the search phase, the overall process introduces additional overhead rather than reducing it. More experimental results are needed to justify the associated costs.\\n\\n> 2. Furthermore, unlike other retrievers that require extensive training, our model does not require additional training. This enables existing retriever models, including smaller ones, to directly leverage the foundational knowledge of LLMs. As a result, training costs are reduced, and the adaptability for users is enhanced.\\n\\nI respectfully disagree with this point. Your method relies on a well-trained retriever as its backbone and benefits from the training it has undergone. To support the claim that no training is required, you could experiment with foundational non-retrieval models like bert-base-uncased, which I don\u2019t think would work.\\n\\nA less stringent setting might involve applying your method to unsupervised retrievers like E5-base-unsupervised or Contriever (unsupervised version), or weaker retrievers like DPR, and comparing their performance after augmentation to SOTA retrievers.\\n\\n> In addition, inspired by another reviewer, we summarize a table to estimate the cost of our approach (using Llama 7B for generation) compared to other LLM-retrievers like RepLlama [3] and MistralE5 [4].\\n\\nI agree that this addition is helpful for assessing the cost of your approach. \\n\\n- Could you elaborate further on the specifics? For instance, what is the precise difference between O(2x7B)x and (2x7B)x? The distinction feels counterintuitive. 
While they use LLMs for embedding, your approach utilizes LLMs for both generation and embedding.\\n\\n- Additionally, I believe the encoder-only retriever serves as a more natural baseline for comparison rather than the LLM-based retriever.\\n\\n> Sure, we are evaluating the performance on MSMARCO and FEVER and will attach the performance here later. The results will be included in the final paper.\\n\\nI would recommend evaluating on the entire BEIR benchmark for a more comprehensive assessment, as is standard practice in prior IR research.\"}", "{\"comment\": \"Thanks for your questions; we appreciate the discussion here. First of all, we'd like to briefly summarize our proposed method here to avoid any miscommunication or confusion.\\n1) Given a pre-existing retriever model and a foundational LLM.\\n2) Use the LLM to generate the query field and title field (optional). The LLM is **ONLY** applied at this step.\\n3) We use the existing retriever model to compute the chunk embedding, query embedding, and title embedding, then calculate the doc-level embedding described in eq. (1), and store it to construct the retriever index. The total number of retriever index entries does not increase.\\n4) Inference is the same as the standard dense retrieval process. Specifically, only the given retrieval model is employed to compute the user query embedding. From this point onward, the procedure follows the conventional retrieval setup.\\n5) Note that the given retrieval model is not fine-tuned or further trained at all.\\n\\n&emsp;\\n\\n> While it does not increase latency during the search phase, the overall process introduces additional overhead rather than reducing it. More experimental results are needed to justify the associated costs.\\n\\nYes, we acknowledge the additional overhead here. 
Our paper seeks to propose a model-agnostic framework that enhances the recall performance of existing retrievers without requiring further training (fine-tuning), while recognizing that the trade-off involves the number of tokens generated. We would appreciate it if you could specify the types of experiments you are interested in here.\\n\\n&emsp;\\n\\n> I respectfully disagree with this point. Your method relies on a well-trained retriever as its backbone and benefits from the training it has undergone. To support the claim that no training is required, you could experiment with foundational non-retrieval models like bert-base-uncased, which I don\\u2019t think would work.\\nA less stringent setting might involve applying your method to unsupervised retrievers like E5-base-unsupervised or Contriever (unsupervised version), or weaker retrievers like DPR, and comparing their performance after augmentation to SOTA retrievers.\\n\\nHere we mean no additional training or fine-tuning will be needed. Practically, users often continue to fine-tune their retrieval models using augmented datasets, which may include data generated by LLMs. The trade-off here involves balancing the cost-effectiveness of token generation against improvements in recall performance. Alternatively, to enhance recall performance, users may need to either continue training their retrieval models or employ a larger base model, which might also necessitate ongoing training.\\n\\n&emsp;\\n\\n> I agree that this addition is helpful for assessing the cost of your approach.\\nCould you elaborate further on the specifics? For instance, what is the precise difference between O(2x7B)x and (2x7B)x? The distinction feels counterintuitive. 
While they use LLMs for embedding, your approach utilizes LLMs for both generation and embedding.\\nAdditionally, I believe the encoder-only retriever serves as a more natural baseline for comparison rather than the LLM-based retriever.\\n\\nO(2x7B)x $\\approx$ (2x7B)x (for token generation) + (2x125Mx1.6)x (for encoding tokens)\\n\\n1.6 includes the ~60% more tokens we generated. To be clear, our approach **ONLY** utilizes LLMs for generation (**NOT** for embedding).\\n\\nFurthermore, the objective of this paper is to utilize the foundational knowledge inherent in LLMs to enhance existing recall performance. Employing LLM-based retrievers represents an alternative approach to improving retriever performance by capitalizing on the foundational knowledge of LLMs, and we include this as a point of comparison.\\n\\nMoreover, I recommend that we concentrate on the recall performance of our proposed method, as we do not assert that our approach is without cost. Our current discussion appears to be diverging from the primary aim of the paper. In practical terms, this method is particularly well-suited for enterprise search applications where the search domain is restricted and high recall performance is essential.\\n\\n&emsp;\\n\\n> I would recommend evaluating on the entire BEIR benchmark for a more comprehensive assessment, as is standard practice in prior IR research.\\n\\nSure, we'll include the entire BEIR benchmark then. But given the limited time and compute resources we have now, we will probably attach the results here first and update the final revision of the paper.\\n\\n&emsp;\\n\\nFurthermore, if our actions and responses have satisfactorily addressed your questions and concerns, we would greatly appreciate it if you could consider increasing your rating of our paper. Your support would be highly valued!\"}", "{\"title\": \"Reply\", \"comment\": \"
Regarding Q3, I appreciate that it's a standard approach in industry. I still believe it is more common to fully leverage the document (i.e., the 100-word passage) rather than further chunking it into chunk_size=64. However, I think a further revision of the writing should be enough.\\n\\nFor W2, I appreciate the purpose of the ablation study in isolating the effect of each term. While empirical analysis could be performed, I think the left term does influence the right term, making the results not entirely interpretable.\\n\\nFor W3, DRAGON evaluates on the full BeIR and ColBERTv2 does so on 13 datasets (BeIR-13). (For ColBERT, do you mean v2 https://arxiv.org/pdf/2112.01488?). When studying BeIR, I believe it is necessary to either provide the full results or provide a justification beyond random selection. Also, I was wondering why nDCG@10, which is the most common metric for BeIR, was not reported.\"}", "{\"title\": \"Rebuttal to Reviewer ZRAX - Part 2\", \"comment\": \"**[W3]** Some presentation issues: (i) The inconsistent capitalization of \\\"Bi-encoder\\\" is concerning, with both \\\"Bi-encoder\\\" and \\\"bi-encoder\\\" used interchangeably throughout the paper, which appears unprofessional. Additionally, on line 19, to my knowledge, ColBERTv2 falls under the category of bi-encoders but employs a more complex relevance scoring process. (ii) Certain findings in sections 3.1.2, 3.1.3, and others could be more effectively presented in an analysis section following the experiments section, providing a clearer flow and focused discussion on results.\\n\\n**[A3]** Thanks for pointing this out. We have corrected \\\"Bi-encoder\\\" in our revised version and also put more analysis in Section 4.5 per your suggestion (in **blue** color).\\nFor ColBERT, they describe themselves as late-interaction models in their paper [1]. The primary distinction between ColBERT and a Bi-encoder lies in their handling of document indexing and query processing. 
A Bi-encoder can precompute its document index, requiring only the encoding of the query during inference, followed by the application of an approximate nearest neighbor (ANN) algorithm for search. In contrast, ColBERT cannot easily precompute in this manner due to its complex relevance scoring process.\\n\\n[1] Khattab, Omar, and Matei Zaharia. \\\"Colbert: Efficient and effective passage search via contextualized late interaction over bert.\\\" In Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval, pp. 39-48. 2020.\\n\\n&emsp;\\n\\n**[Q2]** Could the authors provide an analysis of the costs associated with using an LLM to enhance these tasks, such as the number of tokens or text checks required for generation?\\n\\n**[A5]** Thanks for your suggestion. We have added a new Table 4 in Section 4.5, highlighted in **blue** color, which summarizes all the statistics related to the materials generated by the large language model (LLM), including the number of queries and tokens. Additionally, our method does not require text checks during generation; instead, we simply parse the LLM outputs to construct the document-level embedding. As mentioned above, the generated tokens constitute approximately 57% of the original document size. However, as these synthetic queries are pre-generated for augmentation and then pre-computed for retriever indexing only once throughout the entire process, the overall cost remains limited, particularly when the retriever model's parameter size is small (125 million). 
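As an illustration of the Bi-encoder workflow described above (precompute the document index offline; at query time, encode only the query and run a nearest-neighbor search), here is a minimal sketch. The toy 2-d vectors and the dictionary-based encoder are illustrative stand-ins for a real encoder model and an ANN library, not part of the paper:

```python
import numpy as np

def build_index(docs, encode):
    # Offline step: a Bi-encoder embeds every document once; the
    # resulting matrix is the precomputed retrieval index.
    return np.stack([encode(d) for d in docs])

def search(query, index, encode, k=1):
    # Online step: embed only the query, then score it against the
    # precomputed index (brute-force cosine here; a real system would
    # hand this off to an ANN library such as FAISS).
    q = encode(query)
    sims = index @ q / (np.linalg.norm(index, axis=1) * np.linalg.norm(q))
    return np.argsort(-sims)[:k].tolist()

# Toy 2-d "embeddings" standing in for a real encoder.
toy = {"doc_a": np.array([1.0, 0.0]),
       "doc_b": np.array([0.0, 1.0]),
       "query about a": np.array([0.9, 0.1])}
index = build_index(["doc_a", "doc_b"], toy.__getitem__)
top = search("query about a", index, toy.__getitem__, k=1)  # -> [0]
```

ColBERT, by contrast, keeps per-token document representations, so its late-interaction scoring cannot be collapsed into this single precomputed matrix.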
\\n\\n&emsp;\\n\\n**[Q3]** It would be interesting to see the performance when using the original query to search against the pseudo queries, to quantify any potential leakage from the LLM (i.e., the ability to recover the query based on the document).\\n\\n**[A6]** As mentioned above, Table 4 in Section 4.5 includes the Match($q^*$) values (illustrating the frequency of using the original query to search against the pseudo queries), which are predominantly zero across all evaluation sets, except for FIQA (1.0%) and Quora (6.2%). This indicates that there is no or very little data leakage in the LLM, and that the significant improvement in the retrieval result does not come from data leakage.\"}", "{\"title\": \"Rebuttal to Reviewer ZRAX\", \"comment\": \"## Rebuttal\\n**[W1]** The cost of this solution appears too high, given that it relies on LLaMA-70B, and all documents require query generation. Additionally, I have concerns regarding potential data leakage in the LLM, as it may have previously encountered these query-document pairs, potentially generating similar queries based on the document. This issue could be addressed if the authors provided examples and analysis.\\n\\n**[A1]** Thanks for the feedback. In Section 4.5, we have included an Augmentation Analysis, highlighted in **blue** color, to discuss both cost-effectiveness and the query match ratio, Match($q^*$). This ratio is defined as the number of search queries in the eval set that also appear among the synthetic relevant queries, divided by the total number of search queries in the eval set. The Match($q^*$) values are predominantly zero across all evaluation sets, except for FIQA (1.0%) and Quora (6.2%). This indicates that there is no or little data leakage, and the significant improvement in the retrieval result does not come from data leakage. Besides, Table 11 in the Appendix provides several examples of synthetic queries generated by different LLMs. 
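For concreteness, the Match($q^*$) ratio defined above amounts to a normalized set intersection. The sketch below illustrates the computation; the lowercase/whitespace normalization is our assumption, not necessarily the paper's exact matching rule:

```python
def match_ratio(eval_queries, synthetic_queries):
    # Fraction of eval-set search queries that also appear among the
    # LLM-generated synthetic relevant queries (after normalization).
    norm = lambda q: " ".join(q.lower().split())
    synthetic = {norm(q) for q in synthetic_queries}
    hits = sum(1 for q in eval_queries if norm(q) in synthetic)
    return hits / len(eval_queries)

# 1 of 4 eval queries overlaps with the synthetic pool -> 0.25.
ratio = match_ratio(
    ["what is diversification?", "best etf to buy", "how do bonds work", "roth vs 401k"],
    ["What is diversification?", "how to budget monthly"],
)
```

A ratio near zero, as reported for most evaluation sets, means the LLM is almost never regenerating the benchmark's own queries.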
\\n\\nIn addition, it is not necessary to use the Llama-70B model. As demonstrated in Section 4.5 and Table 11, Llama-7B and Llama-8B can perform equivalently at a lower cost. Moreover, the generated tokens constitute approximately 57% of the original document size. However, since these synthetic queries are pre-generated for augmentation and then pre-computed for retriever indexing only once throughout the entire process, the overall cost remains limited, particularly when the retriever model's parameter size is small (125 million). \\n\\n| | |Original Documents| | |Generated Synthetic Relevant Queries| | | | | |\\n|------------|-------|--------|-----------|----------|-----------|--------------|---------|------------|-------------|----------|\\n|Dataset|Subset|$N_D$(in K)|$N_{T_D}$(in K)|$N_{T_D}/N_D$|$N_{q^*}$(in K)|$N_{T_{q^*}}$(in K)|$N_{q^*}/N_D$|$N_{T_{q^*}}/N_D$|$N_{T_{q^*}}/N_{q^*}$|Match($q^*$)\\\\%|\\n|BEIR|ArguAna|9|1,782|205|46|684|5|79|15|0|\\n||FIQA|58|9,470|164|305|4,360|5|76|14|1.0|\\n||Quora|523|8,404|16|3,123|40,947|6|78|13|6.2|\\n||SciDocs|25|5,365|212|160|2,580|6|102|16|0|\\n||SciFact|5|1,548|299|32|618|6|119|19|0|\\n||CQADEnglish|40|4,251|106|179|2,987|4|74|17|0|\\n||CQADPhysics|38|6,992|182|184|3,232|5|84|18|0|\\n|LoTTE|Lifestyle|119|21,639|181|664|9,866|6|83|15|0|\\n||Recreation|167|26,988|162|902|13,215|5|79|15|0|\\n||Science|1,694|400,544|236|8,461|159,901|5|94|19|0|\\n||Technology|662|117,940|178|7,031|105,610|11|159|15|0|\\n||Writing|200|29,031|145|1,027|15,364|5|77|15|0|\\n\\n&emsp;\\n\\n**[W2 & Q1]** The inference process requires weighting strategies to combine the relevance between the query and different fields of the document. I have concerns about the scalability of this solution, as the presented dataset contains a relatively small number of documents (see Q1).\\nI am curious whether this method could be effective on larger document benchmarks, such as MSMARCO and FEVER within the BEIR benchmark. 
Could the authors provide results on these datasets?\\n\\n**[A2]** Thanks for the feedback. Our weights are hyperparameters determined using a dev set (BEIR-ArguAna) without extensive tuning, and they remain fixed across other evaluation sets. We have observed that these weight selections generalize effectively across datasets (see Section 4.3, highlighted in **orange** color). Additionally, as shown in Table 4 of Section 4.5, LoTTE-Science comprises 1.7 million documents, and LoTTE-Technology contains 0.7 million documents, both of which are substantial in size. The recall performance improvements are also significant across various base models on these two evaluation sets. Consequently, we anticipate that our method will yield significant recall performance enhancements on MSMARCO and FEVER across different base models.\"}", "{\"comment\": \"Thanks for the response. After reading other reviewers' concerns, I also agree that NDCG@10 should be reported (I'm not sure if this approach can improve NDCG@10 consistently over all the datasets, since usually we observe a tradeoff between ranking accuracy, such as NDCG@10 and MRR@10, and recall). If there is no consistent improvement on NDCG@10 across all the datasets, the paper should explicitly mention that the approach mainly improves recall of the retrieval but still report NDCG@10 for reference, which I think is also important for readers who want to consider this before adopting your approach. If this is the case (I'm not asking for new experiments at this stage), it would be better to conduct a reranking comparison over the retrieved candidates from the models with and without the expansion in the future, so that the main benefits of the approach (i.e., improving recall) can be clearly demonstrated.\"}", "{\"comment\": \"Thanks for the response. I think the remaining concern for me is A1.3, where I think the index costs are not compared fairly. 
I think the main cost of this approach is generating queries and title expansions using Llama-70B; however, in the table in A1.3, the FLOPs for the proposed approach are only from encoding the whole text into embeddings, without counting query and title expansion, which I think is the key tradeoff a user would consider when applying this approach. I've raised the score, but I think these numbers should be correctly reported, or correct me if I'm wrong.\"}", "{\"title\": \"Rebuttal to Reviewer z7Jy - Part 2\", \"comment\": \"**[W1.3]** A comparison with LLM-based retrievers, such as RepLlama[1] or MistralE5[2], in terms of retrieval effectiveness, query latency and indexing time, can provide more useful information for readers to choose which approach to use.\\n\\n**[A1.3]** We summarize the comparison in a table below and also included it in Table 11 with **violet** color in the **Appendix** due to the page limit. For training FLOPS, inference FLOPS, and indexing time FLOPS, they are estimated based on the OpenAI scaling law paper [1], which uses $C_\\\\text{forward} \\\\approx 2N$ and $C_\\\\text{forward+backward} \\\\approx 6N$. We can see both RepLlama and MistralE5 incur significantly higher costs due to the necessity of training on synthetic tokens and their substantially larger model parameters. Additionally, the query latency for RepLlama and MistralE5 is elevated because of the increased number of FLOPs required by their larger model sizes. 
Despite the need to encode approximately 0.6 times more tokens (as detailed in Section 4.5, where we generate 57% more tokens) to construct the retriever index, our method still results in lower indexing time FLOPs compared to RepLlama and MistralE5, due to their larger model size.\\n\\n|Method|Model Size|Model Architecture|Requires Training|Training FLOPS on Generated Tokens|Indexing Time FLOPS on Document Tokens|Inference FLOPS on User Query|\\n|-----------------|------------|----------|-------|------------|---------|------|\\n|RobertaRetriever+LLMAugmentedRetrieval|125M|encoder-only|No|0|(2\\\\*125M*1.6)x|(2*125M)x|\\n|RepLlama|7B|decoder-only|Yes|(6*7B)x|(2*7B)x|(2*7B)x|\\n|MistralE5|7B|decoder-only|Yes|(6*7B)x|(2*7B)x|(2*7B)x|\\n\\n*For training FLOPS, inference FLOPS, and indexing time FLOPS in the table, they are estimated based on the OpenAI scaling law paper [1].\\n\\n[1] Kaplan, Jared, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. \\\"Scaling laws for neural language models.\\\" arXiv preprint arXiv:2001.08361 (2020).\\n\\n&emsp;\\n\\n**[W1.4]** Can we apply this approach to multilingual retrieval tasks?\\n\\n**[A1.4]** Yes, our approach is model-agnostic and language-agnostic. It utilizes an encoder model to compute text embeddings, and as long as this foundational encoder model is capable of handling multilingual embeddings, such as XLM-RoBERTa, our approach can be effectively applied to multilingual retrieval tasks.\\n\\n&emsp;\\n\\n**[W2]** Although the proposed method shows improvements over existing SoTA retrievers, the proposed approach is a bit incremental. I think the approach to augment the document is very similar to the previous work [3][4]. The authors should make a comparison with the previous work in Related Work to make the contribution more clear.\\n\\n**[A2]** Thanks for the suggestions. 
We have made our contributions clearer in the Related Work section, in **violet** color, and also included these two citations. In summary, prior research primarily employs a fine-tuned model for query generation [4] or utilizes generated queries for training retriever models [3]. In contrast, our approach is training-free, requiring no fine-tuning, and leverages the foundational knowledge of LLMs for query generation, as well as the foundational knowledge of retrievers for calculating similarity scores. By eliminating the need for training, we can minimize costs and ensure that the method generalizes effectively across various scenarios.\\n\\n[3] Rodrigo Nogueira and Jimmy Lin. 2019. From doc2query to docTTTTTquery.\\n \\n[4] Yongqi Li, Nan Yang, Liang Wang, Furu Wei, and Wenjie Li. 2023. Multiview identifiers enhanced generative retrieval.\\n\\n&emsp;\\n\\n**[Q1]** Clarification on implementation details: according to Figure 1, for each passage coming from the same document, it seems you use the same synthetic queries and titles? However, in the experiments, you mention you use the original chunk from the datasets; then, how do you know which chunks are coming from the same document and use the document to generate synthetic queries and titles for those chunks?\\n\\n**[A3]** Thanks for your question. The chunks are extracted and chopped from the document, establishing a known many-to-one (M-to-1) relationship between chunks and their corresponding documents. You are correct in noting that we use the same synthetic queries and titles for each document across all corresponding chunks, as we use the entire document to generate these queries and titles. 
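The chunk-to-document bookkeeping described in [A3] can be sketched as follows. Whitespace tokenization and the 64-token chunk size mirror the discussion in this thread, but the function name and details are our own illustration; a real pipeline would use the retriever's tokenizer:

```python
def chunk_document(doc_id, text, chunk_size=64):
    # Chop a document into fixed-size token chunks, keeping the M-to-1
    # mapping from each chunk back to its source document, so that the
    # document's synthetic queries and titles can be shared by all of
    # its chunks at indexing time.
    tokens = text.split()  # stand-in for the retriever's real tokenizer
    return [(doc_id, " ".join(tokens[i:i + chunk_size]))
            for i in range(0, len(tokens), chunk_size)]

# 130 tokens with chunk_size=64 -> 3 chunks, all pointing back to "d1".
chunks = chunk_document("d1", " ".join(f"tok{i}" for i in range(130)))
```

Because every chunk carries its `doc_id`, looking up the shared synthetic queries and titles for a chunk is a constant-time dictionary access.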
Furthermore, in Equation (1) of Section 3.2.1, all queries and titles are aggregated into a single embedding for each document and represented as $s(q, d)$, thereby maintaining a deterministic many-to-one relationship between the chunk embeddings and the combined embeddings of queries and titles.\"}", "{\"summary\": \"This paper proposes a novel retrieval framework to improve the performance of current retrieval models. The framework achieves this by enriching document embeddings with information from large language models (LLMs), such as synthetic queries and titles. The proposed document-level embedding approach integrates this augmented information with existing document chunks, and it's shown to be effective for both bi-encoder and late-interaction retrieval models. The authors demonstrate the framework's effectiveness by achieving state-of-the-art results on benchmark datasets LoTTE and BEIR.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper is clear and the results are good.\", \"weaknesses\": \"It's a rather straightforward model. The novelty is simply calling some LLM to generate titles and queries for each doc, which the retriever will later use to retrieve the document. I think ICLR requires more model novelty.\", \"questions\": \"All clear to me.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal to z7Jy\", \"comment\": \"Thanks for your question! Yes, in Table A1.3, the costs of query (and title) generation are excluded, as we assume the same source of synthetic queries (and titles) across all three scenarios. This allows us to focus on comparing the remaining costs presented in the table.\\n\\nWe acknowledge that the majority of the expenses are likely to arise from query (and title) generation. 
Should we include these costs, they would be equivalent to (2\\*70B)x for all scenarios, assuming the use of the Llama-70B model for query generation. However, as indicated in A1.1, our findings suggest minimal differences in query augmentation when utilizing either the 70B or 7B models. Consequently, the cost of query generation could potentially be reduced to (2\\*7B)x in terms of tokens generated. If we use the 7B model for query generation, the table above becomes the one below (changes highlighted in bold).\\n\\n|Method|Model Size|Model Architecture|**Synthetic Queries Generation**|Requires Training|Training FLOPS on Generated Tokens|Indexing Time FLOPS on Document Tokens|Inference FLOPS on User Query|\\n|-----------------|------------|----------|-------|-------|-----------|---------|------|\\n|RobertaRetriever+LLMAugmentedRetrieval|125M|encoder-only|**(2*7B)x**|No|0|(2\\\\*125M*1.6)x|(2*125M)x|\\n|RepLlama|7B|decoder-only|**(2*7B)x**|Yes|(6*7B)x|(2*7B)x|(2*7B)x|\\n|MistralE5|7B|decoder-only|**(2*7B)x**|Yes|(6*7B)x|(2*7B)x|(2*7B)x|\"}", "{\"comment\": \"Please refer to the table in the main section for results on BEIR and nDCG@10.\"}", "{\"title\": \"Rebuttal to Reviewer z7Jy - Part 3\", \"comment\": \"**[Q2]** Which dev set is used to tune the hyperparameters $w_{query}$, $w_{title}$, $w_{chunk}$?\\n\\n**[A4]** Thanks for the question. We use the dev set of BEIR-ArguAna to choose all the hyperparameters (though not heavily tuned) and fix the hyperparameters across all the evaluation sets. The hyperparameters seem to generalize really well. In addition, $w_{query}$, $w_{title}$, $w_{chunk}$ are changed to $w_{q^*}$, $w_{t^*}$, $w_c$ for better presentation.\\n\\n&emsp;\\n\\n**[Q3]** From the prompts, it seems that the models are instructed to output multiple relevant queries; however, only one generated query is used for document expansion. How do you choose the query among all the generated ones? 
Or if you use multiple generated queries, how do you combine them?\\n\\n**[A5]** Sorry for the confusion here. Actually, all synthetic queries are utilized in the calculation of the document-level embedding. These queries are first encoded and subsequently averaged into a single embedding vector, which is then integrated into the document embedding. We have improved Equations (1) and (2) in **teal** color in Section 3.2.1 for better presentation.\"}", "{\"title\": \"Rebuttal to Reviewer nf7F\", \"comment\": \"## Rebuttal\\n**[W1]** There lacks detailed analysis of how (synthetically generated) document fields help the retrieval. For example, how many queries were generated per document on average? The limitations section mentions that the augmented text may be as long as the original text, which suggests a substantial number of queries\\u2014clarification here would be valuable. Also, could you include some qualitative examples?\\n\\n**[A1]** Thanks for the great feedback. We have incorporated a quantitative summary table in Section 4.3 in **blue** color, which includes additional information such as $N_{q^*}/N_D$ (the average number of queries generated per document), $N_{T_{q^*}}/N_D$ (the average number of generated tokens per document), and $N_{T_D}/N_D$ (the average number of tokens in the original document). \\n\\nGenerally, the value of $N_{q^*}/N_D$ is approximately 6, indicating that, on average, six synthetic questions are generated per document. Additionally, the average $N_{T_{q^*}}/N_{T_D}$ is 57%, suggesting that the generated tokens slightly exceed half of the original tokens. 
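Combining [A5] above with the weighted similarity discussed earlier in this thread, one possible reading of the document-level score is sketched below. The max over chunk similarities and the exact weighting are assumptions on our part (Equation (1) itself is not reproduced in this thread), so treat this as illustrative rather than the paper's implementation:

```python
import numpy as np

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def doc_level_score(q, chunk_embs, synth_query_embs, title_emb,
                    w_c=1.0, w_q=1.0, w_t=1.0):
    # Chunk term: similarity of the user query to the best-matching chunk
    # (an assumed aggregation; other choices, e.g. a mean, are possible).
    chunk_term = max(cos(q, c) for c in chunk_embs)
    # Field term s(q, d): all synthetic queries are encoded and averaged
    # into one vector (per A5), combined with the title embedding.
    q_star = np.mean(synth_query_embs, axis=0)
    field_term = w_q * cos(q, q_star) + w_t * cos(q, title_emb)
    return w_c * chunk_term + field_term

q = np.array([1.0, 0.0])
score = doc_level_score(q,
                        chunk_embs=[np.array([1.0, 0.0]), np.array([0.0, 1.0])],
                        synth_query_embs=[np.array([1.0, 0.0])],
                        title_emb=np.array([1.0, 0.0]))  # -> 3.0
```

Because $q^*$ and the title embedding are fixed per document, they can be folded into a single precomputed document vector, which keeps the index size unchanged.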
Furthermore, we have included qualitative examples in Table 11 within the **Appendix** (due to page size constraints) to compare the quality of generated queries by different large language models (LLMs) and to illustrate how these synthetic queries can enhance the informational content of the documents.\\n\\n| | |Original Documents| | |Generated Synthetic Relevant Queries| | | | | |\\n|------------|-------|--------|-----------|----------|-----------|--------------|---------|------------|-------------|----------|\\n|Dataset|Subset|$N_D$(in K)|$N_{T_D}$(in K)|$N_{T_D}/N_D$|$N_{q^*}$(in K)|$N_{T_{q^*}}$(in K)|$N_{q^*}/N_D$|$N_{T_{q^*}}/N_D$|$N_{T_{q^*}}/N_{q^*}$|Match($q^*$)\\\\%|\\n|BEIR|ArguAna|9|1,782|205|46|684|5|79|15|0|\\n||FIQA|58|9,470|164|305|4,360|5|76|14|1.0|\\n||Quora|523|8,404|16|3,123|40,947|6|78|13|6.2|\\n||SciDocs|25|5,365|212|160|2,580|6|102|16|0|\\n||SciFact|5|1,548|299|32|618|6|119|19|0|\\n||CQADEnglish|40|4,251|106|179|2,987|4|74|17|0|\\n||CQADPhysics|38|6,992|182|184|3,232|5|84|18|0|\\n|LoTTE|Lifestyle|119|21,639|181|664|9,866|6|83|15|0|\\n||Recreation|167|26,988|162|902|13,215|5|79|15|0|\\n||Science|1,694|400,544|236|8,461|159,901|5|94|19|0|\\n||Technology|662|117,940|178|7,031|105,610|11|159|15|0|\\n||Writing|200|29,031|145|1,027|15,364|5|77|15|0|\\n\\n&emsp;\\n\\n**[W2]** In the ablation study, weights for each field are exclusively set to 1.0. However, the \\u201cleft term\\u201d at equation (1) remains active, meaning the query-chunks still affect the results. This raises concerns about whether the ablation study truly isolates the impact of each field, especially for the query and title.\\n\\n**[A2]** Thanks for the great feedback. The purpose of the ablation study presented here is to specifically evaluate the impact of the \\\"right term\\\", $s(q, d)$ in equation (1), as the \\\"left term\\\" pertains to the conventional query-chunk embedding similarity. For clarity, we have rewritten this term in **teal** color. 
Furthermore, since the influence of synthetic queries and titles is confined to $s(q, d)$, we can ensure that the effects of queries and titles are isolated in this ablation study, thereby focusing on their impact on the similarity between the query and the document.\\n\\n&emsp;\\n\\n**[W3]** The results are evaluated only on the subset of the BeIR benchmark without any explanations.\\n\\n**[A3]** Actually, our results are evaluated not only on BEIR but also on the LoTTE dataset. The selection of BEIR and LoTTE is due to their widespread adoption in both the DRAGON [1] and ColBERT [2] papers. We aim to demonstrate that our LLM-augmented retrieval method can achieve advancements over these SoTA models in their corresponding evaluation sets. In addition, we randomly selected subsets from the BEIR dataset and observed significant improvements in recall performance using our method. Therefore, we believe it is unnecessary to include all evaluation sets within BEIR.\\n\\n[1] Lin, Sheng-Chieh, Akari Asai, Minghan Li, Barlas Oguz, Jimmy Lin, Yashar Mehdad, Wen-tau Yih, and Xilun Chen. \\\"How to train your dragon: Diverse augmentation towards generalizable dense retrieval.\\\" arXiv preprint arXiv:2302.07452 (2023).\\n[2] Khattab, Omar, and Matei Zaharia. \\\"ColBERT: Efficient and effective passage search via contextualized late interaction over BERT.\\\" In Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval, pp. 39-48. 2020.\\n\\n&emsp;\\n\\n**[Q1]** How was the chunk size determined?\\n\\n**[A4]** In our experiment, we set the chunk size to 64 tokens for Bi-encoders as a hyperparameter. This chunk size, along with other hyperparameters, was selected based on the dev set of BEIR-ArguAna, although these parameters were not extensively tuned. 
We then applied these settings consistently across all datasets, and they appear to generalize well across various evaluation sets.\"}", "{\"summary\": \"The paper presents an approach to document expansion using LLMs to enhance the zero-shot retrieval effectiveness of existing state-of-the-art bi-encoder retrievers (Contriever, DRAGON and ColBERTv2).\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The proposed method is straightforward and easy to implement and shows significant improvements for existing state-of-the-art retrievers.\\n2. The proposed approach is useful, which combines the document embedding from different sources and forms a single document embedding. Thus, the index size is still the same for single-vector dense retrieval.\", \"weaknesses\": \"1. Although the proposed approach is simple and effective, I think the major concern for me is the limited discussion in ablations and comparisons, which are important for the readers who want to adopt this approach. I'm willing to raise score if the authors can address the concern below: (1) As a reader, one may want to know the quality and cost tradeoff of the generated titles and queries. For instance, the paper prompts Llama-70B to make it work; how about using smaller size of Llama; 7B, 13B etc.? and the additional cost of document indexing with LLM document expansion should be reported. (2) There are no ablations on the three combinations: chunk + query, chunk + title and query + title, which helps readers to choose the combinations with a balanced tradeoff between effectiveness and cost (3) A comparison with LLM-based retrievers, such as RepLlama[1] or MistralE5[2], in terms of retrieval effectiveness, query latency and indexing time, can provide more useful information for readers to choose which approach to use. (4) Can we apply this approach to multilingual retrieval tasks?\\n2. 
Although the proposed method shows improvements over existing SoTA retrievers, the proposed approach is a bit incremental. I think the approach to augment the document is very similar to the previous work [3][4]. The authors should make a comparison with the previous work in Related Work to make the contribution more clear.\\n3. Some experimental details are missing (see Questions).\\n\\n[1] Xueguang Ma, Liang Wang, Nan Yang, Furu Wei, and Jimmy Lin. 2023. Fine-tuning llama for multi-stage text retrieval.\\n[2] Liang Wang, Nan Yang, Xiaolong Huang, Linjun Yang, Rangan Majumder, and Furu Wei. 2023. Improving text embeddings with large language models.\\n[3] Rodrigo Nogueira and Jimmy Lin. 2019. From doc2query to docTTTTTquery.\\n[4] Yongqi Li, Nan Yang, Liang Wang, Furu Wei, and Wenjie Li. 2023. Multiview identifiers enhanced generative retrieval.\", \"questions\": \"1. Clarification on implementation details: according to Figure 1, for each passage coming from the same document, it seems you use the same synthetic queries and titles? However, in the experiments, you mention you use the original chunk from the datasets; then, how do you know which chunks are coming from the same document and use the document to generate synthetic queries and titles for those chunks?\\n2. Which dev set is used to tune the hyperparameters, $w_{query}, w_{title}, w_{chunk}$?\\n3. From the prompts, it seems that the models are instructed to output multiple relevant queries; however, only one generated query is used for document expansion. How do you choose the query among all the generated ones? 
Or if you use multiple generated queries, how do you combine them?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the detailed response.\", \"for_w1\": \"> On average 6 synthetic relevant queries are generated per document and the token count in the generated synthetic queries is comparable to the token count in the original documents. (l 438)\\n\\nI still believe the cost here is relatively high. Existing retrievers typically train on tens of thousands of text pairs, while this method requires generating millions of text chunks, and this is not even the full extent of the cost.\\n\\n---\", \"for_w2\": \"My concern remains. BEIR is a widely used benchmark in IR research, and I do not see a compelling reason to avoid performing tasks on other larger datasets within BEIR. In practice, million-scale corpora are quite common, and evaluating the method's performance in such scenarios would provide more valuable insights.\"}", "{\"summary\": \"This paper introduces a novel, model-agnostic approach for document embedding, leveraging an augmented document field comprising synthetically generated queries, titles, and document chunks. With a new similarity function, the method computes a similarity score between the query and document field as a proxy for query-document relevance. The method outperforms using naive document embeddings from three existing retrievers (Contriever, DRAGON, ColBERTv2) on two benchmarks (BeIR, LoTTE).\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"I like the idea of enhancing document embeddings prior to the inference step. 
Specifically, this can be pre-computed, offering practical advantages over query expansion methods which have been studied extensively.\", \"Proposed method can be applied to both bi-encoder and token-level late-interaction models, and it achieves consistently strong results across most datasets.\"], \"weaknesses\": [\"There lacks detailed analysis of how (synthetically generated) document fields help the retrieval. For example, how many queries were generated per document on average? The limitations section mentions that the augmented text may be as long as the original text, which suggests a substantial number of queries\\u2014clarification here would be valuable. Also, could you include some qualitative examples?\", \"In the ablation study, weights for each field are exclusively set to 1.0. However, the \\u201cleft term\\u201d at equation (1) remains active, meaning the query-chunks still affect the results. This raises concerns about whether the ablation study truly isolates the impact of each field, especially for the query and title.\", \"The results are evaluated only on the subset of BeIR benchmark without any explanations.\"], \"questions\": [\"How was the chunk size determined?\", \"I suggest a retouch on line 348-350 \\u201caugmentation of document embeddings with LLMs can substantially elevate the recall performance of retrieval models without necessitating additional fine-tuning\\u201d. I don\\u2019t think one model is simply fine-tuned relative to another, but it\\u2019s their complexity that differs.\", \"Could you cite some prior work on chunk-level embeddings (line 246-248)? Using a single document-level embedding, such as truncating, seems to be a common practice. In such cases, the proposed method will increase the document index size by |doc_len| / |chunk_size|.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks. 
Please make this claim explicit, which I think is the key performance tradeoff of the approach: ``We acknowledge that the majority of the expenses are likely to arise from query (and title) generation. Should we include these costs, they would be equivalent to (2*70B)x for all scenarios, assuming the use of the Llama-70B model for query generation.'' However, when mentioning that it can be replaced with Llama-7B with almost no performance degradation, do you report the performance of using Llama-7B as the expansion model somewhere in the paper? I did not see the results.\"}", "{\"comment\": \"Thanks for your comment. We appreciate your perspective on evaluating cost-effectiveness. However, it is important to note that assessing cost-effectiveness is inherently subjective and, more importantly, is not the main claim of our paper. We would be grateful if we could shift our focus to other aspects of the study and would be pleased to address any additional questions you may have.\"}", "{\"comment\": \"Sure, thanks for your comments. We'll include NDCG@10 here later, after we've calculated NDCG@10 on the BEIR sets (and in the final revision of the paper, since the paper deadline is approaching).\"}", "{\"comment\": \"Thanks for the clarification.\\n\\n> The distinction feels counterintuitive. While they use LLMs for embedding, your approach utilizes LLMs for both generation and embedding.\\n\\nApologies for the mistake in my initial comment. I understand that your approach employs LLMs for generation only, with the subsequent retrieval process following a conventional retriever pipeline. My points are:\\n- It is unfair to emphasize your effectiveness over encoder-only retrievers like Contriever while comparing your costs to LLM-based retrievers like RepLlama. 
\\n- The generation cost is several orders of magnitude higher than the embedding cost, making it somewhat misleading to use O() notation to omit such a significant disparity.\\n- The cost is essentially on par with using LLMs to construct synthetic datasets tailored to the corpus and task. A comparable baseline might involve employing a retriever trained on such synthetic datasets.\\n\\n\\nTo clarify, I am not advocating for a strictly low-cost approach or imposing overly harsh standards on cost. In fact, I would appreciate and commend the use of LLMs with increased cost if they significantly enhance effectiveness. **However, I feel that your comparison is unfairly presented and appears biased toward supporting your claim that your method is cost-effective.**\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Hi Reviewer fyxe,\\n\\nWe would like to confirm whether our response adequately addresses your concerns regarding novelty. Thank you very much.\"}", "{\"title\": \"Rebuttal to Reviewer fyxe\", \"comment\": \"## Rebuttal\\n\\n**[W1]** It's a rather straightforward model. The novelty is simply calling some LLM to generate titles and queries for each doc, which the retriever will later use to retrieve the document. I think ICLR requires more model novelty.\\n\\n**[A1]** Thanks for the feedback. To the best of our knowledge, most embedding-based retrieval methods primarily focus on computing embeddings from the original content, such as chunks of the original document, with synthetic queries predominantly used for training purposes. In contrast, our approach introduces several novel aspects:\\n1. We present a **training-free**, straightforward yet effective method that demonstrates improvements over existing state-of-the-art (SoTA) retrievers.\\n2. Our approach maintains inference speed without any sacrifice, and the total cost is minimized as the retriever index is pre-computed only once prior to the inference step. 
Additionally, the index size remains consistent with that of single-vector dense retrieval while including richer information from multiple fields.\\n3. We provide valuable insights, such as the impact of titles on information retrieval and the corresponding methods to employ when titles are absent.\\n4. Our solution is **model-agnostic** (also multilingual-agnostic) and can be applied to both Bi-encoder and token-level late-interaction models.\"}", "{\"title\": \"Rebuttal\", \"comment\": \"# Rebuttal\\n\\nWe sincerely appreciate the insightful feedback provided by all reviewers. We are pleased to note that the reviewers recognize our paper as achieving robust, significant, and substantial results [**nf7F, z7Jy, fyxe, ZRAX**] through a straightforward method [**z7Jy, ZRAX**] that is easy to implement.\\n\\n\\n## Highlights and Common Questions\\nWe summarize several highlights and common feedback as below.\\n\\n\\n### Our novelty is primarily derived from the following aspects\\n1. We present a **training-free**, straightforward yet effective method that demonstrates improvements over existing state-of-the-art (SoTA) retrievers [**nf7F, z7Jy, fyxe, ZRAX**].\\n2. Our approach maintains inference speed without any sacrifice, and the total cost is minimized as the retriever index is pre-computed only once prior to the inference step [**nf7F**]. Additionally, the index size remains consistent with that of single-vector dense retrieval while including richer information from multiple fields [**z7Jy**].\\n3. We provide valuable insights, such as the impact of titles on information retrieval and the corresponding methods to employ when titles are absent [**ZRAX**].\\n4. Our solution is **model-agnostic** (also multilingual-agnostic) and can be applied to both Bi-encoder and token-level late-interaction models [**nf7F**].\\n\\n\\n### We have enhanced the presentation of the paper with the following improvements\\n1. 
We have added a quantitative summary of the augmentation details, including the number of queries/tokens generated [**nf7F, ZRAX**] and the query match ratio [**ZRAX**], in Table 4, Section 4.5 with **blue** color, where we also discuss the cost and latency analysis of our proposed method. In short, the cost impact is limited.\\n2. We have included qualitative examples [**nf7F, z7Jy, ZRAX**] of synthetic queries: Table 11 in the Appendix, highlighted in **blue** color, uses four documents as examples to illustrate the sample synthetic relevant queries generated by LLMs.\\n3. We have improved the description of hyperparameter selection and experiments details [**nf7F, z7Jy**], highlighted in **orange** color in Section 4.3.\\n4. We also improved our notations in equations in **teal** color to address reviewer\\u2019s questions [**z7Jy**] in Section 3.2.1 regarding how synthetic queries work in doc-level embedding.\"}", "{\"comment\": \"Thank you for your response. Over the past two years, there\\u2019s been a noticeable rise in works leveraging LLMs for tasks like data labeling or augmentation. I think it\\u2019s really important to clearly explain why LLMs are being used and to be upfront about the costs and benefits involved\\u2014it\\u2019s just getting a bit tiring seeing this overlooked.\", \"to_clarify\": \"I do not think it is necessary to aim for the lowest cost with the highest gain for acceptance, but it is vital to ensure that comparisons are **fair** and **well-justified**. Many of the concerns and discussions raised above could be more effectively addressed after you present the BEIR and MSMARCO benchmarks. I look forward to seeing the results.\"}", "{\"title\": \"Rebuttal to Reviewer nf7F - Part 2\", \"comment\": \"**[Q2]** I suggest a retouch on line 348-350 \\u201caugmentation of document embeddings with LLMs can substantially elevate the recall performance of retrieval models without necessitating additional fine-tuning\\u201d. 
I don\\u2019t think one model is simply fine-tuned relative to another, but it\\u2019s their complexity that differs.\\n\\n**[A5]** Thanks for the great feedback. Sorry for the confusion here. Our intention was to convey that, in comparison to the base candidates (the same base model without augmented document embeddings), augmentation with LLMs can significantly enhance recall performance. Specifically, the augmentation alone can improve retrieval results without the need for retraining or fine-tuning the base model. However, including this statement here may have led to misunderstandings, as readers might interpret it as a comparison between different models. Therefore, we have removed this sentence to avoid any ambiguity.\\n\\n&emsp;\\n\\n**[Q3]** Could you cite some prior work on chunk-level embeddings (line 246-248)? Using a single document-level embedding, such as truncating, seems to be a common practice. In such cases, the proposed method will increase the document index size by |doc_len| / |chunk_size|.\\n\\n**[A6]** Thanks for the great feedback. Sure I\\u2019ve added citations in Section 3.1.3 in **purple** color and also listed below. Actually document chunking is a widely utilized technique in the fields of information retrieval and retrieval-augmented generation (RAG) [1,2,3]. Companies such as LangChain and LlamaIndex have developed tools to facilitate semantic chunking. The traditional method of chunk-level embedding typically increases the retriever index size by a factor of |doc_len| / |chunk_size|, and it is a standard approach in industry. Our research builds upon this conventional chunk-level embedding method, thereby not adding additional retriever indexes to the system, while significantly enhancing recall performance.\\n\\n[1] Chen, Tong, Hongwei Wang, Sihao Chen, Wenhao Yu, Kaixin Ma, Xinran Zhao, Dong Yu, and Hongming Zhang. 
\\\"Dense x retrieval: What retrieval granularity should we use?.\\\" arXiv preprint arXiv:2312.06648 (2023).\\n\\n[2] Finardi, Paulo, Leonardo Avila, Rodrigo Castaldoni, Pedro Gengo, Celio Larcher, Marcos Piau, Pablo Costa, and Vinicius Carid\\u00e1. \\\"The Chronicles of RAG: The Retriever, the Chunk and the Generator.\\\" arXiv preprint arXiv:2401.07883 (2024).\\n\\n[3] Lewis, Patrick, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich K\\u00fcttler et al. \\\"Retrieval-augmented generation for knowledge-intensive nlp tasks.\\\" Advances in Neural Information Processing Systems 33 (2020): 9459-9474.\"}", "{\"metareview\": \"This paper proposes an LLM-based method for document expansion in document retrieval. It enhances each document with synthetic queries, titles, and chunks generated by the LLM. The proposed method is training-free and model-agnostic because of the use of LLM. The method demonstrates substantial performance improvements across various retrievers (Contriever, DRAGON, ColBERTv2) and benchmarks (LoTTE, BEIR). Reviewers found the approach straightforward (z7Jy, ZRAX) and practical to implement (nf7F, z7Jy), with substantial empirical gains (fyxe, ZRAX).\", \"the_reviewers_also_raised_several_concerns\": \"1. The high inference cost of LLM-generated augmentations and the unfair comparison with previous methods in terms of inference efficiency (z7Jy, ZRAX).\\n2. The limited novelty, as the method is incremental with respect to existing document augmentation techniques (z7Jy, fyxe).\\n3. Concerns about the evaluation metrics (nf7F, z7Jy) and datasets (nf7F, ZRAX).\\n\\nDuring the rebuttal, the authors provided additional results to address concern 3; however, concerns 1 and 2 were not sufficiently addressed. A rejection is recommended. 
We encourage the authors to improve the current version based on reviewers\u2019 feedback for future publication.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers also raised several concerns about high inference cost, limited novelty, and evaluation. During the rebuttal, the authors provided additional results to address the concern about evaluation; however, the concerns about cost and novelty were not sufficiently addressed.\"}", "{\"comment\": \"Please refer to the table in the main section for results on BEIR and nDCG@10.\"}", "{\"comment\": \"Thanks a lot for your questions.\\n\\n> Thank you for your reply, which has addressed W1, Q1, Q2. Regarding Q3, I appreciate it's a standard approach in industry. I still believe it is more common to fully leverage the document (i.e., 100-word passage) rather than further chunking it into chunk_size=64. However, I think a further revision of writing should be enough.\\n\\nChunk size is a hyperparameter that we determined using our dev set and then fixed for evaluation across all sets in this paper. In practical scenarios, a single document may contain thousands of tokens. Creating a single embedding vector for such lengthy texts can lead to information loss; therefore, it is beneficial to split them into smaller chunks. While setting chunk_size=128 (equivalent to approximately 100-word passages) can help reduce the indexing cost of the retriever, the optimal selection of chunk_size is a topic that can be explored independently of this work. The primary objective of our paper is to introduce a novel framework that enhances the recall performance of retrievers without requiring additional training, and we leave the investigation of chunk size optimization for future studies.\\n\\n&emsp;\\n\\n> For W2, I appreciate the purpose of the ablation study in isolating the effect of each term. 
While empirical analysis could be performed, I think the left term does influence the right term, making the results not entirely interpretable.\\n\\nThank you for your comment. Our approach is to add the doc-level embedding (the right term) on top of the current standard dense retrieval approach, i.e., the chunk-level embedding (the left term). Therefore, we keep the left term active while adjusting the other fields (title, query) to see their impacts on the final results. To minimize confusion, we have removed the row corresponding to the case where $w_c=1.0$ in our ablation study and only focus on discussing the impact of the synthetic query and title fields.\\n\\n&emsp;\\n\\n> For W3, DRAGON evaluates on the full BeIR and ColBERTv2 does on 13 datasets (BeIR-13). (For ColBERT, do you mean v2 https://arxiv.org/pdf/2112.01488?). When studying BeIR, I believe it is necessary to either provide the full results or provide a justification beyond random selection. Also, I was wondering why nDCG@10, which is the most common metric for BeIR, was not reported.\\n\\nYes, we mean ColBERTv2. For BEIR, we initially selected a subset of datasets with varying numbers of documents to ensure representativeness. However, we concur that evaluating on the complete BEIR datasets would be beneficial. Given the limited time and compute resources we have now, we will include these evaluations here later and update them in the final version of the paper.\\n\\nFor nDCG@10, we reference the Contriever [1] paper, which states: \\\"While nDCG@10 is the main metric of BEIR, we are more interested in the Recall@100 to evaluate bi-encoders, as our goal is to develop retrievers that can be used in ML systems. 
Moreover, in many settings, retrieved documents can be re-ranked with a more powerful model such as a cross-encoder, thus improving the nDCG@10.\\\"\\n\\nSimilarly, our proposed method was originally designed for the information retrieval component of a Retrieval-Augmented Generation (RAG) system [2]. Typically, we retrieve 3 to 10 documents for each response generation of LLM. The LLM is then better equipped to relate the user query to the most relevant content among the retrieved documents to generate final answers. Therefore, we are more focused on R@3 and R@10. We have also included the nDCG@10 scores on several BEIR subsets for our method below for reference.\\n\\n| Model | Metrics | ArguAna | FIQA | Quora | SciDocs | SciFact |\\n|------------|---------|---------|--------|--------|---------|---------|\\n| Contriever*| R@3 | 0.2468 | 0.3690 | 0.8488 | 0.2440 | 0.5996 |\\n| | R@10 | 0.5825 | 0.5174 | 0.9434 | 0.4030 | 0.7259 |\\n| | nDCG@10 | 0.2691 | 0.3604 | 0.8131 | 0.2460 | 0.5790 |\\n| Dragon* | R@3 | 0.4196 | 0.3950 | 0.9098 | 0.2770 | 0.6393 |\\n| | R@10 | 0.7482 | 0.5353 | 0.9698 | 0.4550 | 0.7638 |\\n| | nDCG@10 | 0.3678 | 0.3853 | 0.8726 | 0.2825 | 0.6290 |\\n\\n\\n[1] Izacard, Gautier, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, and Edouard Grave. \\\"Unsupervised dense information retrieval with contrastive learning.\\\" arXiv preprint arXiv:2112.09118 (2021).\\n\\n[2] Lewis, Patrick, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich K\\u00fcttler et al. \\\"Retrieval-augmented generation for knowledge-intensive nlp tasks.\\\" Advances in Neural Information Processing Systems 33 (2020): 9459-9474.\\n\\n&emsp;\\n\\nFurthermore, if our actions and responses have effectively addressed your questions and concerns, we would greatly appreciate it if you could consider increasing your rating of our paper. 
Your support would be immensely appreciated!\"}", "{\"comment\": \"Thanks for the follow-up question. We have included Table 5 and Table 11 in Section 4.6.1 to indicate that our method can be replaced with Llama2-7b or Llama3-8b with almost no performance degradation (see attached below). In some cases, our Llama2-7b and Llama3-8b even outperform the Llama2-70b model.\\n\\n\\n| Model | Dataset | Metrics | Llama2-7b | Llama2-70b | Llama3-8b | Llama3-70b |\\n|------------|---------|---------|-----------|------------|-----------|------------|\\n| Contriever* | Arguana | R@3 | 0.2425 | 0.2468 | 0.2447 | 0.2596 |\\n| | | R@10 | 0.5583 | 0.5825 | 0.5939 | 0.6110 |\\n| Contriever* | Scifact | R@3 | 0.5870 | 0.5996 | 0.5996 | 0.6231 |\\n| | | R@10 | 0.7106 | 0.7259 | 0.7196 | 0.7430 |\\n| Dragon* | Arguana | R@3 | 0.4132 | 0.3663 | 0.4232 | 0.4289 |\\n| | | R@10 | 0.7269 | 0.6764 | 0.7496 | 0.7624 |\\n| Dragon* | Scifact | R@3 | 0.6303 | 0.6610 | 0.6348 | 0.6528 |\\n| | | R@10 | 0.7520 | 0.7710 | 0.7538 | 0.7592 |\\n\\nIn addition, if we combine the query generation and indexing FLOPs into one column (to make the table cleaner), it will become:\\n|Method|Model Size|Model Architecture|Requires Training|Training FLOPS on Generated Tokens|Synthetic Query Generation + Indexing Time FLOPS on Document Tokens|Inference FLOPS on User Query|\\n|-----------------|------------|-------|-------|-----------|---------|------|\\n|RobertaRetriever+LLMAugmentedRetrieval|125M|encoder-only|No|0|O(2*7B)x|(2*125M)x|\\n|RepLlama|7B|decoder-only|Yes|(6*7B)x|(2*7B)x|(2*7B)x|\\n|MistralE5|7B|decoder-only|Yes|(6*7B)x|(2*7B)x|(2*7B)x|\"}", "{\"summary\": \"This paper proposes an LLM-augmented retrieval framework that uses a large language model (LLM), specifically LLaMA-70B, to generate pseudo-queries for document retrieval. For each document, the framework includes synthetic queries generated by the LLM, existing or LLM-generated titles, and document chunks. 
These text fields are embedded and combined with the query embedding using inner product, followed by weighted sum and max pooling. Evaluations are conducted with Contriever, Dragon, and ColBERT on the LoTTE dataset and a subset of the BEIR benchmark, with considerable improvements.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The motivation is solid, with a straightforward idea: using pseudo-queries to augment document representation, an important direction given the short context windows of current embedding models that limit their effectiveness.\\n\\n2. The observed improvement appears substantial on the presented datasets.\\n\\n3. The authors provide some valuable insights that could be helpful, such as the impact of titles and approaches to handle cases where titles are missing, which aligns well with real-world scenarios.\", \"weaknesses\": \"1. The cost of this solution appears too high, given that it relies on LLaMA-70B, and all documents require query generation. Additionally, I have concerns regarding potential data leakage in the LLM, as it may have previously encountered these query-document pairs, potentially generating similar queries based on the document. This issue could be addressed if the authors provided examples and analysis.\\n\\n2. The inference process requires weighting strategies to combine the relevance between the query and different fields of the document. I have concerns about the scalability of this solution, as the presented dataset contains a relatively small number of documents (see Q1).\\n\\n3. Some presentation issues: (i) The inconsistent capitalization of \\\"Bi-encoder\\\" is concerning, with both \\\"Bi-encoder\\\" and \\\"bi-encoder\\\" used interchangeably throughout the paper, which appears unprofessional. Additionally, on line 19, to my knowledge, ColBERTv2 falls under the category of bi-encoders but employs a more complex relevance scoring process. 
(ii) Certain findings in sections 3.1.2, 3.1.3, and others could be more effectively presented in an analysis section following the experiments section, providing a clearer flow and focused discussion on results.\", \"questions\": \"1. I am curious whether this method could be effective on larger document benchmarks, such as MSMARCO and FEVER within the BEIR benchmark. Could the authors provide results on these datasets?\\n\\n2. Could the authors provide an analysis of the costs associated with using an LLM to enhance these tasks, such as the number of tokens or text checks required for generation?\\n\\n3. It would be interesting to see the performance when using the original query to search against the pseudo queries, to quantify any potential leakage from the LLM (i.e., the ability to recover the query based on the document).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for your follow-up questions.\\n\\n**[FQ1]**\\n> I still believe the cost here is relatively high. Existing retrievers typically train on tens of thousands of text pairs, while this method requires generating millions of text chunks and this is not even the full extent of the cost.\\n\\n**[FA1]** We acknowledge that the generation of synthetic queries introduces an additional cost to our approach. However, several factors justify that these costs are limited and worthwhile:\\n 1) The synthetic queries are generated only once and are pre-computed into the retriever index, which is stored offline. The size of the index remains consistent with other standard dense retrieval methods. Consequently, our approach does not increase latency during inference time.\\n 2) Furthermore, unlike other retrievers that require extensive training, our model does not require additional training. 
This enables existing retriever models, including smaller ones, to directly leverage the foundational knowledge of LLMs. As a result, training costs are reduced, and the adaptability for users is enhanced.\\n 3) As demonstrated in Table 5, our analysis indicates that the generation model can be substituted with Llama 7B or Llama 8B with negligible performance degradation. This substitution further reduces the cost of generation.\\n\\nSpecifically, smaller retriever models, such as RoBERTa, typically exhibit suboptimal performance when trained on only tens of thousands of text pairs. Most retrievers are trained on the MSMARCO dataset, which contains over 8 million passages. For instance, Contriever [1] is trained on Wikipedia and CCNet data, while DRAGON [2] undergoes iterative training on MSMARCO, followed by extensive augmentation for training and distillation. Our methodology can surpass these models without additional training by incorporating foundational knowledge from large language models (LLMs). For LLM-based architectures like RepLlama [3] and MistralE5 [4], although they possess foundational knowledge, the cost of subsequent fine-tuning is non-trivial due to their large parameter sizes. Therefore, we believe our proposed methodology strikes an optimal balance between performance and cost.\\n\\nIn addition, inspired by another reviewer, we summarize a table to estimate the cost of our approach (using Llama 7B for generation) compared to other LLM-retrievers like RepLlama [3] and MistralE5 [4]. 
Our cost is still less than that of the two LLM-retrievers, with faster inference time (the Inference FLOPS on User Query is much smaller).\\n\\n|Method|Model Size|Model Architecture|Requires Training|Training FLOPS on Generated Tokens|Synthetic Query Generation + Indexing Time FLOPS on Document Tokens|Inference FLOPS on User Query|\\n|-----------------|------------|-------|-------|-----------|---------|------|\\n|RobertaRetriever+LLMAugmentedRetrieval|125M|encoder-only|No|0|O(2*7B)x|(2*125M)x|\\n|RepLlama[3] |7B|decoder-only|Yes|(6*7B)x|(2*7B)x|(2*7B)x|\\n|MistralE5[4] |7B|decoder-only|Yes|(6*7B)x|(2*7B)x|(2*7B)x|\\n\\n*Training FLOPS, inference FLOPS, and indexing-time FLOPS in the table are estimated based on the OpenAI scaling-law paper [5].\\n\\n&emsp; &emsp;\\n\\n**[FQ2]**\\n> My concern remains. BEIR is a widely used benchmark in IR research, and I do not see a compelling reason to avoid performing tasks on other larger datasets within BEIR. In practice, million-scale corpora are quite common, and evaluating the method's performance in such scenarios would provide more valuable insights.\\n\\n**[FA2]** Sure, we are evaluating the performance on MSMARCO and FEVER and will attach the performance **here** later. The results will be included in the final paper.\\n\\n&emsp;\\n\\n[1] Izacard, Gautier, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, and Edouard Grave. \\\"Unsupervised dense information retrieval with contrastive learning.\\\" arXiv preprint arXiv:2112.09118 (2021).\\n\\n[2] Lin, Sheng-Chieh, Akari Asai, Minghan Li, Barlas Oguz, Jimmy Lin, Yashar Mehdad, Wen-tau Yih, and Xilun Chen. \\\"How to train your dragon: Diverse augmentation towards generalizable dense retrieval.\\\" arXiv preprint arXiv:2302.07452 (2023).\\n\\n[3] Xueguang Ma, Liang Wang, Nan Yang, Furu Wei, and Jimmy Lin. 2023. Fine-tuning llama for multi-stage text retrieval. 
\\n\\n[4] Liang Wang, Nan Yang, Xiaolong Huang, Linjun Yang, Rangan Majumder, and Furu Wei. 2023. Improving text embeddings with large language models. \\n\\n[5] Kaplan, Jared, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. \\\"Scaling laws for neural language models.\\\" arXiv preprint arXiv:2001.08361 (2020).\"}" ] }
7xf50qWFGP
Online Laplacian-Based Representation Learning in Reinforcement Learning
[ "Maheed H. Ahmed", "Jayanth Bhargav", "Mahsa Ghasemi" ]
Representation learning plays a crucial role in reinforcement learning, especially in complex environments with high-dimensional and unstructured states. Effective representations can enhance the efficiency of learning algorithms by improving sample efficiency and generalization across tasks. This paper considers the Laplacian-based framework for representation learning, where the eigenvectors of the Laplacian matrix of the underlying transition graph are leveraged to encode meaningful features from raw sensory observations of the states. Despite the promising algorithmic advances in this framework, it remains an open question whether the Laplacian-based representations can be learned online and with theoretical guarantees along with policy learning. To answer this question, we study online Laplacian-based representation learning, where the graph-based representation is updated simultaneously while the policy is updated by the reinforcement learning algorithm. We design an online optimization formulation by introducing the Asymmetric Graph Drawing Objective (AGDO) and provide a theoretical analysis of the convergence of running online projected gradient descent on AGDO under mild assumptions. Specifically, we show that if the policy learning algorithm induces a bounded drift on the policy, running online projected gradient descent on AGDO exhibits ergodic convergence. Our extensive simulation studies empirically validate the guarantees of convergence to the true Laplacian representation. Furthermore, we provide insights into the compatibility of different reinforcement learning algorithms with online representation learning.
[ "Reinforcement Learning", "Representation learning", "Online Learning", "Graph Laplacian" ]
Reject
https://openreview.net/pdf?id=7xf50qWFGP
https://openreview.net/forum?id=7xf50qWFGP
ICLR.cc/2025/Conference
2025
{ "note_id": [ "y3B5lDWQz5", "uh7InK6oNH", "uCb5MqJ68M", "rO0oyGh7cU", "lrQEu2pJTP", "lSYdDRidOT", "kt4JrDgEfx", "jejt4ZsmOG", "g6J2b7wghs", "eZORLEMIRt", "dy2FSEDukZ", "cqRGrMevux", "XbHHcoXAEy", "WlrmTfX4fQ", "Tz8Z7CFOzJ", "RxhHtDMtq5", "OijZPOK5X1", "NT2WRIDvQG", "8VFMEw0Hba", "1A6YP6zsBl" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "decision", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1732738583228, 1732415681875, 1729621339340, 1732223180601, 1732223079113, 1737524195370, 1734750976090, 1732294312870, 1732664446683, 1732223058595, 1732513722346, 1730694775186, 1730674218484, 1732223247675, 1732415721176, 1732223051847, 1732223075882, 1731100551072, 1732228284835, 1732223256042 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12496/Reviewer_BffQ" ], [ "ICLR.cc/2025/Conference/Submission12496/Authors" ], [ "ICLR.cc/2025/Conference/Submission12496/Reviewer_fhTs" ], [ "ICLR.cc/2025/Conference/Submission12496/Authors" ], [ "ICLR.cc/2025/Conference/Submission12496/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12496/Area_Chair_W6P9" ], [ "ICLR.cc/2025/Conference/Submission12496/Reviewer_BffQ" ], [ "ICLR.cc/2025/Conference/Submission12496/Reviewer_inVB" ], [ "ICLR.cc/2025/Conference/Submission12496/Authors" ], [ "ICLR.cc/2025/Conference/Submission12496/Reviewer_8Bos" ], [ "ICLR.cc/2025/Conference/Submission12496/Reviewer_inVB" ], [ "ICLR.cc/2025/Conference/Submission12496/Reviewer_BffQ" ], [ "ICLR.cc/2025/Conference/Submission12496/Authors" ], [ "ICLR.cc/2025/Conference/Submission12496/Authors" ], [ "ICLR.cc/2025/Conference/Submission12496/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission12496/Authors" ], [ "ICLR.cc/2025/Conference/Submission12496/Reviewer_8Bos" ], [ "ICLR.cc/2025/Conference/Submission12496/Reviewer_fhTs" ], [ "ICLR.cc/2025/Conference/Submission12496/Authors" ] ], "structured_content_str": [ "{\"comment\": \"While I thank the authors for their clarification, I maintain my original score as I remain concerned about the strength of this assumption.\"}", "{\"comment\": \"We appreciate the reviewer's further clarifications and quick response.\\n\\n1. We agree that (b) is more restrictive. However, note that we require $\\\\rho(s)$ to be greater than some $\\\\delta$ and not $\\\\pi(a|s)$. This can be achieved even if $\\\\pi(a|s) = 0$ for some $a$ and $s$, given some restrictions on the underlying MDP. For example, if there is some action failure probability $\\\\epsilon$ for which the intended action fails and a random behavior is sampled, a corresponding $\\\\delta$ can be computed.\\n\\nWe have also updated our discussion on the relaxation of the assumption to only require that if $\\\\rho^{(t)}(s) = 0$ then $\\\\rho^{(t+1)}(s) = 0$, but not necessarily the opposite. For this scenario, our analysis would apply to the set $S' = \\\\{s \\\\in S : \\\\rho^{(t+1)}(s) \\\\neq 0\\\\}$.\\n\\n2. The tricks discussed, such as entropy regularization and $\\\\epsilon$-exploration, are known to have some tradeoff on downstream tasks, yet they are common techniques for exploration and for achieving faster convergence to a possibly sub-optimal policy. In this study, the noise introduced by such methods should not affect the representation learned, as we only require the algorithm to converge to some policy.\\n\\n3. We agree with the reviewer that the total drift is dependent on T. We have updated both Assumption 2 and Theorem 2. The updated assumption requires the policy learning algorithm to converge to some policy while generating a total drift that is sublinear in T. 
We also updated the bound in Theorem 2 to include that dependence.\"}", "{\"summary\": \"This paper reveals an important observation from Klissarov & Machado (2023): Learning the representation in an online manner provides better exploration and total rewards. Motivated by this, the author proposes the online optimization formulation of GDO and gives some theoretical analysis.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"This paper extends the existing setting of Laplacian representation to the online optimization framework and provides a convergence analysis.\", \"weaknesses\": \"The theoretical results are not satisfying and are over-simplified.\\n1. Assumption 1 is not realistic for the reinforcement learning scenario. It is known that the optimal policy could be a deterministic policy, which means that $\\rho_{\\min}$ could be zero in many deterministic environments. The relaxation discussed in the following paragraph doesn't ensure that $\\rho_{\\min}$ is non-zero. A possible solution is to add a small uniform perturbation to all policies.\\n2. Proposition 1 is a relatively loose upper bound, which nearly removes all dependence on the time $t$. The main motivation of this paper discusses the benefits of the online optimization framework; however, this motivation is not presented in the final convergence analysis, because the only connection to the time $t$ is simply omitted in this step.\\n3. The theoretical analysis doesn't bring anything new to this field. It simply uses the smoothness of the objective function $\\mathcal{L}$ and follows the same steps as standard optimization papers.\\n4. Moreover, Eq. (44) doesn't imply that the upper bound is $O(\\frac{1}{T})$. It contains the term $\\sum_{t} \\delta^{(t)}_{\\mathcal{L}}$. The author simply assumes it is finite in Assumption 2. But this assumption has never been made in the TRPO and PPO papers. 
Making such an assumption requires the learning rate of the policy-based algorithm to be appropriately chosen. The author didn't reflect the impact of this assumption in the final bound, yet presents an $O(1/T)$ convergence rate.\", \"questions\": \"I hope the author addresses my concerns in the weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal References\", \"comment\": [\"### References\", \"Ortner, Ronald. \\\"Regret bounds for reinforcement learning via Markov chain concentration.\\\" Journal of Artificial Intelligence Research 67 (2020): 115-128.\", \"Metelli, Alberto Maria, Mirco Mutti, and Marcello Restelli. \\\"A tale of sampling and estimation in discounted reinforcement learning.\\\" In International Conference on Artificial Intelligence and Statistics, pp. 4575-4601. PMLR, 2023.\", \"Azar, Mohammad Gheshlaghi, Ian Osband, and R\\u00e9mi Munos. \\\"Minimax regret bounds for reinforcement learning.\\\" International Conference on Machine Learning. PMLR, 2017.\", \"Jin, Chi, et al. \\\"Provably efficient reinforcement learning with linear function approximation.\\\" Conference on Learning Theory. PMLR, 2020.\", \"Puterman, Martin L. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley \\\\& Sons, 2014 (Section 8.3).\", \"Bertsekas, D. P. \\\"Neuro-dynamic programming.\\\" Athena Scientific (1996) (Section 6.7.6).\", \"Melo, Francisco S., Sean P. Meyn, and M. Isabel Ribeiro. \\\"An analysis of reinforcement learning with function approximation.\\\" Proceedings of the 25th International Conference on Machine Learning. 2008.\", \"Even-Dar, Eyal, Sham M. Kakade, and Yishay Mansour. \\\"Online Markov decision processes.\\\" Mathematics of Operations Research 34.3 (2009): 726-736.\", \"Sidford, Aaron, et al. 
\\\"Near-optimal time and sample complexities for solving Markov decision processes with a generative model.\\\" Advances in Neural Information Processing Systems 31 (2018).\", \"Schulman, John, et al. \\\"Proximal policy optimization algorithms.\\\" arXiv preprint arXiv:1707.06347 (2017).\", \"Strehl, Alexander L. Probably approximately correct (PAC) exploration in reinforcement learning. Diss. Rutgers University-Graduate School-New Brunswick, 2007.\", \"Martin Klissarov and Marlos C. Machado. Deep Laplacian-based options for temporally-extended exploration. In Proceedings of the 40th International Conference on Machine Learning, pp.17198\\u201317217. PMLR, July 2023. URL https://proceedings.mlr.press/v202/klissarov23a.html.\", \"Jiayu Chen, Vaneet Aggarwal, and Tian Lan. A unified algorithm framework for unsupervised discovery of skills based on determinantal point process. Advances in Neural Information Processing Systems, 36, 2024.\", \"Yuu Jinnai, Jee Won Park, Marlos C Machado, and George Konidaris. Exploration in reinforcement learning with deep covering options. In International Conference on Learning Representations, 2019.\", \"Marlos C Machado, Marc G Bellemare, and Michael Bowling. A Laplacian framework for option discovery in reinforcement learning. In International Conference on Machine Learning, pp.2295\\u20132304. PMLR, 2017a.\", \"Yifan Wu, George Tucker, and Ofir Nachum. The Laplacian in RL: Learning representations with efficient approximations. arXiv preprint arXiv:1810.04586, 2018.\"]}", "{\"title\": \"Rebuttal\", \"comment\": \"We thank the reviewer for their detailed feedback and thoughtful suggestions. Below we address their concerns and questions.\\n\\n## Assumption 1\\n\\nThe first part of Assumption 1 is a standard assumption in RL theory, particularly in works analyzing policy evaluation and optimization in Markov Decision Processes (MDPs) (e.g., (Puterman, 2014), (Bertsekas and Tsitsiklis, 1996)). 
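To make this concrete, here is a purely illustrative numpy sketch (not part of the paper's implementation): a lazy random walk on a small cycle is ergodic, so its stationary distribution assigns positive mass to every state, which is exactly the kind of $\rho_{\min} > 0$ condition Assumption 1 requires.

```python
import numpy as np

# Illustrative sketch of Assumption 1 (not the submission's code): a lazy
# random walk on a 5-state cycle is irreducible and aperiodic, so its
# stationary distribution rho puts positive mass on every state.
n = 5
P = np.zeros((n, n))
for s in range(n):
    P[s, s] = 0.5                  # laziness guarantees aperiodicity
    P[s, (s + 1) % n] = 0.25
    P[s, (s - 1) % n] = 0.25

# stationary distribution: left eigenvector of P for eigenvalue 1
evals, evecs = np.linalg.eig(P.T)
rho = np.real(evecs[:, np.argmax(np.real(evals))])
rho /= rho.sum()

rho_min = rho.min()
print(rho_min)  # 0.2: the symmetric walk has the uniform stationary distribution
```

For this toy chain a valid $\delta$ in the relaxed assumption is any value below $1/n$.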
Moreover, ergodicity (irreducibility with positive recurrence) is often assumed to ensure that the chain does not get stuck in transient or absorbing states under the learned policies (e.g., (Melo et al., 2008), (Even-Dar et al., 2009)). While these assumptions may not hold for arbitrary policies, they are reasonable when considering the restricted class of policies produced by the RL algorithm, as explicitly stated in our paper.\\n \\nAdditionally, we acknowledge the reviewer\\u2019s observation that the assumption may not hold for some learning algorithms. However, many RL algorithms are explicitly designed to promote diverse exploration, making this assumption plausible. For instance, policy-gradient-based methods with entropy regularization (e.g., (Schulman et al., 2017)), value-based methods with optimism in the face of uncertainty (e.g., (Strehl, 2007), (Azar et al., 2017)), or, as the reviewer noted, methods that randomly sample actions with a small probability all inherently drive the policies towards sufficient state coverage. These mechanisms align with our assumption that the policies produced by the RL algorithm induce a stationary distribution with non-zero entries for all states.\\n\\nThis work serves as an initial exploration into the online learning of Laplacian representations, which is currently an open question, where we adopt certain (possibly restrictive) assumptions to establish a solid foundation for theoretical analysis and results. We acknowledge the limitations imposed by these assumptions and plan to explore their relaxation in future work to enhance the practicality and broader applicability of our framework.\\n\\n## Proposition 1\\n\\nIn Proposition 1, we prove the Lipschitz continuity of AGDO for a fixed policy, meaning AGDO at a specific time step. This property is established for any generic policy at any *fixed* time step, making it independent of $t$. 
The dependence on $t$ comes from the drift bound and appears in both Lemma 2 and Theorem 2.\\n\\n## Theoretical Analysis\\n\\nWhile the proof of ergodic convergence follows similar techniques from the stochastic optimization literature, we provide assumptions and proofs for necessary conditions that make these techniques possible for this particular problem. For instance, we start by showing that the $d$-smallest eigenvectors are the unique stable equilibrium of AGDO. Additionally:\\n\\n- We define the assumptions necessary for the convergence of online AGDO.\\n- We rigorously analyze the properties of the online objective under these assumptions, including characterizing the drift in the objective and establishing its Lipschitz continuity.\\n- Leveraging these properties, we prove that online AGDO achieves ergodic convergence.\\n\\nOur empirical and theoretical analyses also support that these assumptions and properties are necessary. For example, we demonstrate that algorithms such as Q-learning-based approaches\\u2014which often cause significant shifts in the action distribution and are commonly used in option discovery\\u2014may not be well-suited for learning Laplacian representations in an online setting.\\n\\n## Theorem 2 and Assumption 2\\n\\nWe agree with the reviewer that Assumption 2 is not universally satisfied by all reinforcement learning algorithms. However, as the reviewer noted, it can be achieved in certain algorithms with appropriate hyperparameter selection.\\n\\nThe first part of the assumption, which requires the policy not to deviate significantly after each update, is satisfied by algorithms like TRPO and PPO. Additionally, the condition that the drift summation is bounded by a finite constant is met whenever the learning algorithm converges to a stable policy, which can be ensured through proper hyperparameter tuning. 
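For intuition on this bounded drift-summation condition, here is a small illustrative numpy sketch (made-up two-state chains, not the paper's code): policy updates with decaying step sizes keep the cumulative stationary-distribution drift bounded no matter how large $T$ grows.

```python
import numpy as np

# Illustrative sketch of the bounded drift-summation condition (made-up
# two-state chains, not the submission's code). Updates with decaying step
# sizes accumulate a total stationary-distribution drift that is bounded in T.

def stationary(P):
    evals, evecs = np.linalg.eig(P.T)
    rho = np.real(evecs[:, np.argmax(np.real(evals))])
    return rho / rho.sum()

P = np.array([[0.9, 0.1], [0.2, 0.8]])        # induced chain at t = 0
P_target = np.array([[0.5, 0.5], [0.5, 0.5]])  # chain the updates move toward

T = 1000
total_drift = 0.0
rho_prev = stationary(P)
for t in range(1, T + 1):
    step = 1.0 / (t + 1)                       # decaying update size
    P = (1 - step) * P + step * P_target
    rho_cur = stationary(P)
    total_drift += 0.5 * np.abs(rho_cur - rho_prev).sum()  # TV distance
    rho_prev = rho_cur

print(total_drift)  # stays near 1/6 however large T gets: sublinear in T
```

With a constant step size instead, the per-step drift would not shrink and the summation could grow linearly in $T$, violating the assumption.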
For example, TRPO with entropy regularization has been shown to have convergence guarantees under mild assumptions (Shani et al., 2020).\\n\\nAlthough the drift summation term can be hidden within the big-O notation since it is constant, we agree with the reviewer that explicitly including the full term in Theorem 2 enhances clarity and conveys the complete message. We have updated the manuscript to reflect this improvement.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"metareview\": \"This paper proposes a new algorithm for efficient online learning of Laplacian representations in reinforcement learning (RL). Under benign assumptions, the authors prove convergence to Laplacian eigenvectors. The reviewers identified several critical shortcomings in the paper, citing a lack of meaningful contributions in both algorithm design and theoretical analysis. Specifically, the algorithmic framework is largely adapted from existing works, offering limited novelty. The theoretical analysis is similar to that in Gomez et al. (2023), with minimal differentiation. Comprehensive experiments demonstrating the algorithm\\u2019s efficacy are missing. Moreover, certain assumptions made in the paper appear unrealistic or overly strong in practical scenarios. For example, Assumption 1, which lacks specific structural assumptions on the problem, could introduce approximation errors in the analysis, and Assumption 2 is not valid for many existing algorithms and may lead to a different convergence rate in Theorem 2. The authors\\u2019 response did not adequately address these issues and suggested that significant modifications to the current results would be required. Given these concerns, I recommend rejecting this paper in its current form.\", \"additional_comments_on_reviewer_discussion\": \"The discussion failed to establish the paper's technical and algorithmic contributions. 
Additionally, scrutiny of the paper's underlying assumptions during the discussion suggests that the theoretical results may not hold.\"}", "{\"comment\": \"Thank you for the response. Can you point out where in Metelli et al the authors assume a uniform lower bound on the mass at any state of any stationary distribution? From a cursory reading, it appears that their bounds depend on spectral gaps, which bound mixing times but do not imply such a lower bound. I agree that Ortner makes use of this assumption, but I maintain that this is extremely strong. Modern RL theory involving episodic MDPs does make reluctant use of reachability assumptions (e.g. *Kinematic State Abstraction and Provably Efficient Rich-Observation Reinforcement Learning* by Misra et al 2019 and *Model-free representation learning and exploration in low-rank mdps* by Mode et al 2021 among many, many others) but again these assumptions allow the reaching policy to depend on the state, which is substantially weaker. Moreover, you say:\\n\\n> While controlling the policies an RL algorithm visits during training is challenging, many RL algorithms are explicitly designed to promote diverse exploration, making this assumption plausible. For instance, policy-gradient-based methods with entropy regularization (e.g., (Schulman et al., 2017)) or value-based methods with optimism in face of uncertainty (e.g., (Strehl, 2007), (Azar et al., 2017)) inherently drive the policies towards sufficient state coverage. \\n\\nI am not sure I agree that this assumption is plausible, especially near the end of training; indeed, in MDPs where there are `bad' states that always give poor reward (e.g. a walker falling over), the trained policy will explicitly avoid those states, and this assumption will fail to be satisfied, even by those algorithms that initially encourage sufficiently diverse exploration.\"}", "{\"comment\": \"Thank you for your response. 
After reading the work again, I have decided to keep my score.\"}", "{\"title\": \"Rebuttal\", \"comment\": \"We thank the reviewer for their feedback and for pointing out areas that require clarification. We would also appreciate further feedback on what can be improved to enhance soundness and presentation. Below, we address the reviewer's concerns.\\n\\n## Laplacian Representation Importance\\n\\nLearning the Laplacian representation has been a widely studied topic in the reinforcement learning literature, such as in the work by Mahadevan and Maggioni (2007) and, more recently, the works of Wu et al. (2018), Wang et al. (2021), and Gomez et al. (2023). These works study how to accurately learn the Laplacian representation corresponding to some fixed policy. In addition, as we highlight in the introduction section, the Laplacian representation has found success in reward shaping (Wu et al., 2018) and options learning (Machado et al., 2017a; Jinnai et al., 2019; Chen et al., 2024). We believe this topic is particularly interesting for the readership of ICLR.\\n\\n## Usefulness of Online Representation Learning\\n\\nHere we discuss the importance of learning the representation online compared to learning the representation for a static policy. The work by Klissarov and Machado (2023) provides valuable insights into the benefits of using online representations for learning macro-extended actions (options) compared to static representations learned from a uniform policy. Their empirical results demonstrate the effectiveness of this approach across a variety of continuous tasks. However, the theoretical guarantees and the accuracy of the learned representations remain unexplored. \\nIn this work, we address this gap by establishing a theoretical foundation for the online learning of Laplacian representations, a method already shown to be effective for downstream tasks in prior empirical studies. 
Our theoretical and empirical results also emphasize the assumptions necessary for accurate learning. For instance, we demonstrate that algorithms such as Q-learning-based approaches\\u2014which often cause significant shifts in the action distribution and are commonly used in option discovery\\u2014may not be well-suited for learning Laplacian representations in an online setting.\\n\\nAs noted in the conclusion, our results lay the groundwork for broader applications. These include tasks like option discovery, reward shaping, and value function approximation, all supported by the theoretical guarantees provided by our analysis. This work opens up new directions for leveraging online representations in reinforcement learning. We agree with the reviewer that studying the effect of this algorithm on downstream tasks is important and can be the direction of future work.\\n\\n### References\\n\\n- Gomez, Diego, Michael Bowling, and Marlos C. Machado. \\\"Proper Laplacian Representation Learning.\\\" The Twelfth International Conference on Learning Representations.\\n\\n- Martin Klissarov and Marlos C. Machado. Deep Laplacian-based options for temporally-extended exploration. In Proceedings of the 40th International Conference on Machine Learning, pp.17198\\u201317217. PMLR, July 2023. URL https://proceedings.mlr.press/v202/klissarov23a.html.\\n\\n- Jiayu Chen, Vaneet Aggarwal, and Tian Lan. A unified algorithm framework for unsupervised discovery of skills based on determinantal point process. Advances in Neural Information Processing Systems, 36, 2024.\\n\\n- Yuu Jinnai, Jee Won Park, Marlos C Machado, and George Konidaris. Exploration in reinforcement learning with deep covering options. In International Conference on Learning Representations, 2019.\\n\\n- Marlos C Machado, Marc G Bellemare, and Michael Bowling. A Laplacian framework for option discovery in reinforcement learning. In International Conference on Machine Learning, pp.\\n2295\\u20132304. 
PMLR, 2017a.\\n\\n- Sridhar Mahadevan and Mauro Maggioni. Proto-value functions: A Laplacian framework for learning representation and control in Markov decision processes. Journal of Machine Learning Research, 8(10), 2007.\\n\\n- Kaixin Wang, Kuangqi Zhou, Qixin Zhang, Jie Shao, Bryan Hooi, and Jiashi Feng. Towards better Laplacian representation in reinforcement learning with generalized graph drawing. In Proceedings of the 38th International Conference on Machine Learning, pp. 11003\\u201311012. PMLR, July 2021. URL https://proceedings.mlr.press/v139/wang21ae.html.\\n\\n- Yifan Wu, George Tucker, and Ofir Nachum. The Laplacian in RL: Learning representations with efficient approximations. arXiv preprint arXiv:1810.04586, 2018.\"}", "{\"comment\": \"Thank you for the detailed response, and I appreciate the work to correct Assumption 2/Theorem 2 as was pointed out by reviewer fhTs.\\n\\nWhile I acknowledge that related work demonstrates the empirical value of online representations, I stand by my criticism that there are no experiments in the paper truly showing this value. I am still unconvinced that the theoretical contribution is significant enough to warrant acceptance. I retain my score.\"}", "{\"summary\": \"This work studies learning the policy and the Laplacian representation of the transition dynamics simultaneously in online RL. The authors propose an asymmetric graph drawing objective and show that online projected gradient descent on this objective achieves ergodic convergence.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The proposed method enables updating the Laplacian representation simultaneously with the policy. 
The online PGD algorithm enjoys theoretical convergence guarantees.\", \"weaknesses\": \"The problem of learning Laplacian representation in RL doesn't seem to be a super important question itself, because it doesn't really scale up to real-world, challenging settings where the state space is large. The experiments focus on a slightly toy environment and only evaluate the accuracy of the Laplacian learning, without demonstrating how online Laplacian representation learning can benefit RL (more than previous methods).\", \"questions\": \"How does online Laplacian representation learning benefit RL (more than previous methods) in practical RL applications?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work considers the problem of online representation learning in MDPs. Building on past work that uses the standard Laplacian graph embedding on the weighted graph induced by the Markov chain associated to a given policy as a representation of the MDP, this paper introduces a new objective that allows for learning when the policy is being updated, as occurs during training of an RL agent. More precisely, the authors introduce the Asymmetric Graph Drawing Objective, which seeks to produce a graph embedding from the Laplacian of the graph induced at step $t$ of some fixed RL algorithm. Under assumptions on reachability and bounded drift, as well as technical assumptions on the graph structure, the authors then prove that the desired Laplacian imbedding is the unique stable equilibrium under gradient descent dynamics for a fixed time $t$. The authors then demonstrate convergence to an approximate first-order stationary point under the bounded drift assumption on the RL algorithm. 
The authors conclude with an empirical analysis of their algorithm on Gridworld, where they compare their proposal with the earlier Augmented Lagrangian Laplacian Objective as well as conducting an ablation study to empirically examine the necessity of their assumptions.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"This paper introduces a novel variant of a previous representation learning objective that is simple and intuitive. They demonstrate that running gradient descent on their objective has nice properties under several technical assumptions and demonstrate empirically the efficacy of their method.\", \"weaknesses\": \"I think that there are three main weaknesses:\\n\\n1. I think that Assumption 1 is extremely strong. The irreducibility of the Markov chain induced by a policy being ergodic is already a strong assumption, and lower bound on the stationary distribution's mass at any given state is even more so, but the part I think is strongest is that this is essentially an *all policy* requirement. While reachability assumptions are often invoked in RL theory, they are usually state-dependent in the sense that for every state, there is some policy that does a good job of reaching that state; the current assumption is that every state is visited sufficiently frequently by every policy, which seems very difficult to guarantee. Note that the authors do not literally assume this, as their Assumption 1 is restricted to policies produced by the RL algorithm, but it is very difficult to control what policies an arbitrary RL algorithm will visit. 
The paragraph below this assumption allows for some slight weakening, where Assumption 1 only has to hold on the support of the initial stationary distribution, but this still seems to impose an extremely strong restriction on the ability of the RL algorithm to explore, especially as I expect that $\\\\rho_{\\\\min}$ would figure polynomially into any quantitative bound (as it indeed does in Lemma 2(d)).\\n\\n2. While it is nice that the authors demonstrate convergence to the Laplacian embedding, and I understand that this embedding is frequently used in visualizing graphs, it would be nice if the authors provided more motivation for why this is a reasonable objective for which to strive.\\n\\n3. The experiments seem to suggest that AGDO and ALLO behave extremely similarly; for example, the GridRoom-4 line in Figure 3 looks like the same curve in both (a) and (b). If this is the case, I am confused as to what the empirical takeaway is here and what the purpose of using AGDO over ALLO is.\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal References\", \"comment\": [\"Puterman, Martin L. Markov decision processes: discrete stochastic dynamic programming. John Wiley \\\\& Sons, 2014 (Section 8.3).\", \"Bertsekas, D. P. \\\"Neuro-dynamic programming.\\\" Athena Scientific (1996) (Section 6.7.6).\", \"Melo, Francisco S., Sean P. Meyn, and M. Isabel Ribeiro. \\\"An analysis of reinforcement learning with function approximation.\\\" Proceedings of the 25th international conference on Machine learning. 2008.\", \"Even-Dar, Eyal, Sham M. Kakade, and Yishay Mansour. \\\"Online Markov decision processes.\\\" Mathematics of Operations Research 34.3 (2009): 726-736.\", \"Strehl, Alexander L. Probably approximately correct (PAC) exploration in reinforcement learning. Diss. 
Rutgers University-Graduate School-New Brunswick, 2007.\", \"Azar, Mohammad Gheshlaghi, Ian Osband, and R\\u00e9mi Munos. \\\"Minimax regret bounds for reinforcement learning.\\\" International Conference on Machine Learning. PMLR, 2017.\", \"Schulman, John, et al. \\\"Proximal policy optimization algorithms.\\\" arXiv preprint arXiv:1707.06347 (2017).\", \"Shani, Lior, Yonathan Efroni, and Shie Mannor. \\\"Adaptive trust region policy optimization: Global convergence and faster rates for regularized MDPs.\\\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 34. No. 04. 2020.\"]}", "{\"comment\": \"We thank the reviewer for their quick response and further discussion.\\n\\n1. We mentioned Metelli et al. as an example of theoretical bounds that depend on the properties of the induced Markov chain of the policy. As the reviewer stated, their work depends on the spectral properties of the induced Markov chain and mixing time. \\n\\n2. We agree that it is possible for potentially dangerous states to be completely avoided by a good policy. We have updated our discussion on the relaxation of Assumption 1. It now states that if $\\\\rho^{(t)}(s) = 0$ then $\\\\rho^{(t+1)}(s) = 0$, but not necessarily the opposite. This allows the policy learning to eventually have $\\\\rho^{(t)}(s) = 0$, and our analysis would apply to the set $S' = \\\\{ s \\\\in S : \\\\rho^{(t+1)}(s) \\\\neq 0 \\\\}$. This relaxed assumption only requires that if a state is disconnected under a policy, it remains disconnected under future updates.\"}", "{\"title\": \"Rebuttal\", \"comment\": \"We thank the reviewer for their thorough response and insightful feedback. We have updated the manuscript to fix the typos highlighted, and we address the other concerns and questions below.\\n\\n## Theoretical Contribution\\n\\nOur work builds on the foundation established by Gomez et al. (2023), specifically leveraging a similar objective as ALLO. 
We use AGDO, which is equivalent to ALLO with $\\\\beta = 0$. As noted by Gomez et al., this simplification is appropriate when eigenvalues are not required. However, prior work primarily focused on AGDO (or ALLO with $\\\\beta=0$) for learning representations tied to a static policy, with convergence validated only through empirical analysis. In contrast, we make several key contributions:\\n \\n- **Theoretical Analysis of AGDO Stability**: We provide the first theoretical proof that AGDO has a unique stable equilibrium corresponding to the $d$-smallest eigenvectors. While our proof builds on techniques similar to those used for ALLO, it introduces important changes specific to AGDO. For example, AGDO has more equilibrium points in the form of $u_i = 0$ for some vectors which we also show are not stable. This result is crucial because we use AGDO as the core objective in our online framework. Unlike ALLO, AGDO enables a more practical analysis of the online drift without relying on the eigenvalue drift.\\n- **Convergence Analysis of Online AGDO**: Our primary contribution is a novel theoretical analysis of the online version of AGDO, which significantly diverges from prior work due to the challenge of dealing with drifting objectives. Specifically:\\n - We define the assumptions necessary for the convergence of online AGDO.\\n - We rigorously analyze the properties of the online objective under these assumptions, including characterizing the drift in the objective and establishing its Lipschitz continuity.\\n - Leveraging these properties, we prove that online AGDO achieves ergodic convergence.\\n\\n## Usefulness of Online Representation Learning and Usage of Cosine Similarity\\n\\nThe work by Klissarov and Machado (2023) provides valuable insights into the benefits of using online representations for learning macro-extended actions (options) compared to static representations learned from a uniform policy. 
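The equilibrium claim above can also be checked numerically on a toy graph. The sketch below is purely illustrative: it runs gradient descent on a plain symmetric graph-drawing penalty (not the asymmetric AGDO), so it only recovers the span of the $d$ smallest Laplacian eigenvectors rather than the individual eigenvectors.

```python
import numpy as np

# Illustrative sketch (plain symmetric graph-drawing penalty, not AGDO):
# gradient descent on  tr(U^T L U) + b * ||U^T U - I||_F^2  over a toy path
# graph converges to the span of the d smallest Laplacian eigenvectors.
rng = np.random.default_rng(0)

n, d = 6, 2
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0        # path graph on 6 nodes
L = np.diag(A.sum(axis=1)) - A             # graph Laplacian

U = 0.1 * rng.normal(size=(n, d))
b, lr = 5.0, 0.01
for _ in range(20000):
    grad = 2 * L @ U + 4 * b * U @ (U.T @ U - np.eye(d))
    U -= lr * grad

# cosines of the principal angles between span(U) and the true eigenspace
w, V = np.linalg.eigh(L)
Q, _ = np.linalg.qr(U)
cosines = np.linalg.svd(Q.T @ V[:, :d], compute_uv=False)
print(cosines)  # both close to 1: the learned span matches the eigenspace
```

The asymmetric stop-gradient terms in AGDO are what additionally pin down the individual eigenvectors; this sketch only verifies the subspace.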
Their empirical results demonstrate the effectiveness of this approach across a variety of continuous tasks. However, the theoretical guarantees and the accuracy of the learned representations remain unexplored. \\n\\nIn this work, we address this gap by establishing a theoretical foundation for the online learning of Laplacian representations, a method already shown to be effective for downstream tasks in prior empirical studies. Our theoretical and empirical results also emphasize the assumptions necessary for accurate learning. For instance, we demonstrate that algorithms such as Q-learning-based approaches\\u2014which often cause significant shifts in the action distribution and are commonly used in option discovery\\u2014may not be well-suited for learning Laplacian representations in an online setting. \\n \\nAs noted in the conclusion, our results lay the groundwork for broader applications. These include tasks like option discovery, reward shaping, and value function approximation, all supported by the theoretical guarantees provided by our analysis. This work opens up new directions for leveraging online representations in reinforcement learning. We agree with the reviewer that studying the effect of this algorithm on downstream tasks is important and can be the direction of future work.\\n\\nFinally, since our main goal is assessing the accuracy of the learned representation, we opted to use cosine similarity as the metric for evaluating the accuracy of the learned representation.\\n\\n### References\\n\\n- Gomez, Diego, Michael Bowling, and Marlos C. Machado. \\\"Proper Laplacian Representation Learning.\\\" The Twelfth International Conference on Learning Representations.\\n\\n- Martin Klissarov and Marlos C. Machado. Deep Laplacian-based options for temporally-extended\\nexploration. In Proceedings of the 40th International Conference on Machine Learning, pp.\\n17198\\u201317217. PMLR, July 2023. 
URL https://proceedings.mlr.press/v202/klissarov23a.html.\"}", "{\"title\": \"Rebuttal\", \"comment\": \"We appreciate the reviewer's thoughtful comments and critiques of our manuscript. Below, we address the reviewer's questions and concerns.\\n\\n## Assumption 1\\n\\n- **Ergodicity**: The first part of Assumption 1 is a standard assumption in RL theory, particularly in works analyzing policy evaluation and optimization in MDPs (e.g., Puterman, 2014, Bertsekas and Tsitsiklis, 1996). Moreover, ergodicity is often assumed to ensure that the chain does not get stuck in transient or absorbing states under the learned policies (e.g., Melo et al., 2008, Even-Dar et al., 2009). While these assumptions may not hold for arbitrary policies, they are reasonable when considering the restricted class of policies produced by the RL algorithm, as stated in our paper.\\n- **Lower Bound of the Stationary Distribution**: We acknowledge the reviewer\\u2019s observation that the lower bound on the stationary distribution mass in Assumption 1 plays a role in the quantitative bounds, particularly in Lemma 2(d). These bounds depend on the stationary distribution and its minimal mass to ensure finite sample guarantees. Similar dependencies appear in prior works that derive finite-time performance bounds for RL algorithms (e.g., (Ortner et al., 2020), (Metelli et al., 2023)), where the mass of the stationary distribution or a related measure (e.g., mixing time of the policy-induced Markov chain) appears as a scaling factor. Such dependencies are a common trade-off for achieving rigorous theoretical guarantees in RL settings. This work serves as an initial exploration into the online learning of Laplacian representations which is currently an open question, where we adopt certain (possibly restrictive) assumptions to establish a solid foundation for theoretical analysis and results. 
We acknowledge the limitations imposed by these assumptions and plan to explore their relaxation in future work to enhance the practicality and broader applicability of our framework.\\n- **Example Settings**: The reviewer rightly notes that our assumption applies only to policies produced by the RL algorithm, not all possible policies. While controlling the policies an RL algorithm visits during training is challenging, many RL algorithms are explicitly designed to promote diverse exploration, making this assumption plausible. For instance, policy-gradient-based methods with entropy regularization (e.g., (Schulman et al., 2017)) or value-based methods with optimism in face of uncertainty (e.g., (Strehl, 2007), (Azar et al., 2017)) inherently drive the policies towards sufficient state coverage. These mechanisms align with our assumption that the policies produced by the RL algorithm induce a stationary distribution with non-zero entries for all states in the support of the initial distribution.\\n\\n## Importance of the Online Learning of the Laplacian Representation\\n\\nThe Laplacian representation given a fixed policy has found success in reward-shaping (Wu et al., 2018), and options learning (Jinnai et al., 2019; Chen et al., 2024). Additionally, in the case of online learning of the representation, the work by Klissarov and Machado (2023) provides valuable insights into the benefits of using online representations for learning options compared to static representations. Their empirical results demonstrate the effectiveness of this approach across a variety of continuous tasks. However, the theoretical guarantees and the accuracy of the learned representations remain unexplored. 
In this work, we address this gap by establishing a theoretical foundation for the online learning of Laplacian representations, a method already shown to be effective for downstream tasks in prior empirical studies, providing further insights into the necessary conditions for theoretical guarantees of accurate learning.\n\nAs noted in the conclusion, our results lay the groundwork for broader applications. These include tasks like option discovery, reward shaping, and value function approximation, all supported by the theoretical guarantees provided by our analysis. We agree with the reviewer that studying the effect of this algorithm on downstream tasks is important and can be a direction for future work.\n\n## Similarity to ALLO\n\nThe result is intended to show that AGDO, a simpler objective, can achieve similar results to ALLO. AGDO is equivalent to ALLO with $\\beta = 0$. As noted by Gomez et al., this simplification is suitable when learning eigenvalues is unnecessary. However, their analysis primarily focused on AGDO (or ALLO with $\\beta=0$) for learning representations associated with a static policy, with convergence assessed solely through empirical validation. In this work, we adopt AGDO for analyzing convergence in the online learning context. We begin by showing, empirically and theoretically, that the $d$-smallest eigenvectors are the unique stable equilibrium of AGDO. AGDO enables a more practical analysis of the online drift without relying on the eigenvalue drift.\"}", "{\"summary\": \"The paper proposes a new algorithm for efficient online learning of Laplacian representations for RL. Under benign assumptions, the authors prove convergence to the Laplacian eigenvectors. Finally, the authors provide some simple gridworld experiments showing that the algorithm performs as intended, in accordance with the assumptions (e.g. 
it works worse when the policy update is less stable).\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper is well written, with a clear survey of the ideas that lead to the solution.\\n\\nThe authors consider an interesting problem of efficiently learning representations that dynamically adjust as policy learning occurs. We could use existing algorithms for this task by performing an expensive representation learning step every time a policy update occurs, but this is impractical. \\n\\nThe algorithm seems like a practical extension of existing work.\\n\\nAs far as I can tell (I did not read the proofs in the appendix) the paper is theoretically grounded.\\n\\nThe ablations evaluating how the assumptions translate into actual performance are appreciated.\", \"weaknesses\": \"It is not entirely clear how much the theoretical contribution of this work relies on the existing work of Gomez et al. (2023). Given that this contribution is primarily theoretical, and the algorithmic contribution of converting the existing offline approaches to an online algorithm seems reasonable but mostly direct, the value of the contribution rests primarily on the theory. To me, it appears that there is enough of a contribution here.\\n\\nThe experiments seem more like debug/sanity-check experiments. They are useful for confirming that the algorithm works beyond the asymptotics of the theory but don't provide a compelling case that the method is actually useful. Without a stronger theoretical contribution, a more comprehensive experimental section (more challenging environments and a setting where the learned embeddings are used) would be helpful in showing the value of this method.\", \"questions\": \"Why is cosine similarity the only thing observed in the experiments? It seems like the core point of learning\\nrepresentations online is to be able to use them. 
Unless I am misunderstanding, the policy here can be anything,\\nand does not use the representations. I think the paper would be strengthened with an example showing \\nhow the ability to adjust the representations online improves the actual task, whether that means that the policy\\nacts on the representations or it is part of exploration (e.g. only used in the exploration part of epsilon-greedy).\", \"typos\": \"\", \"214\": \"These should be eigenfunctions/eigenvectors, not eigenvalues.\", \"341\": \"non -> none\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I appreciate the author's response. Here I hope to present my concerns more clearly:\\n\\n1. I agree that the first part of Assumption 1 is standard. However, I would like to further highlight two situations: (a) $\\\pi_t(a|s)>0$ for all $t$; and (b) there exists a constant $\\\delta>0$ such that $\\\pi_t(a|s)>\\\delta$ for all $t$. \\n\\n The second condition is strictly stronger. The reference (melo2008) says that for every policy $\\\pi$ with $\\\pi(a|s)>0$, ergodicity holds, which is case (a), not (b). Consider the two-action policy $\\\pi_t = [\\\frac{1}{t}, 1-\\\frac{1}{t}]$. Then for all $t$, the chain is ergodic. However, the limiting one with $\\\min_t$ is not. \\n \\n2. I also agree that there are many tricks to restrict the policy class to make Assumption 1 possible, but such a restriction will cause additional approximation error in the theoretical analysis, which has not been reflected in the proof. \\n\\n3. Assumption 2: Let the learning rate be $\\\frac{1}{\\\sqrt{t}}$. Then $\\\sum_{t=1}^T \\\delta_\\\pi^t\\\approx \\\sqrt{T}$. Your convergence rate should be $\\\frac{1}{\\\sqrt{T}}$ instead of $\\\frac{1}{T}$. 
I strongly disagree with the author simply assuming this term is finite, since it potentially grows as $T$ increases.\"}", "{\"title\": \"Updated Manuscript\", \"comment\": \"We would like to thank all the reviewers for the thoughtful and detailed feedback on our work. We greatly appreciate the time and effort you have invested in reviewing our submission. We have made modifications to the manuscript to fix typos and incorporate suggestions. We hope our response answers the reviewers' questions and resolves their concerns. If so, we hope the reviewers will reconsider their evaluation. We would be happy to further elaborate on any of the points if the reviewers would like additional details.\"}" ] }
7xJgPtLHfm
Incorporating continuous dependence implies better generalization ability
[ "Guojie Li", "Sheng Ran", "Wuyue Yang", "Liu HONG" ]
When applying deep-learning-based solvers to differential equations, a key challenge is how to improve their generalization ability, so that pre-trained models can be easily adapted to new scenarios of interest. In this paper, inspired by the well-known mathematical statements on the continuous dependence of solutions to ordinary differential equations on initial values and parameters, we make a non-trivial extension of physics-informed neural networks by incorporating additional information on the continuous dependence of solutions (abbreviated as cd-PINN). Our cd-PINN integrates the advantages of neural operators and Meta-PINN, requiring only a few labeled data points while enabling the solution of ordinary differential equations with respect to new initial values and parameters in a fast and accurate way without fine-tuning. As demonstrated through novel examples like the Logistic model, the Lotka-Volterra model, damped harmonic oscillators, and a multiscale model for p53 activation, the accuracy of cd-PINN under those untrained conditions is usually 1-3 orders of magnitude higher than that of PINN. Meanwhile, the GPU time cost for training in the two approaches is comparable. Therefore, we expect our cd-PINN to be particularly useful in improving the efficiency and accuracy of deep-learning-based solvers for differential equations.
[ "Generalization", "Physics Informed Neural Network", "Continuous Dependence", "Ordinary Differential Equations" ]
Reject
https://openreview.net/pdf?id=7xJgPtLHfm
https://openreview.net/forum?id=7xJgPtLHfm
ICLR.cc/2025/Conference
2025
{ "note_id": [ "cvctBN6ZMy", "W1ChCHlHJ0", "HA6LFH9Cnx", "BTuopudlA5", "BBFhfFKMjX", "5Ykp3HLutU" ], "note_type": [ "decision", "official_review", "official_review", "official_review", "official_review", "meta_review" ], "note_created": [ 1737523572376, 1730489883870, 1729018350074, 1730749514635, 1731226771846, 1734725852348 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3376/Reviewer_XYcj" ], [ "ICLR.cc/2025/Conference/Submission3376/Reviewer_RhAr" ], [ "ICLR.cc/2025/Conference/Submission3376/Reviewer_tHxD" ], [ "ICLR.cc/2025/Conference/Submission3376/Reviewer_itQX" ], [ "ICLR.cc/2025/Conference/Submission3376/Area_Chair_Mm3f" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"The paper presents a novel extension of Physics-Informed Neural Networks (PINN), termed continuous dependence PINN (cd-PINN), which aims to enhance the generalization ability of deep learning models applied to differential equations. The authors draw inspiration from the mathematical principle of continuous dependence of solutions on initial values and parameters.\\n\\nWhile the idea is straightforward, it introduces additional regularity in parameter dependence. For example, although the parameter \\\\(\\\\mu\\\\) may be continuous, the loss function could impose differentiability conditions on \\\\(\\\\mu\\\\). This is particularly relevant in cases like hyperbolic conservation laws, where the viscosity coefficient can lead to complexities, such as shock locations, that challenge these assumptions. Please discuss the implications of these assumptions, particularly for systems with discontinuities or shocks. This would help clarify the scope and limitations of the method.\\n\\nThe authors demonstrate that cd-PINN achieves significantly higher accuracy than standard PINN through examples like the Logistic model, the Lotka-Volterra model, and damped harmonic oscillators. 
However, it is worth noting that these examples are relatively simple and may not fully represent the capabilities or limitations of the proposed method in more complex scenarios.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The introduction of cd-PINN as a method that incorporates the mathematical framework of continuous dependence\\n\\nThe paper provides compelling numerical results showing that cd-PINN outperforms standard PINN\", \"weaknesses\": \"The examples selected to demonstrate cd-PINN\\u2019s performance are limited to relatively simple cases. Additionally, while the parameter dependence may be continuous, it does not necessarily have to be differentiable. This distinction is crucial, as the lack of differentiability can present challenges in certain contexts, such as in hyperbolic conservation laws, e.g. Burgers' equation, where discontinuities like shock locations arise. Please discuss how cd-PINN might handle systems with non-differentiable parameter dependence.\", \"questions\": \"Including a logarithmic scale in Figure 4 would be beneficial.\\nPlease give better explanations in your figures of how you compare with other methods\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors presented a novel way of introducing continuous dependence into the pinns framework. Specifically for ODEs, they introduced a nn framework with loss designed tailored to the continuous dependence on the initial condition and the hyperparameters. They validated the proposed approach on a range of numerical examples.\\n\\nWhile the proposed method is interesting, the referee finds it hard to generalize to more challenging cases, and the literature review is not sufficient. Please see the detailed comments below.\\n\\n1. 
The framework is designed and tested for ODEs, but with PDEs, can you reach comparable performance and speed-accuracy tradeoffs?\\n2. The parametric ODEs considered are not challenging enough. Say the parameters induce singularities or multiscale behavior in the ODEs; can the proposed method still work?\\n3. There has been work in the literature incorporating a derivative (smoothness) loss into the loss function and finding that the NNs identify smoother functions. I was wondering how the proposed continuity loss compares.\\n4. Please use more challenging examples.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Please see the section above.\", \"weaknesses\": \"Please see the section above.\", \"questions\": \"Please see the section above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"In this work, the author proposes a continuous constraint for PINNs, called cd-PINN. This approach allows the ODE problem to continuously depend on its initial conditions and input parameters, thereby transforming it into a family of problems suitable for operator learning. The author shows a theorem on local existence and uniqueness. 
Experiments on various ODEs demonstrate that cd-PINN performs better on previously unseen conditions.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The author proposes an innovative formulation that converts the ODE to continuously depend on its input parameters.\", \"This approach combines operator learning with PINN.\", \"The author provides theoretical results demonstrating the well-defined nature of the simulation.\", \"Three ODE systems are considered in the study, showing that performance on untrained conditions is typically 1-3 orders of magnitude higher than with standard PINN.\"], \"weaknesses\": [\"### Methods\", \"This formulation seems to only work for inputs in \\\\(\\\\mathbb{R}\\\\) or low-dimensional cases, which limits its applicability to PDE problems, where the input is high dimensional.\", \"The assumption of continuity may not hold in chaotic systems, such as the Lorenz system.\", \"Overall, the results are not very significant.\", \"### Writing\", \"The input parameter \\\\(\\\\mu\\\\) could benefit from further discussion. It would be helpful to include an example alongside Equation (1).\", \"It would improve clarity to separate the previous formulation of PINN from the newly proposed cd-PINN. Equations (5) and (6) should be moved to a dedicated subsection for PINN.\", \"For PDEs, it would be beneficial to cite PINO, which combines neural operators with PINNs using pre-training and fine-tuning:\", \"Li, Zongyi, et al. 
\\\"Physics-informed neural operator for learning partial differential equations.\\\" *ACM/JMS Journal of Data Science*\"], \"questions\": [\"Could the author give better motivation of application of cd-PINN beyond PINN?\", \"Can cd-PINN improve the accuracy or convergence rate compared to PINN give a fixed parameters?\", \"Is it possible to apply the cd-PINN and viscosity in Burgers equation or Reynolds number in Navier-Stokes?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents a novel approach, called cd-PINN (continuous dependence physics-informed neural network), aimed at enhancing the generalization ability of deep-learning-based solvers for ordinary differential equations (ODEs). Building on the principle of continuous dependence of ODE solutions on initial values and parameters, cd-PINN extends traditional physics-informed neural networks (PINNs) by incorporating this property to improve performance. By combining neural operators and Meta-PINN techniques, cd-PINN achieves accurate solutions for ODEs with new initial values and parameters, requiring minimal labeled data and no fine-tuning. Experimental results on models like the Logistic model, Lotka-Volterra system, and damped harmonic oscillators show cd-PINN's accuracy is 1-3 orders of magnitude higher than traditional PINNs, with comparable GPU training time. This approach promises enhanced efficiency and accuracy for deep-learning solvers in differential equation applications.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"**1. Improved Generalization**: By integrating continuous dependence on initial values and parameters, cd-PINN enhances the ability to generalize across different scenarios without additional fine-tuning, which is crucial for solving ODEs in varied applications.\\n\\n**2. 
Efficient Data Use**: cd-PINN requires minimal labeled data, reducing the reliance on large datasets while still achieving high accuracy, making it suitable for scenarios where data collection is costly or limited.\\n\\n**3. Comparable Training Efficiency**: Despite these improvements in accuracy and generalization, cd-PINN maintains a GPU time cost similar to that of standard PINNs, which suggests it could be implemented without extra computational overhead.\", \"weaknesses\": \"Although the paper proposes an interesting strategy, it falls short on several points:\\n\\n**1. Lack of Model Details and Hyperparameter Settings**\\n\\nThe paper does not provide sufficient detail on the architecture and hyperparameter configurations of cd-PINN, such as the number of layers, model parameters, or training configurations. Given that cd-PINN is proposed as a robust solution for a wide variety of differential equations, these details are essential for reproducibility and for evaluating the computational cost of deeper or larger models. Including these specifications would enable researchers to replicate results and better understand how model depth and complexity impact generalization and accuracy.\\n\\n**2. Limited Scope in Differential Equation Types: Absence of Fundamental PDEs and Comparison with Neural ODEs**\\n\\nThe current focus on ordinary differential equations (ODEs) in the paper feels restrictive, especially given the model\\u2019s \\u201cPINN\\u201d designation, which implies potential applicability to a broader class of partial differential equations (PDEs) involving other derivatives, such as convection-diffusion-reaction (CDR) equations. Extending the study to include such fundamental PDEs would demonstrate cd-PINN\\u2019s flexibility and relevance in more complex, real-world applications. 
Furthermore, comparing cd-PINN\\u2019s performance with Neural ODE models would clarify its advantages and limitations, as Neural ODEs are another prominent approach for learning solutions to ordinary differential equations. Including these comparisons would provide a more comprehensive picture of where cd-PINN stands in relation to existing methods.\\n\\n**3. Extrapolation Capabilities**\\n\\nWhile cd-PINN claims to learn a continuous flow over the parameters of the ODE, it is unclear whether the model performs well on unseen initial values and parameters. If cd-PINN can indeed generalize to genuinely novel scenarios (e.g., significantly out-of-distribution conditions), this would represent a major strength. Several ablation studies on these cases would highlight cd-PINN's performance and applicability. Furthermore, a discussion on whether cd-PINN can reliably extrapolate beyond trained distributions would add clarity regarding its practical usability in diverse differential equation settings.\", \"questions\": \"Please refer to the weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The work proposes a method for combining PINNs with operator learning by allowing the network to depend continuously on a parameter of the system. Numerical experiments are carried out on a few simple ODE examples.\", \"additional_comments_on_reviewer_discussion\": \"The scope of the method seems quite limited, with the parametric dependence being only finite-dimensional (as opposed to functional) and the application examples being only simple ODE models. Claims that all parametric dependences are continuous are, of course, false, and the focus should be on systems where this does hold (as an assumption) as well as on PDE extensions.\"}" ] }
7xCSK9BLPy
Better Instruction-Following Through Minimum Bayes Risk
[ "Ian Wu", "Patrick Fernandes", "Amanda Bertsch", "Seungone Kim", "Sina Khoshfetrat Pakazad", "Graham Neubig" ]
General-purpose LLM judges capable of human-level evaluation provide not only a scalable and accurate way of evaluating instruction-following LLMs but also new avenues for supervising and improving their performance. One promising way of leveraging LLM judges for supervision is through Minimum Bayes Risk (MBR) decoding, which uses a reference-based evaluator to select a high-quality output from amongst a set of candidate outputs. In the first part of this work, we explore using MBR decoding as a method for improving the test-time performance of instruction-following LLMs. We find that MBR decoding with reference-based LLM judges substantially improves over greedy decoding, best-of-N decoding with reference-free judges and MBR decoding with lexical and embedding-based metrics on AlpacaEval and MT-Bench. These gains are consistent across LLMs with up to 70B parameters, demonstrating that smaller LLM judges can be used to supervise much larger LLMs. Then, seeking to retain the improvements from MBR decoding while mitigating additional test-time costs, we explore iterative self-training on MBR-decoded outputs. We find that self-training using Direct Preference Optimisation leads to significant performance gains, such that the self-trained models with greedy decoding generally match and sometimes exceed the performance of their base models with MBR decoding.
[ "LLM", "instruction-following", "test time compute", "decoding", "MBR", "minimal bayes risk", "LLM judges", "self-improvement" ]
Accept (Spotlight)
https://openreview.net/pdf?id=7xCSK9BLPy
https://openreview.net/forum?id=7xCSK9BLPy
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vbV2Jx7i03", "r582V1uSk3", "pkSu7K3zJ7", "nDuN4zaxOj", "eMWHN8Inrq", "Yd7nI0kXN7", "TYTZukSQvG", "OkJasT9B6j", "OW8y3uAWtx", "JAuEer0c0k", "BnuEPxUpKB", "8OkESfhHJO", "7kttxCepUS", "6HBJWm5gXx" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "meta_review" ], "note_created": [ 1732559796424, 1732420101864, 1730692916364, 1730719720416, 1729888646836, 1732742113762, 1732421010494, 1732867724009, 1732420340680, 1737524094434, 1732420829764, 1732867803246, 1732420475216, 1735891974327 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10961/Reviewer_Nmeb" ], [ "ICLR.cc/2025/Conference/Submission10961/Authors" ], [ "ICLR.cc/2025/Conference/Submission10961/Reviewer_hFLa" ], [ "ICLR.cc/2025/Conference/Submission10961/Reviewer_Nmeb" ], [ "ICLR.cc/2025/Conference/Submission10961/Reviewer_Z8nY" ], [ "ICLR.cc/2025/Conference/Submission10961/Authors" ], [ "ICLR.cc/2025/Conference/Submission10961/Authors" ], [ "ICLR.cc/2025/Conference/Submission10961/Authors" ], [ "ICLR.cc/2025/Conference/Submission10961/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10961/Authors" ], [ "ICLR.cc/2025/Conference/Submission10961/Authors" ], [ "ICLR.cc/2025/Conference/Submission10961/Authors" ], [ "ICLR.cc/2025/Conference/Submission10961/Area_Chair_C3DL" ] ], "structured_content_str": [ "{\"title\": \"Keeping the score\", \"comment\": \"Thank you for your response and for conducting the experiments. I appreciate your efforts. I will maintain my current score, as I believe it is already sufficiently high.\"}", "{\"title\": \"Response to Reviewer Nmeb\", \"comment\": \"Thank you for your review! 
We\\u2019re glad to hear you found the approach well-reasoned, the discussion well-organized and comprehensive, and the writing easy to follow.\\n\\n---\\n*Question: Are there any distinctive linguistic features in MBR-decoded outputs? How does it affect diversity, style, or tone?*\\n\\nThis is a great question! \\n\\nOne linguistic feature that we analyzed in our original submission is generation length (Appendix A.4 and G.6). We find that MBR-decoded outputs tend to be slightly longer than their greedy and BoN counterparts, although they are still far shorter than **Longest** or **Embedder** decoding.\\n\\nIn response to your question, we conducted some further analysis in search of other distinctive linguistic features. We will include these results in the Appendix of our camera-ready manuscript:\\n\\n**Formatting**\\n\\nWe used GPT-4o to classify the outputs of Llama-3-8b-Instruct and Llama-3-70b-Instruct on AlpacaEval with greedy, Prometheus BoN, and Prometheus MBR decoding into \\u201cBullet List\\u201d, \\u201cNumbered List\\u201d, \\u201cBoth\\u201d or \\u201cNeither\\u201d categories, depending on whether a specific kind of list formatting is present in the output. We then computed the percentage of outputs that fall into each category.\\n\\n\\n| Method | Bullet List | Numbered List | Both | Neither |\\n|-------------------|-------------|---------------|-------|---------|\\n| Greedy | 14.2 | 29.1 | 26.4 | 30.2 |\\n| Prometheus BoN | 13.6 | 29.8 | 26.0 | 30.8 |\\n| Prometheus MBR | 13.9 | 30.1 | 28.1 | 27.8 |\\n\\nWe notice that Prometheus MBR uses list formatting more often than Prometheus BoN and greedy decoding, although the difference is quite small.\\n\\n**Lexical Diversity**\\n\\nWe calculate the type token ratio (TTR) of the outputs of Llama-3-8b-Instruct and Llama-3-70b-Instruct on AlpacaEval with greedy, Prometheus BoN and Prometheus MBR decoding. 
This measures the lexical diversity of the resulting outputs (higher => more diverse).\\n\\n| Method | TTR |\\n|------------------|-------|\\n| Greedy | 0.514 |\\n| Prometheus BoN | 0.520 |\\n| Prometheus MBR | 0.521 |\\n\\nAgain we notice only very small differences between the outputs. We note that longer generation lengths may be associated with higher lexical diversity, which may act as a confounder.\\n\\n**Readability**\\n\\nWe compute the Flesch Kincaid readability scores (lower => more readable) of the outputs of Llama-3-8b-Instruct and Llama-3-70b-Instruct on AlpacaEval with greedy, Prometheus BoN and Prometheus MBR decoding. \\n\\n| Method | FK Score |\\n|------------------|----------|\\n| Greedy | 12.13 |\\n| Prometheus BoN | 12.40 |\\n| Prometheus MBR | 12.24 |\\n\\nWe find there to be little difference in readability scores between the various decoding strategies.\\n\\nWe plan to conduct further follow-up work in the future to better understand the linguistic characteristics of MBR decoding. We would love to hear if you have any ideas for experiments we could try.\"}", "{\"summary\": \"The paper presents a novel approach to improving the test-time performance of instruction-following LLMs through the application of Minimum Bayes Risk (MBR) decoding. The authors leverage LLM judges as reference-based evaluators to select high-quality outputs from a set of candidate outputs. They demonstrate that MBR decoding with LLM judges significantly outperforms greedy decoding and other decoding methods without references on benchmarks like AlpacaEval and MT-Bench. Furthermore, the paper explores iterative self-training on MBR-decoded outputs to retain performance improvements while mitigating additional test-time costs. 
The authors find that self-training using Direct Preference Optimisation leads to significant performance gains, matching or exceeding the performance of base models with MBR decoding.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"3\", \"strengths\": \"1. The application of MBR decoding with LLM judges is a creative approach that combines recent advances in LLM evaluation with decoding techniques.\\n2. The experiments are thorough, and the benchmarks used are relevant and well-established in the field.\\n3. This paper also explores the guiding role of MBR decoding in an LLM\\u2019s DPO training, which is inspiring for subsequent model training.\", \"weaknesses\": \"Main weakness:\\n- The presentation lacks clarity and reads more like an experimental report than an academic paper. The methods section is merged with the experiments and results, lacking any formal formulations. For instance, in Section 4.1.2, Iterative DPO on MBR-Decoded Outputs, adding mathematical formulations would improve both understanding and reproducibility of the approach.\", \"others\": [\"This paper only compares some relatively simple decoding methods. If stronger decoding methods such as speculative decoding and Medusa were added, the comparison would be more credible.\", \"The paper could also benefit from a discussion on the computational costs associated with MBR decoding and self-training, especially when scaling to larger models or datasets.\"], \"questions\": [\"**Question 1:** How does the performance of MBR decoding with LLM judges compare to other state-of-the-art decoding methods beyond those presented in the paper?\", \"**Question 2:** Can the authors elaborate on any potential negative impacts of using LLM judges, such as the risk of overfitting to the judge's biases?\", \"**Question 3:** The hyperparameter N~cond~ is set to 12 for generating candidates for DPO but increased to 30 during decoding. 
How did the authors determine these values, and how do they align with the experiment illustrated in Figure 2?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes an approach for selecting one of N generation hypotheses using a Minimum Bayes Risk (MBR) method. MBR decoding alone results in improved performance. However, as MBR decoding is resource-intensive and may not be practical for real-life applications, an alternative use case\\u2014self-training with DPO on preference pairs selected via MBR\\u2014also demonstrates performance gains.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"The paper explores the use of previously developed approaches\\u2014specifically, Minimum Bayes Risk (MBR)\\u2014for selecting a generation hypothesis at the decoding stage in LLMs. The MBR-selected hypothesis is treated as the one with the highest average utility according to a specified utility metric. The choice of this approach is well-reasoned, with a clear motivation behind it.\", \"The experimental setup is solid. The paper first demonstrates improvements using MBR decoding. Given its high computational cost, the paper then explores self-training with MBR-decoded outputs, which also leads to improvements. The evaluation is conducted on two standard benchmarks (Alpaca-Eval 2.0 and MT Bench), with comparisons across different decoding approaches and a variety of utility metrics used in MBR. The discussion of results is well-organized, providing clear and comprehensive takeaways.\"], \"weaknesses\": \"The paper is well-written and easy to follow. The reviewer does not identify any major weaknesses.\", \"questions\": \"Are there any distinctive linguistic features in MBR-decoded outputs? 
How does it affect diversity, style, or tone?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper explores MBR decoding with reference-based LLM judges for selecting one of n outputs for instruction finetuned LLMs at inference time.\\nThe paper uses Prometheus2, LLama3, and JudgeLM models as a judge for Llama 2 & 3 models and evaluates on AlpacaEval and MT-Bench, comparing MBR decoding with best-of-n inference, and other one-best decoding strategies, and explore non-LLM utility functions for MBR as well.\\nThey find gains across the bench, with smaller judges also being successful as utility function in MBR decoding for guiding larger LLMs. MBR decoding is furthermore combined with self-training (DPO) to yield further gains and overcome the added decoding costs.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. The experiments are thorough (and well documented), including multiple sizes of learner and judge models, and multiple utility functions for comparison.\\n2. The analyses are interesting and well presented, explained and visualized. Important ablations, such as hypotheses size and temperature, are included.\\n3. The distillation solution is very attractive for boosting performance in practice without sacrificing inference time, which is contributing to a timely trend. There's a good chance this approach will be added to everyone's box of inference tricks.\\n4. Compute costs are well discussed and addressed with the distillation solution.\\nSolid and interesting review of past works, especially about Machine Translation where MBR decoding has already become popular.\", \"weaknesses\": \"1. The method is not particularly novel, as MBR decoding with LLMs has been studied in the past (as discussed in Related Work), but not with an LLM as judge.\\n2. 
Given the novelty being the use of a judge to find the consensus output, I would love to see more analysis on the dependence of the LLM judge\\u2019s quality. There is a notable gap with other utility functions, so it would be helpful to understand where they diverge. See questions below.\", \"questions\": \"1. Could the LLM model itself be used as a judge? In the spirit of self-improvement.\\nJinnai et al. evaluated their utility metrics on benchmarks without LLM judge. How would LLM judges perform there? (e.g. machine translation, summarization)\\n2. Could there be some kind of overfitting/bias towards the GPT-4o judge that is dominating the win-rates in the comparison (the \\u201ccloser\\u201d the judge to GPT-4o, the better the MBR)? In the extreme case, what if GPT-4o was used as a judge - I\\u2019m sure that would beat even Prometheus. Perhaps one could compare the outputs selected by the evaluated judges to GPT-4o decisions to find out if the ranking of win-rates corresponds to agreeing with GPT-4o.\\n3. I would love to see some qualitative analysis on how BoN and MBR generation selection differ with the same judge. The paper explains that the smoothing effect of MBR might be helpful to find the best generation, but I would like this to be made a little more built out and supported by examples.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer Nmeb\", \"comment\": \"Understood. Thank you for your timely response and for your review!\"}", "{\"title\": \"Response to Reviewer Z8nY (2/2)\", \"comment\": \"*Question 2: Could there be some kind of overfitting/bias towards the GPT-4o judge that is dominating the win-rates in the comparison (the \\u201ccloser\\u201d the judge to GPT-4o, the better the MBR)? In the extreme case, what if GPT-4o was used as a judge - I\\u2019m sure that would beat even Prometheus. 
Perhaps one could compare the outputs selected by the evaluated judges to GPT-4o decisions to find out if the ranking of win-rates corresponds to agreeing with GPT-4o.*\\n\\n---\\n\\n**Could there be some kind of overfitting/bias towards the GPT-4o judge that is dominating the win-rates in the comparison (the \\u201ccloser\\u201d the judge to GPT-4o, the better the MBR)?**\\n\\nThis is a good point and definitely a reasonable concern. \\n\\nTo ameliorate this concern, we conducted a human study (Appendix H, included in both the original and updated manuscript) to rule out the possibility of win-rates increasing solely due to overfitting to the GPT-4o judge. From our human study, we find that humans generally prefer MBR decoded outputs more than BoN or greedy outputs, suggesting that the increase in win-rates is due to true quality improvements.\\n\\nWe also tried searching for distinctive linguistic characteristics in the MBR-decoded outputs relative to the greedy outputs, and documented our findings in our response to Reviewer Nmeb. We find none of the common characteristics indicative of reward hacking behavior (e.g. significantly increased verbosity, use of certain formatting tricks), which further suggests that MBR decoding with Prometheus does not overfit to a particular quirk of the GPT-4o judge and is instead associated with genuine quality improvements.\\n\\nPlease see our response to Reviewer hFLa for further discussion on related ideas.\\n\\n---\\n\\n**Perhaps one could compare the outputs selected by the evaluated judges to GPT-4o decisions to find out if the ranking of win-rates corresponds to agreeing with GPT-4o.**\\n\\nThis is a great idea. \\n\\nWe used GPT-4o as a reference-free judge to score the outputs of Llama-3-8b-Instruct (N_cand = 30, t = 0.7) on the first turn of MT-Bench. Outputs were scored on a scale of 1 - 10. 
We used a GPT-4o judge temperature of 0.5 and generated three scores per generation, taking as our final scores the average of the three sampled scores. Then, for every sample, we computed the Spearman\\u2019s rank correlation between the GPT-4o scores and the MBR scores:\\n\\n| Method | Avg. Delta over Greedy | Avg. Corr |\\n|-------------------------|-----------------------|-----------|\\n| Prometheus-7b | 0.28 | 0.119 |\\n| Prometheus-8x7b | 0.39 | 0.136 |\\n| JudgeLM-7b | 0.22 | 0.053 |\\n| JudgeLM-33b | 0.31 | 0.113 |\\n| Llama-3-8b-Instruct | 0.28 | 0.116 |\\n| Llama-3-70b-Instruct | 0.41 | 0.144 |\\n\\nWe find that stronger judges (higher Avg. Delta over Greedy) are generally associated with slightly better correlation with GPT-4o scores, although the absolute value of this correlation is not very high. This suggests that our judges are unlikely to be overly biased towards GPT-4o, although stronger judges will generally agree with GPT-4o more.\\n\\nWe will add these findings to our camera-ready paper.\\n\\n---\\n\\n*Question 3: I would love to see some qualitative analysis on how BoN and MBR generation selection differ with the same judge. The paper explains that the smoothing effect of MBR might be helpful to find the best generation, but I would like this to be made a little more built out and supported by examples.*\\n\\nWe agree that this would be very beneficial! Please see our response to Reviewer Nmeb for an overview of our attempts to search for obvious linguistic characteristics that set BoN and MBR decoded outputs apart.\"}", "{\"title\": \"We eagerly await your response\", \"comment\": \"Dear Reviewer hFLa,\\n\\nWe have responded to your review of our work and have updated our manuscript. 
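The per-sample rank-correlation analysis described in the response above can be sketched in pure Python; the score lists at the bottom are hypothetical stand-ins for real judge ratings, not values from the paper:

```python
def average_ranks(xs):
    # 1-based ranks, averaging ties (standard for Spearman's rho)
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        # extend j over any run of tied values
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of the tied positions, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(a, b):
    # Spearman's rho = Pearson correlation of the rank vectors
    ra, rb = average_ranks(a), average_ranks(b)
    n = len(ra)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    sa = sum((x - ma) ** 2 for x in ra) ** 0.5
    sb = sum((y - mb) ** 2 for y in rb) ** 0.5
    return cov / (sa * sb)

# hypothetical per-candidate scores for one MT-Bench sample
mbr_scores = [6.2, 7.1, 5.8, 7.9, 6.5]    # e.g. Prometheus utilities
gpt4o_scores = [6.0, 7.5, 6.1, 8.2, 6.4]  # e.g. averaged GPT-4o ratings
print(spearman(mbr_scores, gpt4o_scores))  # close to 0.9 for these toy scores
```

In practice one would compute this per prompt and average across the benchmark, as the table above does with its "Avg. Corr" column.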
We greatly appreciate the time you have taken to help us improve our paper!\\n\\nWith the author-discussion period drawing to a close, we would be grateful if you could respond to our rebuttal - your feedback is crucial to the progress of our work and we would like our paper to be in the best possible shape before the discussion period closes.\"}", "{\"title\": \"Response to Reviewer hFLa (1/2)\", \"comment\": \"Thank you for your review! We\\u2019re glad to hear you find our approach creative, our experiments thorough, and our benchmarks well-selected. We respond to your concerns in the response below, and we would be happy to engage in further discussion!\\n\\n---\\n*Main weakness: The presentation lacks clarity and reads more like an experimental report than an academic paper. The methods section is merged with the experiments and results, lacking any formal formulations. For instance, in Section 4.1.2, Iterative DPO on MBR-Decoded Outputs, adding mathematical formulations would improve both understanding and reproducibility of the approach.*\\n\\nTo improve understanding and reproducibility of MBR distillation (iterative DPO), we have now included a mathematical formulation for it in Appendix I.2 and have referenced this in Section 4.1.2 - please see our updated manuscript. We hope this provides sufficient clarity regarding our distillation method.\\n\\nAs our main contribution is exploring the use of an LLM as a judge for MBR decoding, we provide a formal, mathematical description of MBR decoding in the Background section (Section 2) rather than in the experimental sections (Sections 3 and 4). We hope this provides sufficient clarity regarding MBR Inference.\\n\\nTo add further clarity on both MBR Inference and MBR Distillation, we have also added to Appendix I detailed algorithm descriptions of both methods. 
Please let us know if there is anything more specific you\\u2019d like to see added!\\n\\nOn the structuring of our paper: as our work is composed of two experimental sections - MBR Inference (Section 3) and MBR Distillation (Section 4) - it is difficult for us to maintain entirely separate methods and results sections. Within each experimental section, however, we have kept methods (3.1, 4.1), experimental results (3.2, 4.2) and further experiments (3.3) separate. We hope this is still able to provide a good level of clarity. Please let us know if you have any suggested formatting changes!\\n \\n---\\n\\n*Other weakness: This paper only compares some relatively simple decoding methods. If some better decoding methods such as speculative decoding and medusa can be added, the method will be more credible.*\\n\\nThank you for bringing up acceleration methods such as speculative decoding - this is a very interesting idea that we did not originally consider looking at in the context of MBR decoding.\\n\\nMethods like speculative decoding and Medusa are used to increase autoregressive decoding speed without impacting output quality, and therefore can be used alongside MBR decoding, which selects over decoded sequences to improve the final output quality.\\n\\nTo demonstrate this, we applied speculative decoding to the LLM judge decoding step and measured changes to decoding speed. We used Llama-3-70b-Instruct as the MBR LLM judge, using ibm-fms/llama3-70b-accelerator (https://huggingface.co/ibm-fms/llama3-70b-accelerator) as the draft model and 100 prompts randomly sampled from AlpacaEval as the dataset. We used Llama-2-7b-chat as the generator model, without speculative decoding. All inference was done using 8xA100 GPUs.\\n\\n| Method | Avg time per generation (s) | Tokens / s | GPT-4 score |\\n|-------------------------|-----------------------------|------------|-------------|\\n| Vanilla MBR | 59.3 | 4.48 | 6.83 |\\n| MBR + Spec. 
Decoding | 52.9 | 5.21 | 6.90 |\\n\\nFrom our results above, we see that speculative decoding can be used to improve MBR decoding speed with no loss in performance. We note however that MBR decoding is typically compute bound (as we do batch inference during decoding), so the speed increase is smaller than what you might see in the memory bound case (small batch sizes, where speculative decoding is most useful). We will incorporate discussion of these results in the camera-ready paper. \\n\\n---\\n\\n*Other weakness: The paper could also benefit from a discussion on the computational costs associated with MBR decoding and self-training, especially when scaling to larger models or datasets.*\\n\\nWe discuss the computational costs associated with MBR decoding and self-training in Section 4.2, where we demonstrate experimentally that (1) MBR decoding at inference time incurs a significant cost, largely associated with the decoding (utility metric calculation) step and (2) self-training mitigates this cost entirely, enabling the model to achieve greedy-decoding levels of throughput (tokens / s) with MBR decoding levels of performance. Please let us know if there is anything further you\\u2019d like us to discuss.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Spotlight)\"}", "{\"title\": \"Response to Reviewer Z8nY (1/2)\", \"comment\": \"Thank you for your review! We are glad to hear you find the experiments thorough, the analysis interesting, and the distillation solution timely. We respond to your concerns in the response below, and we would be happy to engage in further discussion!\\n\\n---\\n*Question 1: Could the LLM model itself be used as a judge? In the spirit of self-improvement. Jinnai et al. evaluated their utility metrics on benchmarks without LLM judge. How would LLM judges perform there? (e.g. machine translation, summarization)*\\n\\n---\\n\\n**Could the LLM model itself be used as a judge? 
In the spirit of self-improvement.**\\n\\nThis is a great question. There are definitely strong hints that a strong LLM could be used as the judge both for MBR inference and distillation. \\n\\nIn Section 3.3.1, we use Llama-3-70b-Instruct as both the judge and the generator LLM, and find (Table 3) that both MBR decoding and BoN decoding yield gains over greedy decoding (8.29 -> 8.35 for BoN and 8.29 -> 8.52 for MBR on MT-Bench). \\n\\nAdditionally (new in the updated manuscript), we use Llama-3-8b-Instruct as both the judge and the generator LLM (see Table 3 in the updated paper), and find that both MBR decoding and BoN decoding yield gains over greedy decoding (7.54 -> 7.60 for BoN and 7.54 -> 7.80 for MBR on MT-Bench). \\n\\nBoth results demonstrate that the generator and judge LLM can be the same LLM, and that MBR decoding is still better than BoN decoding in this setup.\\n\\nWe also consider using Llama-3-8b-Instruct in place of Prometheus as the utility metric to self-train Llama-3-8b. We compare this approach to using Prometheus to train Llama-3-8b.\\n\\n| Method | Prometheus | Llama-3-8b-Instruct |\\n|--------------------|------------|---------------------|\\n| SFT | 6.70 | 6.70 |\\n| MBR DPO-1 | 6.94 | 6.99 |\\n| MBR DPO-2 | 7.45 | 7.51 |\\n| MBR DPO-3 | 7.55 | 7.52 |\\n\\nWe see that generator LLMs can themselves be used as MBR judges to improve LLMs. These distillation results have been added to Appendix G.7 in the updated manuscript.\\n\\n---\\n\\n**Jinnai et al. evaluated their utility metrics on benchmarks without LLM judge. How would LLM judges perform there? (e.g. 
machine translation, summarization)**\\n\\nWe ran additional experiments where we evaluated MBR with Prometheus along with greedy decoding and MBR with a task-specific metric for XSUM (summarisation) and WMT-19 Cs-En (translation), using Llama-3-8b-Instruct as the generator model.\\n\\n*XSUM (evaluated with BERTScore)*\\n\\n| Method | Score |\\n|-----------------------|---------|\\n| Greedy | 69.45 |\\n| MBR with ROUGE-L | 69.72 |\\n| MBR with Prometheus | 69.24 |\\n\\n*WMT (evaluated with COMET)*\\n\\n| Method | Score |\\n|-----------------------|---------|\\n| Greedy | 84.40 |\\n| MBR with BLEURT | 84.60 |\\n| MBR with Prometheus | 84.37 |\\n\\nWe find that MBR with Prometheus does not outperform greedy decoding or MBR decoding with task-specific metrics. We suspect that our task-specific metrics (ROUGE-L and BLEURT) are better suited to the closed-form tasks of (extreme) summarisation and translation than Prometheus. For translation, the lack of multilingual capabilities for the Prometheus model might also play a role (Prometheus was only trained on English finetuning data). Nonetheless, we highlight that the differences in performance between these methods are small.\"}", "{\"title\": \"We eagerly await your response\", \"comment\": \"Dear Reviewer Z8nY,\\n\\nWe have responded to your review of our work and have updated our manuscript. 
We greatly appreciate the time you have taken to help us improve our paper!\\n\\nWith the author-discussion period drawing to a close, we would be grateful if you could respond to our rebuttal - your feedback is crucial to the progress of our work and we would like our paper to be in the best possible shape before the discussion period closes.\"}", "{\"title\": \"Response to Reviewer hFLa (2/2)\", \"comment\": \"*Question 1: How does the performance of MBR decoding with LLM judges compare to other state-of-the-art decoding methods beyond those presented in the paper?*\\n\\nWe believe that we have already selected state-of-the-art decoding methods to compare MBR decoding against, including BoN decoding and Universal Self-Consistency (Appendix A.2). If there are any other specific decoding methods we may have missed that you would like us to compare against, please let us know!\\n\\n---\\n\\n*Question 2: Can the authors elaborate on any potential negative impacts of using LLM judges, such as the risk of overfitting to the judge's biases?*\\n\\nThis is a very good point and definitely a possible concern! By using a judge LLM to select outputs, we inherently bias our outputs towards the preferences of the judge LLM. If these preferences are not actually associated with improved quality, then MBR decoding could have a negative overall impact. This is especially true for MBR distillation, as we repeatedly distill these outputs back into the model! We include a separate discussion of this idea in a new Limitations section, which we have added to Appendix J in the updated manuscript.\\n\\nHowever, we do not believe that overfitting to the judge LLMs occurs in our experiments. Firstly, note that we never use the actual MBR judge LLMs to conduct evaluation, and use GPT-4o instead. 
While GPT-4o could possess biases itself, the benchmarks we employed as well as the LLM-as-a-Judge community in general use proprietary LLMs such as GPT-4o as a judge in practice based on the observation that it holds high correlation with human judgments [1][2]. Hence, we emphasize that this is a broader problem for the evaluation community in general, not a specific problem for MBR. Note that we nonetheless take steps to ensure that our gains are real and not the result of overfitting to GPT-4o by conducting a human study on our MBR and BoN decoded outputs (Appendix H). We find that human judges, like GPT-4o, also rate Prometheus MBR outputs more highly than Prometheus BoN or greedy outputs.\\n\\nFinally, we note that overfitting to the utility metric is a problem known in the MBR literature, and is not specific to LLM judges [3][4][5]. It is also known from prior work in translation that this overfitting problem is worse for BoN decoding (known as Quality Estimation in the translation literature) than for MBR decoding. Assuming that findings from machine translation translate well to instruction-following, one advantage of MBR decoding with LLM judges could be that it in fact overfits comparatively less to the judges' biases than BoN decoding!\\n\\n\\n\\n[1] Dubois et al. 2024. Length-Controlled AlpacaEval: A Simple Way to Debias Automatic Evaluators.\\n\\n[2] Zheng et al. 2024. Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena.\\n\\n[3] M\\u00fcller et al. 2021. Understanding the Properties of Minimum Bayes Risk Decoding in Neural Machine Translation.\\n\\n[4] Freitag et al. 2022. High Quality Rather than High Model Probability: Minimum Bayes Risk Decoding with Neural Metrics.\\n\\n[5] Fernandes et al. 2022. Quality-Aware Decoding for Neural Machine Translation.\\n\\n---\\n\\n*Question 3: The hyperparameter Ncond is set to 12 for generating candidates for DPO but increased to 30 during decoding. 
How did the authors determine these values, and how do they align with the experiment illustrated in Figure 2?*\\n\\nThank you for the question! \\n\\nFrom the N_cand curves (right side of Fig. 2), we see that setting N_cand above 10 already recovers most of the gains associated with MBR and BoN inference. In order to balance performance and compute cost, we therefore chose N_cand = 12 for our self-training experiments. As the objective of this section is to demonstrate that MBR distillation is a promising method for self-training, we do not feel that expending considerably more train-time compute to achieve the best possible results is necessary when using a lower N_cand already yields significant gains.\\n\\nAs for our MBR and BoN inference experiments - we could certainly lower N_cand from 30 and still achieve strong performance gains. However, the objective of this section is to understand the full potential of MBR inference performance, so we do feel that using a larger N_cand is necessary. We have modified Section 4.1.2 to explain our selection of N_cand =12 for distillation more clearly. Please see our updated manuscript.\"}", "{\"metareview\": \"This paper introduces a novel application of Minimum Bayes Risk (MBR) decoding to enhance the test-time performance of instruction-following LLMs. LLM judges (including Prometheus2, Llama3, and JudgeLM) are used as reference-based evaluators and select high-quality outputs from a set of candidates generated by Llama 2 & 3 models. Evaluations on AlpacaEval and MT-Bench demonstrate that MBR decoding with LLM judges significantly outperforms greedy decoding and other reference-free decoding strategies, even with smaller LLMs acting as judges. The paper explores iterative self-training on MBR-decoded outputs. 
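The self-training recipe discussed in this thread — DPO on preference pairs selected via MBR — can be sketched as follows. Here `generate` and `utility` are hypothetical stand-ins for the sampled generator and the LLM judge, and pairing the top-scoring candidate against the bottom-scoring one is an illustrative choice, not necessarily the authors' exact recipe:

```python
def mbr_scores(candidates, utility):
    # average utility of each candidate against all other candidates
    return [
        sum(utility(h, r) for r in candidates if r is not h) / (len(candidates) - 1)
        for h in candidates
    ]

def build_dpo_pairs(prompts, generate, utility, n_cand=12):
    # For each prompt: sample n_cand candidates, score them with MBR,
    # and pair the top-scoring candidate (chosen) against the
    # bottom-scoring one (rejected) for DPO training.
    pairs = []
    for prompt in prompts:
        cands = [generate(prompt) for _ in range(n_cand)]
        scores = mbr_scores(cands, utility)
        ranked = sorted(zip(scores, range(len(cands))))
        pairs.append({
            "prompt": prompt,
            "chosen": cands[ranked[-1][1]],
            "rejected": cands[ranked[0][1]],
        })
    return pairs
```

The resulting `prompt`/`chosen`/`rejected` dictionaries match the usual DPO training format; iterating this procedure (regenerating candidates with the updated model) gives the multi-round "MBR DPO-1/2/3" setup reported in the tables above.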
MBR decoding is also combined with self-training (DPO) to yield further gains and overcome the added decoding costs.\\n\\nThere is consensus among the reviewers that the experimental setup and experiments are thorough. The majority of the reviewers agree that the paper is well written, clear to follow, and motivation and results are presented well. One of the concerns raised was the need for comparisons with other decoding methods which is addressed by the authors with speculative decoding results in the rebuttal. Another suggestion was to include a discussion on the computational cost which the authors have included in section 4.2 of the paper and few more details in the rebuttal. Reviewer Z8nY points out that the paper lacks analysis of the dependence of the LLM judge\\u2019s quality. The authors rebut with a detailed analysis of this by answering the questions asked by the reviewer. With its solid experimental setup, clear writing, and a rebuttal addressing all key concerns, I am recommending this paper for acceptance.\", \"additional_comments_on_reviewer_discussion\": \"See above.\"}" ] }
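The core selection rule discussed throughout the thread above — pick the candidate with the highest average utility against all other candidates — can be sketched as follows. The unigram-F1 utility is a toy stand-in for the LLM judge (Prometheus, JudgeLM, etc.) used in the actual experiments:

```python
def mbr_select(candidates, utility):
    # Minimum Bayes Risk decoding: return the consensus candidate, i.e.
    # the one with the highest mean utility against every other candidate.
    def avg_utility(h):
        return sum(utility(h, r) for r in candidates if r is not h) / (len(candidates) - 1)
    return max(candidates, key=avg_utility)

def unigram_f1(hyp, ref):
    # toy utility: unigram F1 overlap (a real setup would score hyp
    # against ref with an LLM judge instead)
    h, r = set(hyp.split()), set(ref.split())
    if not h or not r:
        return 0.0
    p, q = len(h & r) / len(h), len(h & r) / len(r)
    return 0.0 if p + q == 0 else 2 * p * q / (p + q)

cands = ["the cat sat on the mat", "the cat sat on a mat", "quantum flux capacitor"]
print(mbr_select(cands, unigram_f1))  # the outlier candidate loses to the consensus
```

Best-of-N decoding differs only in the scoring step: it ranks each candidate by a single reference-free judge score rather than by its average agreement with the other candidates.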
7wuJMvK639
Hierarchical World Models as Visual Whole-Body Humanoid Controllers
[ "Nicklas Hansen", "Jyothir S V", "Vlad Sobal", "Yann LeCun", "Xiaolong Wang", "Hao Su" ]
Whole-body control for humanoids is challenging due to the high-dimensional nature of the problem, coupled with the inherent instability of a bipedal morphology. Learning from visual observations further exacerbates this difficulty. In this work, we explore highly data-driven approaches to visual whole-body humanoid control based on reinforcement learning, without any simplifying assumptions, reward design, or skill primitives. Specifically, we propose a hierarchical world model in which a high-level agent generates commands based on visual observations for a low-level agent to execute, both of which are trained with rewards. Our approach produces highly performant control policies in 8 tasks with a simulated 56-DoF humanoid, while synthesizing motions that are broadly preferred by humans. Code and videos: https://www.nicklashansen.com/rlpuppeteer
[ "reinforcement learning", "world model", "humanoid" ]
Accept (Poster)
https://openreview.net/pdf?id=7wuJMvK639
https://openreview.net/forum?id=7wuJMvK639
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zm0ek5dafZ", "zbkzduyTOr", "yZr7wBytIv", "yMPQ9756s3", "y6ra8KpCFC", "y5Xt6FX9qQ", "xd5fQASxUp", "wNHlljjE59", "tOz1sFV2UK", "qIViUTrxwv", "pDSnODW2Je", "oieqYx7FuP", "mwdAn7ovF5", "hkHhHGBNgB", "gMzZfQlueR", "fwCgIQ6fIv", "fsOEh0glYU", "cAC20za7rW", "Z8JGf380em", "Wg6sNHTGhu", "UTtHx9saUR", "TuYiSAPwf3", "TShzGtDWxj", "O6g5hLzmvK", "LU7GULcqZ7", "JLhyohwGJT", "IH7pBIiY8D", "ID2Bz3z3vP", "Ft43iiMVGR", "Fj5HKWOm9T", "EM1UX7XCxA", "CuZELd9Jyl", "Cthn38uncT", "CP7qEek2Z7", "AKvZS3Lu4W", "9h9Lfe7mYP", "9MHJlHim1B", "9CJfueV2qA", "8Fb92Zkq4X", "6FD294tdRJ", "1nIBmpFKbj" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "meta_review", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1732480654337, 1732609353354, 1732391229285, 1733142012611, 1732246967475, 1733125573161, 1732521661867, 1732410204978, 1733126405592, 1732247581286, 1732602280926, 1732247400609, 1733151295470, 1730524872984, 1732480684026, 1732514946630, 1732489973946, 1732905455403, 1732735341421, 1732480718370, 1730672692009, 1733020383478, 1732247709682, 1734922065548, 1737523674438, 1732247353673, 1732730973521, 1733208215259, 1733170314990, 1733208033644, 1732498715246, 1732804823069, 1733126056335, 1732248243556, 1730774508585, 1732573797312, 1732576148867, 1733151434125, 
1732815208683, 1730683757871, 1733025010810 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4970/Authors" ], [ "ICLR.cc/2025/Conference/Submission4970/Reviewer_EisB" ], [ "ICLR.cc/2025/Conference/Submission4970/Reviewer_HShf" ], [ "ICLR.cc/2025/Conference/Submission4970/Reviewer_C5cF" ], [ "ICLR.cc/2025/Conference/Submission4970/Authors" ], [ "ICLR.cc/2025/Conference/Submission4970/Authors" ], [ "ICLR.cc/2025/Conference/Submission4970/Reviewer_2bf6" ], [ "ICLR.cc/2025/Conference/Submission4970/Authors" ], [ "ICLR.cc/2025/Conference/Submission4970/Authors" ], [ "ICLR.cc/2025/Conference/Submission4970/Authors" ], [ "ICLR.cc/2025/Conference/Submission4970/Authors" ], [ "ICLR.cc/2025/Conference/Submission4970/Authors" ], [ "ICLR.cc/2025/Conference/Submission4970/Area_Chair_kWAX" ], [ "ICLR.cc/2025/Conference/Submission4970/Reviewer_EisB" ], [ "ICLR.cc/2025/Conference/Submission4970/Authors" ], [ "ICLR.cc/2025/Conference/Submission4970/Authors" ], [ "ICLR.cc/2025/Conference/Submission4970/Reviewer_2bf6" ], [ "ICLR.cc/2025/Conference/Submission4970/Authors" ], [ "ICLR.cc/2025/Conference/Submission4970/Authors" ], [ "ICLR.cc/2025/Conference/Submission4970/Authors" ], [ "ICLR.cc/2025/Conference/Submission4970/Reviewer_2bf6" ], [ "ICLR.cc/2025/Conference/Submission4970/Authors" ], [ "ICLR.cc/2025/Conference/Submission4970/Authors" ], [ "ICLR.cc/2025/Conference/Submission4970/Area_Chair_kWAX" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4970/Authors" ], [ "ICLR.cc/2025/Conference/Submission4970/Authors" ], [ "ICLR.cc/2025/Conference/Submission4970/Authors" ], [ "ICLR.cc/2025/Conference/Submission4970/Authors" ], [ "ICLR.cc/2025/Conference/Submission4970/Authors" ], [ "ICLR.cc/2025/Conference/Submission4970/Reviewer_EisB" ], [ "ICLR.cc/2025/Conference/Submission4970/Reviewer_2bf6" ], [ "ICLR.cc/2025/Conference/Submission4970/Authors" ], [ "ICLR.cc/2025/Conference/Submission4970/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission4970/Reviewer_HShf" ], [ "ICLR.cc/2025/Conference/Submission4970/Authors" ], [ "ICLR.cc/2025/Conference/Submission4970/Authors" ], [ "ICLR.cc/2025/Conference/Submission4970/Area_Chair_kWAX" ], [ "ICLR.cc/2025/Conference/Submission4970/Authors" ], [ "ICLR.cc/2025/Conference/Submission4970/Reviewer_C5cF" ], [ "ICLR.cc/2025/Conference/Submission4970/Authors" ] ], "structured_content_str": [ "{\"title\": \"A gentle reminder\", \"comment\": \"Dear reviewer C5cF,\\n\\nThe end of the discussion period is rapidly approaching, and we would really appreciate it if you had a moment to look over our response and changes to the manuscript. We believe that your main concerns are addressed by our response, but if not we will be more than happy to work with you on further addressing them. Thanks again for your time and for serving as a reviewer!\"}", "{\"comment\": \"Thanks for your reply!\\n\\nAs the paper is designed for \\\"whole-body humanoid control\\\", I still believe it is important to showcase the proposed method's performance in manipulation task. Could you design one simpliest manipulation task and demonstrate the motion generated by the proposed method has better qualities and more natural movements than traditional RL baselines? I will raise the score to acceptance if the results are provided.\"}", "{\"title\": \"Thank you\", \"comment\": \"Thank you for the detailed response. The authors have addressed the majority of my concerns and I have raised my evaluation accordingly to accept.\"}", "{\"title\": \"Reviewer Comment\", \"comment\": \"Thanks for adding additional results to address the reviewers' questions. All my concerns are addressed. I am raising my score accordingly.\"}", "{\"title\": \"Thank you!\", \"comment\": \"We thank the reviewer for their valuable feedback. We address your comments in the following.\\n\\n----\\n\\n**Q:** Lack of low-level tracking performance evaluation. 
There is no evaluation or metrics for the tracking accuracy or success rate.\\n\\n**A:** While we predominantly focus on the utility of the low-level tracking agent for downstream tasks, we agree that a more thorough evaluation of the tracking performance itself would be informative. We include 4 new evaluations: (1) a collection of tracking visualizations for readers to qualitatively evaluate our approach, (2) success rate as defined by [1] Luo, Z., Cao, J., Kitani, K., & Xu, W. (2023), (3) average tracking error, and (4) CoMic [2] score across all 836 clips. Results are shown below, and have also been added to our revised manuscript along with more precise definitions of the metrics used.\\n\\n| | Success rate (%) \\u2191 | Tracking error \\u2193 | CoMic score \\u2191 |\\n|--------------|--------------------|------------------|---------------|\\n| Offline only | 6.2 | 0.503 | 42.6 |\\n| 5% data | 74.4 | 0.260 | 45.4 |\\n| 25% data | 79.6 | 0.225 | 46.3 |\\n| 75% data | 87.9 | 0.202 | 47.4 |\\n| Ours | **88.3** | **0.187** | **48.7** |\\n\\nWe also appreciate the additional references; they are now cited in the related work section of our updated manuscript. Thanks again for your suggestions!\\n\\n[2] Hasenclever, L., Pardo, F., Hadsell, R., Heess, N., Merel, J., \\u201cCoMic: Complementary Task Learning & Mimicry for Reusable Skills\\u201d, 37th International Conference on Machine Learning (2020)\\n\\n----\\n\\n**Q:** The lack of interface design discussion. [...] To me, the idea of training a low-level tracking policy for skills reuse is a long-standing idea, but the interface of this hierarchy matters a lot. I'd love to see more comparison on this.\\n\\n**A:** We agree that the interface between hierarchies is important, and that including a comparison to e.g. an interface based on latent actions (as opposed to our explicit 3D end-effector joint commands) would be informative. 
We experimented with such an approach in the early stages of our project but found our current approach to be more reliable for whole-body humanoid control. Including such a comparison will take longer than the span of this rebuttal period, but we are committed to adding it for the camera-ready version. Would that address your concern?\\n\\n----\\n\\n**Q:** The source of naturalness is unclear. The low-level tracking policy might be conditioned on human motion pirors, but if the tracking policy is good enough, it should be able to produce what TD-MPC2 achieves. Also, if the advantage of this paper is sample-efficiency and naturalness, a key baseline here would TD-MPC2+ AMP, which is missing.\\n\\n**A:** We agree that further exploring how the data mixture and design choices affect generated motions would be really interesting and informative. If the reviewer believes that this would add significant value to the manuscript, we will be more than happy to add a TD-MPC2+AMP baseline to the camera-ready version, along with an additional ablation of how different mixtures of MoCap data (e.g. excluding all \\u201crunning\\u201d clips from the pretraining data) influence the behavior of learned policies. We would like to point out, however, that the algorithmic complexity of AMP (inverse RL with a learned GAN discriminator + unsupervised skill discovery) is significantly more complicated than our approach (RL without any bells and whistles + no hyperparameter-tuning), and that the original AMP work only achieved downstream task transfer in simple state-based (i.e., no visual inputs) reaching and velocity control tasks while using several orders of magnitude more environment steps to do so. For example, AMP required approx. 800M steps to learn the reaching task, whereas our method learns visuo-motor control tasks such as running in an environment with randomized obstacles in as little as 3M steps.\\n\\n----\\n\\nWe believe that our response addresses your main concerns. 
However, if that is not the case please let us know if you have any additional comments. We would greatly appreciate it if the reviewer could provide us with precise and actionable feedback such that we can fully address your concerns. Thanks again for your time and valuable feedback!\"}", "{\"title\": \"Discussion ends TODAY\", \"comment\": \"Dear reviewer,\\n\\nThis is a final reminder that the discussion period is ending **TODAY**, December 2, and we still have not heard from you. We would really appreciate it if you could take a moment to go through our response + revised manuscript, and consider increasing your score if our rebuttal has addressed your concerns. Thanks again for your time and valuable feedback.\\n\\nBest,\\n\\nAuthors of Puppeteer\"}", "{\"title\": \"Reviewer reply\", \"comment\": \"Thank you for the quick follow-up and clarifications. This baseline is a step in the right direction but is not sufficient on its own as SAC typically requires a substantially higher amount of samples than MBRL methods. Can you please include a dreamer-based baseline? A smaller number of seeds would be acceptable for the short rebuttal period.\"}", "{\"title\": \"Thanks again!\", \"comment\": \"We are happy to hear that our response + changes to the manuscript addresses your concerns, and that you have decided to raise your score. Thanks again for your time and valuable feedback, we really appreciate it!\"}", "{\"title\": \"Discussion ends TODAY\", \"comment\": \"Dear reviewer,\\n\\nThis is a friendly reminder that the discussion period is ending **TODAY**, December 2. We hope that our previous response addresses your concern regarding manipulation tasks to the extent possible, and we would really appreciate it if you could take a moment today to reevaluate our contributions and revised manuscript based on our discussion as well as all the changes that we have made per other reviewers' request. 
Thanks again for your time and valuable feedback!\\n\\nBest,\\n\\nAuthors of Puppeteer\"}", "{\"title\": \"Thank you!\", \"comment\": \"We thank the reviewer for their valuable feedback. We address your comments in the following.\\n\\n----\\n\\n**Q:** the main advantage of this method over TD-MPC2 is the resulting naturalness of the motions. Can the authors elaborate on why they think the proposed method improves this aspect?\\n\\n**A:** We attribute the naturalness of motions to pretraining on a large collection of human MoCap clips. By first training a single, reusable tracking (low-level) agent to track reference motions from this dataset, and then subsequently training a task-specific high-level agent to perform downstream tasks by providing commands to the low-level agent, our method implicitly leverages the behavioral priors obtained during pretraining. This is, to some extent, validated by our ablations in Figure 8 that vary the number of MoCap clips used during pretraining \\u2013 we observe a strong correlation between number of MoCap clips and tracking performance. Lastly, per request of reviewer HShf we have added 4 new evaluations of our low-level tracking agent and its relevant ablations, which are included in our reply here [https://openreview.net/forum?id=7wuJMvK639&noteId=y6ra8KpCFC](https://openreview.net/forum?id=7wuJMvK639&noteId=y6ra8KpCFC) as well as in Appendix B of our updated manuscript.\\n\\n----\\n\\n**Q:** can the authors elaborate on how the low-level policies exactly track the high-level commands? 
Since the low-level receives a sequence of commands does it keep using $c_t$ until the tracking error is below a threshold, or is it only used for a single step independent of the outcome of applying the one-step action?\\n\\n**A:** The low-level agent is provided a command (or sequence of commands) by the high-level policy at every step, and both levels will replan a sequence of high-level commands + low-level actions regardless of whether the previous command was achieved or not. This design decision is consistent with the pretraining phase in which the tracking agent is trained to track a reference motion. We have added the sentence \\u201cThe high-level policy outputs commands at a fixed frequency regardless of whether the previous command was achieved\\u201d to Section 3.2 (L215-216) which can be found in the updated manuscript. We hope that this makes it clearer, and thank the reviewer for pointing it out.\\n\\n----\\n\\n**Q:** on the methods side of things, the paper extends td-mpc2 to a hierarchical architecture. Can the authors compare the method to the hierarchical version of Dreamer [1]?\\n\\n**A:** We agree that this would be an interesting comparison in principle. However, our empirical results in Figure 5 indicate that DreamerV3 does not achieve any meaningful performance even on the state-based humanoid control tasks (same as SAC), while TD-MPC2 converges in <1M environment steps. These results are in line with concurrent work [2] that also finds TD-MPC2 to outperform DreamerV3 by a large margin in state-based humanoid control. We thus decided to proceed with TD-MPC2 as our base algorithm in the remainder of our experiments, which are all vision-based. 
We will be happy to include results for a hierarchical DreamerV3 in the camera-ready version if the reviewer believes that this will be informative, but we do not expect results to change much compared to the non-hierarchical version.\\n\\n[2] Sferrazza, C., Huang, D., Lin, X., Lee, Y., Abbeel, P., \\u201cHumanoidBench: Simulated Humanoid Benchmark for Whole-Body Locomotion and Manipulation\\u201d (2024)\\n\\n----\\n\\nWe believe that our response addresses your main concerns. However, if that is not the case please let us know if you have any additional comments. We would greatly appreciate it if the reviewer could provide us with precise and actionable feedback such that we can fully address your concerns. Thanks again for your time and valuable feedback!\"}", "{\"title\": \"Thank you for your response\", \"comment\": \"Thank you for your response! We really appreciate your feedback and engagement during this discussion period. We respond to each of your comments in the following.\\n\\n> I believe HumanoidBench (https://arxiv.org/abs/2403.10506) is not a concurrent work\\n\\nHumanoidBench is still an unpublished work and only a preprint was made available on arXiv (concurrent with the development of our benchmark and method). Regardless, while we agree that HumanoidBench is an interesting resource for robot learning research, it does not consider visual observations (which is a key focus of our approach) and the Unitree H1 robot used in HumanoidBench has significantly fewer DoF than the humanoid model that we consider in this work. The same is true for the HumanoidOlympics benchmark that was referenced in your original review. 
Additionally, the primary contribution of these papers is the simulation environments that they propose, whereas our paper contributes both a set of visuo-motor control tasks *and* a new RL algorithm for visual whole-body humanoid control.\\n\\n> and this paper should include manipulation tasks, as they would be much more suitable and easier to evaluate motion naturalness than the locomotion tasks designed in the paper.\\n\\nRespectfully, does the reviewer have any scientific evidence that human-like locomotion is a less important problem than human-like manipulation? We ask the reviewer to please judge our work based on scientific merit rather than personal preference or feelings.\\n\\n> the authors do not really compare the motion clips produced by this method with the real motion capture data\\n\\nWhile it is true that we do not compare to the real MoCap clips in our user study, this is primarily because the focus of our study is natural motion in *downstream tasks* rather than imitation of human clips. There are no suitable MoCap clips available for the types of tasks that we consider in this work, hence their exclusion from our user study. That said, we do compare our method against real MoCap clips in the pretraining stage, both qualitatively (clips are available on our project page under \\\"tracking results\\\") as well as quantitatively (success rate, average tracking error, and CoMic score). We observe that e.g. availability of more pretraining data results in more accurate tracking of reference motions from real humans.\"}", "{\"title\": \"Thank you! 
Part 2\", \"comment\": \"**Q:** Is the low-level tracking effectively transferred to control different humanoid models with varying degrees of freedom?\\n\\n**A:** The low-level tracking agent takes current joint positions as input and outputs delta joint positions for each of the 56 actuated joints in our humanoid model, so we doubt that a trained agent would be able to control a different humanoid model without finetuning. However, we do believe that our tracking agent would be able to control a variety of different embodiments when either trained from scratch on the new embodiment or with some amount of finetuning. Given that the hierarchical interface is 3D end-effector positions, it is possible that the high-level agent would be able to control a new humanoid model without any additional training, as long as a new low-level tracking agent is obtained for that model. This would definitely be an interesting direction for future research.\\n\\n----\\n\\nWe believe that our response addresses your main concerns. However, if that is not the case please let us know if you have any additional comments. We would greatly appreciate it if the reviewer could provide us with precise and actionable feedback such that we can fully address your concerns. Thanks again for your time and valuable feedback!\\n\\n(2/2)\"}", "{\"comment\": \"As the authors point out, the discussion period ends today. Please be sure to read their most recent posts, which address your questions about the experimental setup.\\n\\nBest,\\\\\\nAC\"}", "{\"summary\": \"The paper presents a novel hierarchical world model Puppeteer designed for visual whole-body humanoid control, which operates without relying on predefined skill primitives, reward designs, or temporal abstractions. 
This paper also introduces a new task suite consisting of eight challenging tasks for evaluating humanoid control, demonstrating that Puppeteer produces more natural and human-like motions preferred by human evaluators in comparison with model-free and model-based RL baselines.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"This paper is well-organized, well-written, and includes clear figures.\", \"A well-designed hierarchical control framework (although the idea is not novel) is implemented to control humanoid motion in a more natural way: the high-level agent generates commands given visual observations, and the low-level agent is responsible for executing them.\", \"The proposed visual whole-body high-dimensional humanoid control benchmark enriches the evaluation platforms in the area.\"], \"weaknesses\": [\"The paper primarily evaluates the visuo-locomotive capabilities of the humanoid model. It could be better to expand the range of tasks to include more diverse scenarios that test different aspects of humanoid capabilities.\", \"More baseline methods should be compared, like HumanoidOlympics (https://arxiv.org/pdf/2407.00187). This paper also uses human motion data and reinforcement learning to train natural humanoid motions in various tasks.\"], \"questions\": [\"Although the motion produced by Puppeteer is more natural, why does the humanoid robot always lean forward while moving?\", \"Can the proposed method be validated in real-world experiments?\", \"Can this framework be generalized to manipulation tasks in HumanoidOlympics?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"A gentle reminder\", \"comment\": \"Dear reviewer 2bf6,\\n\\nThe end of the discussion period is rapidly approaching, and we would really appreciate it if you had a moment to look over our response and changes to the manuscript. 
We believe that your main concerns are addressed by our response, but if not we will be more than happy to work with you on further addressing them. Thanks again for your time and for serving as a reviewer!\"}", "{\"title\": \"We appreciate your feedback\", \"comment\": \"Thank you for the constructive feedback, we really do appreciate your time and effort! It appears that your primary concern at this point is our choice of baselines. We would like to take a moment to clarify why we picked the current baselines and ablations, as well as provide new empirical results along the lines of what you are suggesting.\\n\\n**Choice of baselines:** Our goal is to learn visual whole-body humanoid controllers that perform well (data-efficient and strong asymptotic performance) *and* produce natural motions. We first benchmarked three common algorithms for continuous control: SAC, DreamerV3, and TD-MPC2, and found that only TD-MPC2 achieved non-trivial performance. Based on our ablations in Figure 8, it appears that the planning component of TD-MPC2 is critical to performance on these high-dimensional tasks, which could explain the poor performance of SAC and DreamerV3 as neither of them uses planning. We thus decided to use TD-MPC2 as the backbone learning algorithm at both levels of our hierarchical approach. Regarding your specific comment that\\n\\n> the advantage of using a TD-MPC style high-level planner is unclear\\n\\nwe would like to again reference our planning ablation in Figure 8; this result clearly demonstrates that performance of our method degrades substantially if planning is disabled at either level (i.e., planning at the high-level is necessary). This brings us to our second point.\\n\\n**New empirical results:** To further confirm the importance of a TD-MPC style high-level planner, we run a new set of experiments that use the **same** low-level tracking agent as our approach (based on TD-MPC2) but replace the high-level TD-MPC2 agent with a SAC policy. 
A revised Figure 5 (learning curves) can be found [on this link](https://i.imgur.com/tcxIyqT.png); we denote this ablation as \\\"SAC w/ LL. TD-MPC2\\\" and run a full set of experiments (10 seeds) on the three state-based tasks *stand*, *walk*, and *run*. Similar to the non-hierarchical SAC baseline and our non-planning ablation, the high-level SAC agent fails to achieve any meaningful performance on these three tasks, and often suffers from numerical instability (divergence). In light of these results, we do not believe that a baseline with SAC at both levels will provide any additional insights. We will be very happy to include the equivalent \\\"DreamerV3 w/ LL. TD-MPC2\\\" ablation as well, but this will take a little while longer since DreamerV3 runs significantly slower than SAC.\\n\\nWe have updated our manuscript to include these new results, as well as a description of the added baseline on L315-317. We hope that our response helps address your concern regarding baselines! Either way, thanks again for your time and valuable feedback.\"}", "{\"title\": \"Reply to authors\", \"comment\": \"Thank you for your rebuttal. Unfortunately, the current form of the paper and the answers from the rebuttal fail to convey the contribution of this work. The paper mainly advances the naturalness of the humanoid motion. The reason for the emergence of this more natural motion (according to the authors) is the pretraining of the low-level tracking agent on a human MoCap dataset, which represents expert interactions with a very narrow distribution. Without extensive comparison of the proposed approach to other hierarchical frameworks that should equally leverage the MoCap dataset, the value of this work is unclear. Model-based methods are typically advantageous due to their sample efficiency. However, it is unclear whether this advantage remains in the presence of a dataset that can facilitate pretraining a low-level tracking policy. 
Such low-level trackers reduce the complexity of the problem and the advantage of using a TD-MPC style high-level planner is unclear. I advise the authors to include a comparison with other hierarchical frameworks based on Dreamer, as well as model-free methods. Otherwise, the value of this work is unclear. The work looks at an interesting problem and could potentially make a nice contribution. However, in its current form, it is not ready for publication and I still propose rejecting it unless the mentioned baselines are included in the comparison.\"}", "{\"title\": \"Final baseline results\", \"comment\": \"Dear reviewer 2bf6,\\n\\nThanks again for all the constructive feedback and discussion so far.\\n\\nAll 30 runs of the new DreamerV3 w/ low-level TD-MPC2 baseline have now completed. The complete results are available [on this link](https://i.imgur.com/ShPGaJo.png). We believe that our conclusions summarized in [our previous comment](https://openreview.net/forum?id=7wuJMvK639&noteId=IH7pBIiY8D) still hold true: *a TD-MPC2 backbone (which uses planning) is crucial to performance in whole-body humanoid control, at every level of the (hierarchical) architecture.*\\n\\nBased on our logged training metrics, the instability of DreamerV3 on the *stand* and *walk* tasks appears to be due to divergence of the policy and critic networks of DreamerV3. This is a common phenomenon with high-dimensional continuous action spaces, and SAC is equally prone to such instabilities. Looking at individual seeds on the *stand* task, 6 seeds of DreamerV3 briefly experience signs of learning followed by training divergence, and the remaining 4 seeds fail to learn at all. 
The original DreamerV3 paper primarily considers tasks with discrete action spaces for which this is often less of an issue.\\n\\nWe hope that these two additional baselines, SAC w/ low-level TD-MPC2 and DreamerV3 w/ low-level TD-MPC2, address the reviewer's concerns, and that the reviewer would be willing to reconsider their initial score as a result.\"}", "{\"title\": \"Reminder\", \"comment\": \"Thanks again for your thoughtful review and feedback so far. As the rebuttal period is coming to a close, we would like to encourage further discussion or clarification on any remaining points. We are happy to address any concerns to ensure all perspectives are thoroughly considered.\"}", "{\"title\": \"A gentle reminder\", \"comment\": \"Dear reviewer EisB,\\n\\nThe end of the discussion period is rapidly approaching, and we would really appreciate it if you had a moment to look over our response and changes to the manuscript. We believe that your main concerns are addressed by our response, but if not we will be more than happy to work with you on further addressing them. Thanks again for your time and for serving as a reviewer!\"}", "{\"summary\": \"Post rebuttal | the authors did a good job in the rebuttal phase to include some necessary baselines.\\n\\nThe paper improved a lot during the rebuttal phase and now includes some of the necessary baselines to validate the method. When I asked the authors about measures to ensure fairness in the new baseline experimentation, they argued that the considered MBRL algorithms are known to be robust to hyperparameter changes, which is true but not a valid argument for a simple reason: if an algo is robust to hyperparameter choice, that doesn't mean it cannot benefit from tuning. As I believe the proposed method was subject to some tuning during development time (like any ML method is), measures to ensure fairness of comparison would be necessary. 
Additionally, the baselines added during rebuttal just scratch the surface of hierarchical RL approaches and are themselves made-up baselines combining known high-level algorithms with low-level TD-MPC. While these baselines are very important and necessary to understand the method, including standard baselines from the literature would strongly improve the paper and increase its impact. I highly encourage the authors to do that. I will be raising my score but only to a 5, as I still think the paper is not ready for publication at ICLR, despite the good improvements.\\n\\n---\\n\\nThe paper proposes \\\"Puppeteer\\\", a hierarchical decision-making approach tailored for visual whole-body humanoid control. The proposed method trains two separate world models for high-level and low-level control purposes. The low-level world model is concerned with tracking reference joint-level trajectories produced by the high-level controller. The high-level controller can additionally be conditioned on visual data. Both world models are based on TD-MPC2, which is a sampling-based MPC approach with learned decoder-free world models (with deterministic-only components) and a learned value function for long-horizon value assignment. TD-MPC2 is further extended to include a termination encoder head as is common in other model-based RL methods such as Dreamer. The paper claims that the proposed method achieves results that are mostly comparable to TD-MPC2's results, while the plots show significantly worse results than TD-MPC2 in terms of asymptotic performance. The main advantage of the method is that it produces more natural and human-like motion, which was quite well shown in the experiments. 
The paper also ablates multiple design choices.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"the paper is generally well-written and an enjoyable read, I also liked the figures and plots.\", \"the proposed approach is very interesting and promising and is a natural next step to extend the TD-MPC2 framework.\", \"the method is evaluated on multiple humanoid tasks including environments with only proprioception as well as others with additional visual observations.\", \"the ablations nicely evaluate the role of the different design choices of the method. I especially appreciate the study of the role of planning in the architecture.\", \"the baselines include model-free and model-based approaches.\"], \"weaknesses\": [\"the method section misses a detailed motivation for why hierarchy improves the naturalness of the motions.\", \"the method section misses a detailed explanation concerning the exact usage of the high-level commands (see question 2).\", \"the paper introduces a hierarchical version of td-mpc2, the baselines however do not include a single hierarchical RL approach, I would at least consider including a hierarchical implementation of dreamer [1].\", \"[main weakness] The results of the paper are weak, at least in the current way in which they are presented. While the method is interesting and makes sense, the results show that it significantly underperforms TD-MPC2 but improves the naturalness of the produced motions. 
That would have been an acceptable tradeoff if the paper could justify why the proposed method improves the naturalness of the motions with intuitions and ideally, some experiments that validate them.\", \"** Minor issues:**\", \"line 079 end-effector joints --> end-effector links.\", \"punctuation is missing in the equations (but I understand that this is a matter of style, so no pressure).\", \"Overall the paper proposes an interesting approach, but it currently fails to showcase the benefits of this approach. I am willing to raise my score if this aspect is properly addressed.\"], \"questions\": [\"the main advantage of this method over TD-MPC2 is the resulting naturalness of the motions. Can the authors elaborate on why they think the proposed method improves this aspect? (here I mean further explaining the reward hacking argument made in the paper and perhaps including other arguments that could make sense)\", \"can the authors elaborate on how the low-level policies exactly track the high-level commands? Since the low-level receives a sequence of commands does it keep using $c_t$ until the tracking error is below a threshold, or is it only used for a single step independent of the outcome of applying the one-step action?\", \"on the methods side of things, the paper extends td-mpc2 to a hierarchical architecture. Can the authors compare the method to the hierarchical version of Dreamer [1]?\", \"[1] Hafner, Danijar, et al. \\\"Deep hierarchical planning from pixels.\\\" Advances in Neural Information Processing Systems 35 (2022)\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Discussion period is ending\", \"comment\": \"Dear reviewer C5cF,\\n\\nAs the discussion period concludes on December 2 (in **2** days!) and we have not heard from you yet, we wanted to kindly remind you to read and reply to our rebuttal as soon as possible. 
We welcome you to share any remaining concerns or feedback. If our responses have adequately addressed your comments, we would greatly appreciate it if you could update your score accordingly.\"}", "{\"title\": \"Thank you!\", \"comment\": \"We thank the reviewer for their valuable feedback. We address your comments in the following.\\n\\n----\\n\\n**Q:** Although the motion produced by Puppeteer is more natural, why does the humanoid robot always lean forward while moving?\\n\\n**A:** It is difficult to give a definitive answer to this question, but we conjecture that it may be one of two reasons, or more likely a combination of the two: (1) leaning forward with the torso creates momentum while being a relatively stable gait so it is possible that the high-level policy learns such behavior for this reason, and (2) there are many instances in the human MoCap dataset where the human leans forward while performing highly dynamic motions, which could bias the low-level tracking policy towards such a gait. For context, we include videos of various reference motions from the MoCap dataset along with our learned tracking policy on our project webpage; you will notice that the human reference motion in clip 9 (bottom row, second to right) leans forward while making a quick turn. We agree that it would be very interesting to study how different mixtures of MoCap data (e.g. excluding all \\u201crunning\\u201d clips from the pretraining data) influence the behavior of learned policies, and we will be happy to include some exploratory results on this in the camera-ready version if the reviewer believes it would add value to the manuscript.\\n\\n----\\n\\n**Q:** Can the proposed method be validated in real-world experiments?\\n\\n**A:** We agree that deploying our method on a real humanoid robot is a natural next step (which we intend to pursue), but believe that it would be outside of the scope of this manuscript. 
Deploying a learned policy on a relatively new hardware platform such as the Unitree H1 / G1 will require substantial hardware engineering and sim-to-real design effort, while our work focuses more on the algorithmic foundations. That said, we do believe deploying our method to real hardware would be possible without significant algorithmic changes.\\n\\n----\\n\\n**Q:** Can this framework be generalized to manipulation tasks in HumanoidOlympics?\\n\\n**A:** Yes, this should indeed be possible, and we are interested in pursuing this in the future. However, we would like to emphasize two things: (1) HumanoidOlympics is a strictly state-based benchmark whereas our method and benchmark are designed specifically for visuo-motor control (image inputs), and (2) both HumanoidOlympics and HumanoidBench (the two alternatives to our benchmark) were developed concurrently with our work.\\n\\n----\\n\\nWe believe that our response addresses your main concerns. However, if that is not the case please let us know if you have any additional comments. We would greatly appreciate it if the reviewer could provide us with precise and actionable feedback such that we can fully address your concerns. Thanks again for your time and valuable feedback!\"}", "{\"metareview\": \"The paper proposes Puppeteer, a hierarchical reinforcement learning (RL)-based framework for whole-body control of humanoid robots based upon visual observations. The high-level model generates reference commands for a pre-trained low-level policy responsible for trajectory tracking. The experimental analysis emphasizes the \\\"naturalness\\\" of the resulting motion, with performance gains over contemporary baselines.\\n\\nThe paper was reviewed by four referees, who, despite attempts by the AC, were unable to come to a consensus on the merits of the paper. Two reviewers (HShf and C5cF) appreciated the broader advancements that the paper provides to humanoid control. 
Reviewers 2bf6 and EisB found that it is well written and a pleasure to read. Reviewers HShf and EisB appreciated Puppeteer's hierarchical approach, but disagreed on its novelty. Meanwhile, at least two reviewers (C5cF and 2bf6) noted that the results show that the TD-MPC2 baseline is comparable to, if not better than, the proposed method. Related, several reviewers emphasized the fact that the evaluation focuses on the \\\"naturalness\\\" of the resulting motion; however, \\\"naturalness\\\" is subjective and does not afford a standard metric. In their response, the authors note the importance of realizing motions that are natural, which the AC agrees with, but they did not address its subjectivity. Related, Reviewer 2bf6 questions the role of hierarchy in ensuring the naturalness of the resulting behavior, which the authors attribute to pretraining the motions on a large-scale collection of human motion capture data. Additionally, some reviewers identified the need for comparisons to other baselines, while others requested an evaluation of low-level tracking performance, which the authors made an effort to address during the rebuttal period by including additional results. \\n\\nThe AC recognizes the importance of realizing humanoid motions that are natural and human-like, particularly for settings where robots are expected to interact with people. That said, the paper would benefit from a more concrete discussion of what novel aspects of the method are integral to realizing natural behaviors. The significance of Puppeteer's contributions in this regard is not clear, particularly in light of the authors' claim that it is due to having pretrained on a large amount of human motion capture data, which one can argue constitutes an incremental contribution.\", \"additional_comments_on_reviewer_discussion\": \"The AC recognizes that some of the reviewers did not participate in the discussion period as soon as one would like. 
Each reviewer ended up responding to the authors' rebuttal, some perhaps at the request of the AC. This is in slight conflict with the authors' claims that Reviewers 2bf6 and EisB were not active in the discussion. While they were not as active as some may have desired, they did respond before the end of the discussion period.\\n\\nMeanwhile, the AC urged the reviewers to try and come to a consensus on the paper. Unfortunately, they remained split on the paper's merits.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Thank you!\", \"comment\": \"We thank the reviewer for their valuable feedback. We address your comments in the following.\\n\\n----\\n\\n**Q:** How relevant is the metric of \\\"naturalness\\\" in real-world humanoid control, and is it sufficient to evaluate humanoid trajectory tracking effectively?\\n\\n**A:** This is a great question. We do believe that policies that behave \\u201cnaturally\\u201d and \\u201chuman-like\\u201d are inherently valuable. For example, the physical safety of a robot and its surroundings is often a priority in robotics applications, which is not unlike when humans perform everyday tasks. It is desirable for humans and robots alike to complete tasks in ways that are efficient yet reliable and safe, and humans do tend to prefer when other agents behave in a predictable manner. This preference is, to some extent, validated in our user study. Biasing an RL policy towards human MoCap data is an effective way to embed behavioral preferences in a humanoid setting. Preferences can in principle be specified via reward engineering, but designing a reward that accurately conveys specific preferences can be very non-trivial compared to a more data-driven approach like ours. We focus on humanoids in this work, but our approach could in principle be applied to other embodiments for which a dataset of reference motions is available. 
At a high level, you can think of our problem setting as conceptually similar to the alignment problem in e.g. LLMs.\\n\\n----\\n\\n**Q:** It would be beneficial to include a comparative study on the survival rate or survival time when using the final-trained model\\n\\n**A:** Agreed! Table 1 of our manuscript already compares gait + average episode length (survival time) over the course of training. Following your suggestion, we now report episode length at 1M environment steps + at convergence. We generally observe that TD-MPC2 achieves similar episode return and survival times as Puppeteer at convergence, but that Puppeteer has significantly higher survival times both throughout training and at the 1M snapshot. We conjecture that the behavioral prior of the low-level tracking agent is especially helpful in the early stages of training.\\n\\n| | Eplen @ 1M \\u2191 | Eplen (final) \\u2191 | Torso height \\u2191 |\\n|-----------|--------------|-----------------|----------------|\\n| TD-MPC2 | 66.9 +- 9.8 | 181.6 +- 28.1 | 85.9 +- 4.7 |\\n| Puppeteer | **115.9 +- 5.2** | 159.3 +- 5.9 | **96.0 +- 0.2** |\\n\\nWe **bold** numbers that are at least one std.dev. apart. We have updated Table 1 with these new metrics in our revised manuscript.\\n\\n----\\n\\n**Q:** It would be helpful to include baseline experiments focused on \\\"zero-shot generalization.\\\"\\n\\n**A:** Great suggestion! We have added baseline results to our zero-shot generalization experiments in Figure 9 of our updated manuscript. The figure is also viewable here [https://i.imgur.com/gP0MLW0.png](https://i.imgur.com/gP0MLW0.png) for your convenience. We observe that Puppeteer generally is more robust to changes in gap length than the TD-MPC2 baseline, which we attribute to its more stable gait. 
Videos of our method performing this task with varying gap lengths are shown on our project webpage.\\n\\n----\\n\\n**Q:** Please provide details on memory usage, training time, and control (or inference) time across different methods.\\n\\n**A:** Thank you for the suggestion! We already provide training times and hardware requirements of Puppeteer in Section 4.1 of our manuscript, but agree that a more comprehensive comparison to baselines would be informative. We have added a comparison of wall-time, inference time, and GPU memory for Puppeteer, TD-MPC2, SAC, and DreamerV3. Overall, wall-time and system requirements of Puppeteer are mostly comparable to that of TD-MPC2 across both state-based and visual RL. SAC runs approx. 3.6x faster than Puppeteer, but does not achieve any meaningful performance on our benchmark; DreamerV3 runs approx. 2.3x **slower** than Puppeteer and also does not achieve any meaningful performance.\\n\\n| | Wall-time (h / 1M steps) | Inference time (ms / step) | GPU memory (GB) |\\n|-----------|--------------------------|----------------------------|-----------------|\\n| SAC | 5.9 (vision: N/A) | 2.2 | 0.4 |\\n| DreamerV3 | 50.2 (vision: N/A) | 18.7 | 3.9 |\\n| TD-MPC2 | 18.6 (vision: 25.2) | 50.8 | 0.5 |\\n| Puppeteer | 21.8 (vision: 29.0) | 88.2 | 0.6 |\\n\\nThese results are now included in Appendix C of our updated manuscript, along with a detailed description of how the numbers are obtained.\\n\\n**Edit:** Added DreamerV3 numbers to the table above.\\n\\n----\\n\\n(1/2)\"}", "{\"title\": \"Added new baseline\", \"comment\": [\"As promised, we would like to update the reviewer with preliminary results for the baseline that consists of a high-level DreamerV3 w/ a low-level TD-MPC2 agent (pretrained as in our method). We run this baseline on three tasks: stand, walk, run with 10 random seeds per task. Results are shown [on this link](https://i.imgur.com/dpPG7fq.png) as well as in Figure 5 of our revised manuscript. 
We have results for up to 1.7M environment steps at the moment (56 hours of training) and expect it to take another 3 days for training to complete. However, we believe that the conclusion is clear: a TD-MPC2 backbone (which uses planning) is crucial to performance in whole-body humanoid control, at every level of the (hierarchical) architecture. We summarize our experimental results wrt this particular contribution as follows:\", \"Single-level SAC and single-level DreamerV3 achieve no meaningful performance on our tasks\", \"Single-level TD-MPC2 achieves good data-efficiency and asymptotic performance but produces highly unnatural motions\", \"High-level SAC + low-level TD-MPC2 achieves no meaningful performance\", \"High-level DreamerV3 + low-level TD-MPC2 achieves non-trivial performance on a limited number of tasks and random seeds but is unstable and prone to divergence\", \"Our method, Puppeteer, achieves strong data-efficiency, asymptotic performance, *and* produces natural and human-like motions\", \"Disabling planning at any level of our hierarchical approach degrades performance significantly (Figure 8)\", \"We hope that these additional baselines (high-level SAC / DreamerV3 + low-level TD-MPC2) address the reviewer's concerns.\", \"----\", \"**Edit:** We have updated the manuscript + link shared here with current \\\"DreamerV3 w/ low-level TD-MPC2\\\" baseline results one last time before paper revision closes. We believe that performance for this baseline is unlikely to improve with additional training but will run it to completion.\"]}
We have worked hard to accommodate the requested changes of all reviewers, and we hope that the reviewer would consider revising their score as a result.\\n\\nBest,\\n\\nAuthors of Puppeteer\"}", "{\"title\": \"Thank you\", \"comment\": \"Thank you for acknowledging our response! We are pleased to hear that our changes have addressed your concerns and that you are willing to raise your score as a result.\"}", "{\"title\": \"Final reminder\", \"comment\": \"Dear reviewer 2bf6,\\n\\nThis is a **final reminder** that the discussion period ends in approx. **6 hours** (Dec 2, anywhere on earth). Since we included the additional baselines requested by the reviewer in our previous response, and the reviewer appears to have no other concerns, we would really appreciate it if they would be willing to acknowledge our response and update their score accordingly.\\n\\nBest,\\n\\nAuthors of Puppeteer\"}", "{\"comment\": \"Thanks for your reply! However, as the authors do not conduct any experiment I proposed in the original review, I change my scores as follows: Rating: 5 and Confidence: 4. I believe HumanoidBench (https://arxiv.org/abs/2403.10506) is not a concurrent work, and this paper should include manipulation tasks, as they would be much more suitable and easier to evaluate motion naturalness than the locomotion tasks designed in the paper.\\n\\nFurthermore, I do not think the results of locomotion naturalness are compelling enough to support the major claims in the paper especially the authors do not really compare the motion clips produced by this method with the real motion capture data in the user study but only compare with regular RL method.\"}", "{\"title\": \"looking forward to the results\", \"comment\": \"Thank you for working hard on implementing some of the proposed baselines. I strongly believe that they are needed to clearly understand your contribution. The results so far are looking very promising and I'm looking forward to the final results. 
Can you please elaborate on the experimental setup for this comparison. Mainly, I'm interested in how much effort was put into hyper-parameter tuning of the baselines in comparison to your method (especially when it comes to things like horizon length, latent space dimensionality/ general network architecture, batch size and learning rates).\"}", "{\"title\": \"Discussion ends TODAY\", \"comment\": \"Dear reviewer,\\n\\nThis is a friendly reminder that the discussion period is ending **TODAY**, December 2. We believe that our responses and additional experimental results address the reviewer's concerns. Since you previously stated that you would be willing to increase your score if we included these additional baselines, we would really appreciate it if you could take a moment to reevaluate our revised manuscript (including the baselines that you suggested along with changes suggested by other reviewers) and consider updating your score accordingly. Thanks again for your time and valuable feedback!\\n\\nBest,\\n\\nAuthors of Puppeteer\"}", "{\"title\": \"General comment\", \"comment\": [\"We thank all reviewers for their thoughtful comments, and are especially pleased to see that the reviewers agree that:\", \"The paper is **well written** and enjoyable to read (reviewers 2bf6, EisB)\", \"Our method is conceptually **simple yet generates natural motions** (reviewers HShf, EisB)\", \"**Integration of vision** in whole-body humanoid control is exciting (reviewers HShf, C5cf, 2bf6, EisB)\", \"Our proposed benchmark **tasks and experiments are interesting** (reviewers HShf, C5cF, 2bf6, EisB)\", \"Our choice of **baselines and ablations are informative** (reviewer 2bf6)\", \"----\", \"### Summary of revisions\", \"We have revised our manuscript based on your feedback \\u2013 a summary of changes is available below. These changes are highlighted (red) in the updated manuscript. 
We have also responded to your individual comments.\", \"**(HShf)** Added 4 new evaluations of the low-level tracking policy: tracking visualizations for qualitative assessment, success rate, average tracking error, and CoMic score. Results are reported in Table 2 for our method + ablations, as well as in [this](https://openreview.net/forum?id=7wuJMvK639&noteId=y6ra8KpCFC) reply to reviewer HShf.\", \"**(C5cf)** Added a new survival time (episode length) metric in downstream task evaluations, reported at 1M steps and at convergence. Results have been added to Table 1, as well as in [this](https://openreview.net/forum?id=7wuJMvK639&noteId=JLhyohwGJT) reply to reviewer C5cf.\", \"**(C5cf)** Added a table with wall-time, inference time, and GPU memory requirements for our method and baselines across both state-based and visual RL tasks from our benchmark. Results are reported in Table 3, as well as in [this](https://openreview.net/forum?id=7wuJMvK639&noteId=JLhyohwGJT) reply to reviewer C5cf.\", \"**(C5cf)** Added baseline results for our zero-shot generalization experiments in Figure 9. We also share the figure here https://i.imgur.com/gP0MLW0.png for your convenience.\", \"**(2bf6)** Added a new baseline that uses the same low-level agent (based on TD-MPC2) as our method but replaces the high-level agent with a **SAC** policy. Results are reported in Figure 5, as well as here https://i.imgur.com/ShPGaJo.png for your convenience.\", \"**(2bf6)** Added a new baseline that uses the same low-level agent (based on TD-MPC2) as our method but replaces the high-level agent with a **DreamerV3** policy. Results are reported in Figure 5, as well as here https://i.imgur.com/ShPGaJo.png for your convenience.\", \"**(2bf6)** Updated Section 3.2 to clarify that the high-level policy acts at a fixed frequency regardless of whether previous commands were achieved.\", \"----\", \"Again, we thank the reviewers for their constructive feedback. 
We have made a significant effort to address all comments made by reviewers, and we hope that reviewers will take our responses into consideration when deciding whether to revise their score. If any reviewer believes that their concerns have not been addressed by our rebuttal, we would be very happy to work with them to address any further comments.\", \"Thank you for serving as a reviewer!\", \"Best,\", \"Authors of Puppeteer\"]}", "{\"summary\": \"The paper proposes a hierarchical world model for whole-body humanoid control based on RL. The framework separates high-level and low-level control, with a high-level puppeteering agent providing commands for a pre-trained low-level tracking agent, which executes detailed joint movements. Key contributions include a task suite for visual humanoid control, a hierarchical control model using RL without pre-defined reward designs, metrics for \\\"naturalness\\\" in motion, and thorough analysis through ablation studies and user preference tests.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The hierarchical world model, which integrates high-level visual guidance with low-level proprioceptive control, is novel in its simplicity and efficacy, especially in achieving natural motion without predefined rewards or skill primitives.\\n\\n2. Puppeteer advances visual whole-body humanoid control by setting new standards for naturalness and efficiency in motion synthesis. The zero-shot generalization to unseen tasks demonstrates the model\\u2019s potential for practical application.\", \"weaknesses\": \"1. Lack of low-level tracking performance evaluation. There is no evaluation or metrics for the tracking accuracy or success rate. There are several works both from the simulated avatars community [1,2] and real-world humanoids [3,4] that evaluate the tracking performance. 
I am surprised that these works are not mentioned and their metrics are not used for evaluation in this work.\\n\\n\\n[1] Luo, Z., Cao, J., Kitani, K., & Xu, W. (2023). Perpetual humanoid control for real-time simulated avatars. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 10895-10904).\\n\\n[2] Won, J., Gopinath, D., & Hodgins, J. (2022). Physics-based character controllers using conditional vaes. ACM Transactions on Graphics (TOG), 41(4), 1-12.\\n\\n[3] Cheng, X., Ji, Y., Chen, J., Yang, R., Yang, G., & Wang, X. (2024). Expressive whole-body control for humanoid robots. arXiv preprint arXiv:2402.16796.\\n\\n[4] He, T., Luo, Z., Xiao, W., Zhang, C., Kitani, K., Liu, C., & Shi, G. (2024). Learning human-to-humanoid real-time whole-body teleoperation. arXiv preprint arXiv:2403.04436.\\n\\n\\n2. The lack of interface design discussion. The paper proposes to use a high-level controller to generate positions of tracking keypoints. This might be one way to reuse the low-level skills for downstream tasks, but there are many existing designs in prior works [5] [6] that are not compared in this work. To me, the idea of training a low-level tracking policy for skills reuse is a long-standing idea, but the interface of this hierarchy matters a lot. I'd love to see more comparison on this.\\n\\n[5] Tessler, C., Kasten, Y., Guo, Y., Mannor, S., Chechik, G., & Peng, X. B. (2023, July). Calm: Conditional adversarial latent models for directable virtual characters. In ACM SIGGRAPH 2023 Conference Proceedings (pp. 1-9).\\n\\n[6] Luo, Z., Cao, J., Merel, J., Winkler, A., Huang, J., Kitani, K., & Xu, W. (2023). Universal humanoid motion representations for physics-based control. arXiv preprint arXiv:2310.04582.\\n\\n3. The source of naturalness is unclear. The low-level tracking policy might be conditioned on human motion priors, but if the tracking policy is good enough, it should be able to produce what TD-MPC2 achieves. 
Also, if the advantage of this paper is sample-efficiency and naturalness, a key baseline here would be TD-MPC2 + AMP, which is missing. That being said, all the experimental results make sense to me, but the key comparison experiments are missing somehow.\", \"questions\": \"All my questions are listed in the weakness part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}
In particular, the TD-MPC2 paper benchmarks SAC and Dreamer-V3 on DMControl (including humanoids albeit with a lower DoF than we consider in this work) and find that DreamerV3 and SAC mostly have comparable data-efficiency but that DreamerV3 tends to converge to a higher asymptotic performance: approx. 750 vs. 600 mean reward on DMControl excluding Humanoid tasks, and approx. 850 vs. 500 reward on the Humanoid Walk task of DMControl. For reference, TD-MPC2 achieves approx. 900 reward on Humanoid Walk in 2M environment steps, whereas DreamerV3 achieves its 850 reward at around 12M environment steps. These numbers are echoed by Humanoid-Bench, which likewise benchmark SAC, DreamerV3, TD-MPC2 on state-based humanoid locomotion tasks. On this benchmark, TD-MPC2 performs significantly better than either method both in terms of asymptotic performance and data-efficiency, and SAC + DreamerV3 generally have similar convergence rate but DreamerV3 has higher asymptotic performance across the board compared to SAC. Our current results on our proposed benchmark are in line with previous results. We observe that both our hierarchical approach and single-level TD-MPC2 solves humanoid control tasks in <3M environment steps, whereas SAC and DreamerV3 fail to learn within 3M steps.\"}", "{\"comment\": \"Dear Reviewer EisB,\\n\\nAs the discussion period ends today, please be sure to read and reply to the authors' response to your request for results involving a manipulation task.\\n\\nBest,\\\\\\nAC\"}", "{\"title\": \"Re: hyper-parameters\", \"comment\": \"We are pleased to hear that these additional baselines seemingly address your remaining concerns. We are more than happy to elaborate on the experimental setup.\\n\\nDreamerV3 and TD-MPC2 are, by design, robust to choice of hyper-parameters and do not require tuning for individual domains/tasks. 
These are excerpts from their respective abstracts:\\n\\nDreamerV3:\\n\\n> Robustness techniques based on normalization, balancing, and transformations enable stable learning across domains. [...] Our work allows solving challenging control problems without extensive experimentation, making reinforcement learning broadly applicable.\\n\\nTD-MPC2:\\n\\n> We demonstrate that TD-MPC2 improves significantly over baselines across 104 online RL tasks spanning 4 diverse task domains, achieving consistently strong results with a single set of hyperparameters.\\n\\nWe briefly discuss this in Section 4.1 Experimental Details L319-324 of our paper:\\n\\n> Both our method and baselines use the same hyperparameters across all tasks, as TD-MPC2 and DreamerV3 have been shown to be robust to hyperparameters across task suites (Hansen et al., 2024; Hafner et al., 2023; Sferrazza et al., 2024). For a fair comparison, we experiment with various design choices and hyperparameter configurations for SAC and report the best results that we obtained. We provide further implementation details in Appendix D.\\n\\nwhich we elaborate on in Appendix D:\\n\\n> **Puppeteer.** We base our implementation off of TD-MPC2 and use default design choices and hyperparameters whenever possible. We experimented with alternative hyperparameters but did not\\nobserve any benefit in doing so.\\n\\n> **TD-MPC2.** We use the official implementation available at https://github.com/nicklashansen/tdmpc2, but modify the implementation to support multi-modal observations and termination conditions as discussed in Section 3.\\n\\n> **DreamerV3.** We use the official implementation available at https://github.com/danijar/dreamerv3, and use the default hyperparameters recommended for proprioceptive DMControl tasks. A key selling point of DreamerV3 is its robustness to hyperparameters across tasks (relative to SAC), but we find that DreamerV3 does not achieve any non-trivial performance\\non our task suite. 
While DreamerV3 is a model-based algorithm, it does not use planning, which the ablation in Figure 8 (hierarchical planning) finds to be a key driver of performance in Puppeteer and TD-MPC2.\\n\\n> **SAC.** We benchmark against the implementation from https://github.com/denisyarats/pytorch_sac (Yarats & Kostrikov, 2020) due to its strong performance on lower-dimensional DMControl tasks as well as its popularity among the community. We modify\\nthe implementation to support early termination. We experiment with a variety of design choices\\nand hyperparameters as we find vanilla SAC to suffer from numerical instabilities on our task suite\\n(presumably due to high-dimensional observation and action spaces), but are unable to achieve\\nnon-trivial performance. [...] Design choices and hyperparameters that we experimented with are as follows:\\n\\n| **Design choice** | **Values** |\\n|-----------------------|------------------------------|\\n| Number of Q-functions | 2,5 |\\n| TD-target | Default, REDQ (Chen et al., 2021) |\\n| Activation | ReLU, Mish, LayerNorm + Mish |\\n| MLP dim | 256, 512, 1024 |\\n| Batch size | 256, 512 |\\n| Learning rate | 3 \\u00d7 10\\u22124, 1 \\u00d7 10\\u22123 |\\n\\nWe use the same hyperparameters and experimental setup in the hierarchical versions of SAC and DreamerV3 as in the single-level versions.\\n\\nWe hope that this clears up any confusion regarding the experimental setup.\"}", "{\"summary\": \"This paper explores high-dimensional humanoid control from visual observations. Specifically, the proposed approach is based on two RL-trained agent models. High-level agent generates reference trajectories from visual observations. Low-level agent focuses on tracking these trajectories using current low-dimensional state information. 
The proposed method demonstrated enhanced natural motion control of a 56-DoF simulated humanoid, outperforming baseline models according to experimental results and a user study.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The research addresses a significant and practical challenge in generalist agents: controlling a humanoid agent from visual observations using generalizable world models.\\n2. The methodology involves training a low-level agent on trajectory tracking that is adaptable across a range of control tasks, showing promising generalizability.\\n3. A high-level agent controls the humanoid from visual observations, a task-specific but broadly applicable approach in real-world scenarios.\\n4. A user study validates that the proposed method enables more natural humanoid control, which is preferred by participants.\", \"weaknesses\": \"1. The evaluation heavily relies on the \\\"naturalness\\\" of movements, which depends on subjective human judgments of what is considered \\\"human-like.\\\" This criterion, while important, may not fully evaluate the feasibility of such motions in actual humanoid robots, which face different kinematic and dynamic constraints than humans.\\n2. Based on Figure 5, the episodic return of the baseline TD-MPC2 is comparable or superior to the proposed method across most tasks. It would be beneficial to evaluate other performance metrics such as survival rates or survival times on the final-trained model to provide a more comprehensive evaluation.\\n3. The paper claims \\\"Zero-shot generalization to larger gap lengths,\\\" yet does not compare these results with baseline methods. Including comparative generalization data for the baseline TD-MPC method would strengthen claims of superior generalization.\\n4. Minor issue:\\na) Resource Efficiency: The two level agents training approach might require significantly more time and resources than single-agent baselines. 
Comparing memory usage, training duration, and inference times across methods would provide critical insights into the practicality of the proposed method.\\nb) Model Reusability: The low-level tracking agent is described as reusable across tasks but it is unclear if this model is applicable only to 56-DoF humanoids or if it can be adapted to different control dimensions.\\nc) There is a typo in the Problem Formulation section (page 2). The environment transition function should be denoted as L, not S.\", \"questions\": \"1. How relevant is the metric of \\\"naturalness\\\" in real-world humanoid control, and is it sufficient to evaluate humanoid trajectory tracking effectively?\\n2. It would be beneficial to include a comparative study on the survival rate or survival time when using the final-trained model\\n3. It would be helpful to include baseline experiments focused on \\\"zero-shot generalization.\\\"\\n4. Please provide details on memory usage, training time, and control (or inference) time across different methods.\\n5. Is the low-level tracking effectively transferred to control different humanoid models with varying degrees of freedom?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the continued discussion, we really appreciate it. To address your comments:\\n\\n> I still believe it is important to showcase the proposed method's performance in manipulation task\\n\\nWe agree that extending our work to manipulation tasks would be really interesting and a natural next step in whole-body humanoid control research. However, we would like to point out that the human mocap dataset (CMU Motion Capture Database [1], and by extension MoCapAct [2] which retargets the mocap data to our specific humanoid model) that we rely on in this work does not contain any object manipulation. 
While that does not preclude the possibility of pretraining our method on this dataset and transferring it to downstream tasks that involve object manipulation, the pretraining data contains no human prior of how to interact with objects (e.g. grasping). We do not believe that this is a limitation of our method itself, but rather **a limitation of the type of human mocap data that is currently available** to the public.\\n\\n> As the paper is designed for \\\"whole-body humanoid control\\\"\\n\\nWe do believe this to be an accurate claim regardless of whether our benchmark tasks include object manipulation or not. We have demonstrated that our method is able to control a full 56-DoF humanoid model from visual inputs, and produces natural and human-like motions when pretrained on human mocap data. It is evident from our qualitative results (videos are available on our [project webpage](https://rlpuppeteer.github.io)) that our method is capable of controlling *the whole body* and accurately tracks diverse reference motions and poses. While it is unfortunate that no object-centric mocap dataset exists yet for whole-body humanoid control, there is no technical reason for why our method could not be extended to such a setting if it became available.\\n\\nIn summary, we recognize the validity of the reviewer's suggestion yet urge the reviewer to please judge our work based on its technical contributions and scientific merit as a whole. Thanks again for your time and valuable feedback!\\n\\n----\\n\\n[1] Carnegie Mellon University Graphics Lab Motion Capture Database, URL: *http://mocap.cs.cmu.edu* (2003)\\n\\n[2] Wagener, N., Kolobov, A., Frujeri, F. V., Loynd, R., Cheng, C., Hausknecht, M., \\\"MoCapAct: A Multi-Task Dataset for Simulated Humanoid Control\\\", NeurIPS 35 (2022)\"}" ] }
7visV100Ms
Self-Boosting Large Language Models with Synthetic Preference Data
[ "Qingxiu Dong", "Li Dong", "Xingxing Zhang", "Zhifang Sui", "Furu Wei" ]
Through alignment with human preferences, Large Language Models (LLMs) have advanced significantly in generating honest, harmless, and helpful responses. However, collecting high-quality preference data is a resource-intensive and creativity-demanding process, especially for the continual improvement of LLMs. We introduce SynPO, a self-boosting paradigm that leverages synthetic preference data for model alignment. SynPO employs an iterative mechanism wherein a self-prompt generator creates diverse prompts, and a response improver refines model responses progressively. This approach trains LLMs to autonomously learn the generative rewards for their own outputs and eliminates the need for large-scale annotation of prompts and human preferences. After four SynPO iterations, Llama3-8B and Mistral-7B show significant enhancements in instruction-following abilities, achieving over 22.1% win rate improvements on AlpacaEval 2.0 and ArenaHard. Simultaneously, SynPO improves the general performance of LLMs on various tasks, validated by a 3.2 to 5.0 average score increase on the well-recognized Open LLM leaderboard.
[ "preference optimization", "synthetic data", "LLM alignment" ]
Accept (Poster)
https://openreview.net/pdf?id=7visV100Ms
https://openreview.net/forum?id=7visV100Ms
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zgrpzVqhwQ", "yGseG97yBv", "sWJ6YfZ4Lq", "oZNjYUMc7j", "oNMt2ajUmG", "mmx8XE8OoZ", "mRHePqS843", "h7eLNALk5S", "gOnJ2Iz2qB", "g3iacynXEc", "fGIGE9y6gv", "exfqGoU80G", "eLNyel2sV5", "Y6g0roVbFy", "Xrfxk8oGGW", "WPOe8hzQK5", "VoiDmKc8aO", "SowMInxGdG", "QdzE6WYpRQ", "Qa8WRqFZyY", "QNODZoBq4m", "Nxxu04akCE", "Hyv6NpkuHO", "FRC6QoMR14", "CRWWVMgYnz", "B28AYpeUEM", "AMUcVVVbjH", "7YLNTSe6zM", "6vVcxpCHKW", "6YRJzG9JU3", "693GxxT00V" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "decision", "official_comment" ], "note_created": [ 1732198485715, 1732116285759, 1732115603356, 1732677160488, 1732419801336, 1732506514856, 1732586297127, 1730469469235, 1734595705935, 1732252305939, 1732678508222, 1732116844584, 1732200448072, 1732677319784, 1732116890085, 1732418209353, 1732114618349, 1732114396392, 1732418873403, 1732524987347, 1731094006610, 1732115754381, 1730394072131, 1732586744107, 1732480346381, 1730685102127, 1732501155962, 1732200221304, 1729924899263, 1737523456467, 1732501233577 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1529/Authors" ], [ "ICLR.cc/2025/Conference/Submission1529/Authors" ], [ "ICLR.cc/2025/Conference/Submission1529/Authors" ], [ "ICLR.cc/2025/Conference/Submission1529/Authors" ], [ "ICLR.cc/2025/Conference/Submission1529/Authors" ], [ "ICLR.cc/2025/Conference/Submission1529/Reviewer_d9yw" ], [ "ICLR.cc/2025/Conference/Submission1529/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission1529/Reviewer_LmCF" ], [ "ICLR.cc/2025/Conference/Submission1529/Area_Chair_tZze" ], [ "ICLR.cc/2025/Conference/Submission1529/Authors" ], [ "ICLR.cc/2025/Conference/Submission1529/Reviewer_EKkD" ], [ "ICLR.cc/2025/Conference/Submission1529/Authors" ], [ "ICLR.cc/2025/Conference/Submission1529/Authors" ], [ "ICLR.cc/2025/Conference/Submission1529/Authors" ], [ "ICLR.cc/2025/Conference/Submission1529/Authors" ], [ "ICLR.cc/2025/Conference/Submission1529/Authors" ], [ "ICLR.cc/2025/Conference/Submission1529/Authors" ], [ "ICLR.cc/2025/Conference/Submission1529/Authors" ], [ "ICLR.cc/2025/Conference/Submission1529/Authors" ], [ "ICLR.cc/2025/Conference/Submission1529/Reviewer_LmCF" ], [ "ICLR.cc/2025/Conference/Submission1529/Reviewer_TAJC" ], [ "ICLR.cc/2025/Conference/Submission1529/Authors" ], [ "ICLR.cc/2025/Conference/Submission1529/Reviewer_Ftfv" ], [ "ICLR.cc/2025/Conference/Submission1529/Authors" ], [ "ICLR.cc/2025/Conference/Submission1529/Reviewer_EKkD" ], [ "ICLR.cc/2025/Conference/Submission1529/Reviewer_EKkD" ], [ "ICLR.cc/2025/Conference/Submission1529/Authors" ], [ "ICLR.cc/2025/Conference/Submission1529/Authors" ], [ "ICLR.cc/2025/Conference/Submission1529/Reviewer_d9yw" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1529/Authors" ] ], "structured_content_str": [ "{\"title\": \"Reply to Reviewer EKkD (Part 1/2 )\", \"comment\": \"We sincerely thank Reviewer EKkD for the positive recommendation as well as the valuable suggestions. We really appreciate your kind words that our work is intuitive and has good results. Below we would like to give detailed responses to each of your comments.\\n\\n**Q1: \\u201cHas the synthetic preference data been tested for benchmark data leakage? I didn't see anything suggesting that checks for data leakage were done.\\u201d**\\n\\nThank you for this insightful point. As suggested, we conducted experiments to test for benchmark data leakage. 
\\n\\nSpecifically, we compared the n-grams of our training datasets (seed SFT data, synthetic preference data, and UltraFeedback for reference) with the n-grams of the test set data to identify any overlaps. If any n-gram from a test data entry appears in our dataset n-grams, that entry is marked as leaked, and we calculate the proportion of leaked data for each test dataset. For datasets with candidate answers, we concatenate the question with candidate answers for analysis; for those without candidate answers, we use only the question. Following HELM\\\\[1\\\\], we set the n-gram size to 13.\", \"the_results_are_as_follows\": \"| Data | Arc | HellaSwag | TQA | MMLU | Winogrande | GSM8k | |\\n| :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- |\\n| UltraFeedback (for reference) | 0.00085 | 0.00030 | 0.00122 | 0.00199 | 0.00000 | 0.00531 | |\\n| Seed SFT Data | 0.00085 | 0.00010 | 0.00122 | 0.00036 | 0.00079 | 0.00076 | |\\n| Synthetic Preference Data | 0.00000 | 0.00000 | 0.00122 | 0.00064 | 0.00000 | 0.00000 | |\\n| \\n\\n*(Benchmark data leakage test for Open LLM Leaderboard tasks. UltraFeedback data for reference.)*\\n\\n| Data | AlpacaEval 2.0 | Areana-Hard | MT-Bench |\\n| :---- | :---- | :---- | :---- |\\n| UltraFeedback | 0.00248 | 0.00600 | 0.01250 |\\n| Seed SFT Data | 0.00373 | 0.01200 | 0.01250 |\\n| Synthetic Preference Data | 0.00124 | 0.00800 | 0.00000 |\\n| \\n\\n*(Benchmark data leakage test for instruction-following benchmarks. UltraFeedback data for reference.)*\\n\\n\\n\\nOverall, the overlap between our training datasets (seed data and synthetic preference data) and the test sets is very low, indicating that there is no data leakage issue. \\n\\nInterestingly, **synthetic preference data generally shows even less overlap with the test sets than other data**. \\n\\nWe will incorporate these findings into the revised version of our paper. 
Thank you for highlighting this important aspect, which certainly enhances the robustness of our research.\\n\\n**Q2: \\u201cWhile there's novelty to the pipeline as a whole, the individual components of the pipeline have been known.\\u201d**\\n\\nThe continuous improvement of LLMs has always been a challenging and complex process \\\\[2,3,4\\\\]. In our setting, it mainly involves self-prompt generation, response synthesis, and optimization.\\n- Self-prompt generation: our self-prompt generation method differentiates itself from previous approaches that rely on seed data or larger models by enabling the model to **self-generate** high-quality, diverse prompts based on **random keywords from pretraining corpora**. \\n- Response synthesis: to the best of our knowledge, we are the first to **utilize pre- and post-self-refinement responses to construct synthetic preference pairs**. \\n\\nWhile we adopt SimPO\\u2019s loss function during the optimization phase, the innovations in the first two parts are significant contributions to prompt generation and synthetic preference data.\\n\\n**Q3: Typos**\\n\\nThank you for your meticulous review and for pointing out the typos. Yes, on L247, the (\\\\\\\\sigma) should indeed be (\\\\\\\\beta). We will update the mentioned typos in the latest version.\"}", "{\"title\": \"Reply to Reviewer d9yw (Part 1/3 )\", \"comment\": \"We sincerely thank Reviewer d9yw for your review and are grateful for the time you spent on our submission. We're pleased you find our method effective and well-validated. Below, we provide a point-by-point rebuttal to clarify your concerns.\\n\\n**Q1: Ablation study on the effect of excluding noise keywords**\\n\\nAs suggested, we conduct experiments on the effect of excluding noise keywords. We train and evaluate the Llama3 prompt generator with three random keywords, either including or excluding one noise keyword. 
We calculate the average similarity across the generated prompts using SentenceTransformer, as mentioned in Section 2.2, and employ GPT-4 Turbo for LLM-as-a-Judge evaluation of prompt quality (on a scale of 1 to 10, where 1 represents a prompt that is very unrealistic, unnatural, or unanswerable, and 10 represents a prompt that is very reasonable, realistic, and answerable).\\n\\n|Noise Condition | Avg Similarity | Quality | \\n|-----------------|----------------|---------| \\n| no noise | 0.0572 | 7.92 | \\n| w noise | 0.0574 | 8.99 | \\n|\\n\\n*( Evaluation run on 1k data for each setting, each example contains 3 keywords )*\\n\\nThe results show that the inclusion of noise keywords improves the quality of the self-generated prompts (as LLMs learn to ignore unrelated words), but even without noise keywords, the self-prompt generator can still generate relatively high-quality prompts.\\n\\n \\n\\n**Q2: Analysis of the types of keywords**\\n\\n**1. Extraction Form**\\n\\nCurrently, our extraction form employs a rule-based random selection. Specifically, we use the NLTK toolkit to filter out stop words, extract all noun phrases from the sentences, and remove any preceding articles. We then randomly sample from these phrases. The decision to use noun phrases is based on following preliminary experiments:\\n\\n| Extraction Form | Avg. Similarity | Quality |\\n| :---- | :---- | :---- |\\n| Noun Phrases | 0.0574 | 8.99 |\\n| Verb Phrases | 0.0865 | 8.74 |\\n| Noun \\\\+ Verb Phrases | 0.0610 | 8.67 |\\n| All Phrases | 0.0569 | 8.67 |\\n| Noun Words | 0.0604 | 8.98 |\\n| Verb Words | 0.0894 | 8.76 | \\n| Noun \\\\+ Verb Words | 0.0725 | 8.52 |\\n| All Words | 0.0598 | 8.61 |\\n|\\n\\n*(Note: \\\"Words\\\" refer to single-word keywords, while \\\"Phrases\\\" refer to each keyword can be a phrase comprising 1 to 3 words.)*\\n\\nAs shown, **using phrases rather than limiting keywords to single words generally results in better diversity and quality**. 
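As a concrete reference for the Avg. Similarity numbers reported in these tables: assuming prompt embeddings precomputed with SentenceTransformer, the metric is the mean cosine similarity over all unordered prompt pairs (lower means more diverse). A rough stdlib sketch under that assumption (the function name is ours):

```python
import itertools
import math

def avg_pairwise_similarity(embeddings):
    """Mean cosine similarity over all unordered pairs of prompt embeddings.

    Lower values indicate a more diverse set of generated prompts.
    """
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(x * x for x in b))
        return dot / (norm_a * norm_b)

    pairs = list(itertools.combinations(embeddings, 2))
    return sum(cos(a, b) for a, b in pairs) / len(pairs)
```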
This improvement may be due to phrases incorporating more inductive bias from the pretraining data for prompt generation. Specifically, while random sampling from all phrases or words without filtering is acceptable, it is not as effective as **random sampling from nouns**, as nouns contain the most important information needed to diversify a sentence.\n\nBeyond these evaluation results, we have also manually observed the impact of different extraction forms and found that using phrases as keywords generally resulted in higher quality prompts. Therefore, our experiments utilize randomly sampled noun phrases. We will include these preliminary experiments, which support our choice of keyword extraction methods, in the Appendix to provide further insights into self-prompt generator training.\n\n**2. Order**\n\nWhether keyword order has an influence is a very insightful point. In our current approach, we randomize keyword order. To systematically examine this, we compared results from maintaining the original keyword order (also in the pretraining data) versus randomizing it. The results are shown in the table below.\n\n| Order | Avg. Similarity | Quality |\n| :---- | :---- | :---- |\n| keep original order | 0.0605 | 9.01 | \n| random | 0.0574 | 8.99 | \n|\n\n*( Evaluation run on 1k data for each setting, each example contains 3 keywords )*\n\n\n\n\nThe results indicate that **keyword order has a minimal impact on prompt generation**. Randomization enhances diversity, while preserving the original order retains some semantic context from the pretraining data, yielding slightly higher quality prompts. However, this effect is negligible.\n\n \n**Q3: Vary the number of keywords** \nThanks for the great suggestion. As suggested, we have added experiments that vary the number of keywords, and the results are presented in the following table. 
The experimental findings indicate that for training the self-prompt generator, **using 3-5 keywords strikes a good balance between the diversity and quality of the generated prompts**. Including too many keywords can be detrimental to the quality of the generated prompts.\n\n| \\# of keywords | Avg. Similarity | Quality | |\n| :---- | :---- | :---- | :---- |\n| 1 | 0.0621 | 8.71 | |\n| 3 | 0.0574 | 8.99 | |\n| 5 | 0.0543 | 8.63 | |\n| 10 | 0.0541 | 7.08 | |\n|\n\n*( Evaluation run on 1k data for each setting. )*\"}", "{\"title\": \"Reply to Reviewer Ftfv (Part 1/2 )\", \"comment\": \"We sincerely thank Reviewer Ftfv for your review and are grateful for the time you spent on our submission. We are glad for the acknowledgment that our approach is novel, effective, and practical in tackling a critical challenge in LLM development. Below we would like to give detailed responses to each of your comments.\n\n**Q1: Limited task scope**\n\nIn addition to instruction-following tasks, we have also evaluated the model performance on **six diverse LM Evaluation Harness tasks** (Table 5, including ARC challenge for reasoning and GSM8k for math domain), as well as **six Open LLM Leaderboard tasks** (Table 4, including PROST for reasoning and MathQA for math domain).\n\n\nWe would like to note that SynPO is not designed for complex reasoning, but we agree that comprehensive evaluation across a broader range of tasks would strengthen our findings. To address potential concerns regarding task scope, we have extended our evaluation to include more complex reasoning tasks and specialized domains as suggested. \n\n\n- For complex reasoning tasks, we have evaluated the model performance on **seven additional reasoning-related tasks** from LM Evaluation Harness. 
The results are presented below: \\n\\n\\n| Model | LogiQA | LogiQA 2.0 | MMLU-Pro| SiQA | QA4MRE | PIQA | NQ Open | \\n|---------------|--------|--------|---------|------------|--------|-------|---------| \\n| Llama3-SFT | 30.88 | 36.39 |36.12 | 46.21 | 46.13 | 81.61 | 11.55 | \\n| SynPO 4 iters | 31.95 | 37.40 |37.28 | 49.18 | 49.30 | 81.83 | 12.80 | \\n|\\n\\n*(Evaluation results on 7 LM Evaluation Harness tasks for reasoning, using Llama3)*\\n\\n\\n- For specialized domains, we evaluated model performance on **four additional tasks from diverse domains** and reported the **domain-specific results on the AGIEval benchmark**: \\n\\n\\n| Model | PubMedQA | RACE | SWAG | EQ-Bench | \\n|---------------|----------|-------|-------|----------| \\n| Llama3-SFT | 73.20 | 44.59 | 76.91 | 46.79 | \\n| SynPO 4 iters | 75.44 | 46.70 | 77.19 | 55.82 | \\n|\\n\\n*(Evaluation results on 4 LM Evaluation Harness tasks for different domains, using Llama3)*\\n\\n\\n| Model | History | Biology | Chemistry | Physics | MathQA | \\n|---------------|---------|---------|-----------|---------|--------| \\n| Llama3-SFT | 38.29 | 34.29 | 24.64 | 32.00 | 24.22 | \\n| SynPO 4 iters | 45.53 | 39.05 | 31.88 | 35.00 | 28.49 | \\n|\\n\\n*(Evaluation results on AGIEval for different domains, using Llama3)* \\n\\n\\n**Specifics for AGIEval:**\\n\\nAGIEval is a comprehensive benchmark specifically designed to assess foundation models in the context of human-centric standardized exams across diverse domains, such as college entrance exams, law school admission tests, math competitions, and lawyer qualification tests.\\n\\n\\n**Other Task Specifics:**\\n\\n- LogiQA: Logical reasoning tasks requiring advanced inference and deduction. \\n\\n- LogiQA 2.0: Large-scale logical reasoning dataset adapted from the Chinese Civil Service Examination. 
\\n\\n- MMLU-Pro: A refined set of MMLU, integrating more challenging, reasoning-focused questions and expanding the choice set from four to ten options.\\n\\n- SiQA: Social Interaction Question Answering to evaluate common sense and social reasoning. \\n\\n- QA4MRE: Question Answering for Machine Reading Evaluation, assessing comprehension and reasoning. \\n\\n- PIQA: Physical Interaction Question Answering tasks to test physical commonsense reasoning. \\n\\n- NQ Open: Open domain question answering tasks based on the Natural Questions dataset. \\n\\n- PubMedQA: Question answering tasks based on PubMed research articles for biomedical understanding. \\n\\n- RACE: Reading comprehension assessment tasks based on English exams in China. \\n\\n- SWAG: Situations With Adversarial Generations, predicting the next event in videos. \\n\\n- EQ-Bench: Tasks focused on equality and ethics in question answering and decision-making. \\n\\n**Q2: Unexplained performance gaps** \\nWe'd like to note that the scoring method used in AlpacaEval is based on **pair-wise comparison** (as shown in Table 8). This approach reflects the relative performance improvement over the baseline model, which can result in a larger numerical scale of improvement (e.g., SimPO training on UltraFeedback improves AlpacaEval score from 6.2 to 22.0 \\\\[1\\\\]). When we consider absolute score changes on direct evaluation benchmarks (such as the MT-Bench results in Table 3\\\\) or accuracy-based benchmarks (like the Open LLM Leaderboard results in Table 4), the improvements are more moderate but still consistently positive.\"}", "{\"comment\": \"Dear Reviewer Ftfv,\\n\\nWe would like to thank you again for your detailed reviews. We have updated our draft and added replies to your questions with our latest experimental results.\\n\\nSince the rebuttal deadline is approaching soon, we would appreciate it if you could let us know if our responses have addressed your concerns satisfactorily. 
If so, we would be grateful if you could consider raising your score. We would be happy to have any follow-up discussions or address any additional concerns.\\n\\nThanks very much! Looking forward to your reply.\\n\\nBest,\\n\\nAuthors\"}", "{\"title\": \"Further comments and discussions will be appreciated!\", \"comment\": \"Dear Reviewer Ftfv,\\n\\nThank you for your valuable time to review our work and constructive feedback. We posted our response to your comments four days ago, and we wonder if you could kindly share some of your thoughts so we can keep the discussion rolling to address your concern if there are any.\\n\\nIn the previous response,\\n\\n1. As suggested, we provided a more comprehensive evaluation on more broad tasks and domains, we added ten additional tasks (including reasoning tasks and domain specific tasks) and reported the results in domain-specific scores on AGIEval in Appendix J. \\n\\n2. We explained that the scoring method in AlpacaEval is based on pair-wise comparison, which can result in larger numerical improvements (with related results in SimPO paper for reference).\\n\\n3. We clarified that our comparison with the SFT baseline does not involve contrasting synthetic data with real data. We further explained why synthetic data performs better at the prompt level, because synthetic prompts perform slightly better due to better diversity control and gradual self-learning of the distribution gap.\\n\\nWe would appreciate it if you could kindly take a look at both the revision and our response to your comments. If you have any further questions, we are happy to discuss them!\\n\\nBest regards,\\n\\nAuthors\"}", "{\"comment\": \"I believe the updates provided by the authors will further enhance the clarity and strength of this paper. 
Thank you for your response, and I will adjust the score accordingly.\"}", "{\"title\": \"Response to Reviewer EKkD\", \"comment\": \"Many thanks for your positive feedback on our paper and your response to our rebuttal.\\n\\nTo further test for data leakage, we conducted the embedding-based check as suggested \\\\[1\\\\] (and we are very grateful for pointing us towards this elegant approach). We utilized the GPT-4-Turbo API to report the contamination percentage (%) on the test set, with UltraFeedback results serving as a reference. For a quick check, we randomly selected 1000 samples from test sets larger than 1000 samples (using the full set for those with fewer than 1000 samples) for experimentation.\", \"the_results_are_as_follows\": \"| Data | Arc | HellaSwag | TQA | MMLU | Winogrande | GSM8k | \\n| :---- | :---- | :---- | :---- | :---- | :---- | :---- |\\n| UltraFeedback (for reference) | 2.2% | 1.5% | 2.1% | 0.4% | 1.2% | 0.0% |\\n| Seed SFT Data | 1.4% | 1.6% |0.5% | 0.3% | 1.1% | 0.0% |\\n| Synthetic Preference Data | 1.0% | 0.6% | 0.9% | 0.3% | 0.2% | 0.0% |\\n| \\n\\n*(Embedding-based data leakage test results on LLM Leaderboard tasks.)*\\n\\n\\n| Data | AlpacaEval 2.0 | Arena-Hard | MT-Bench |\\n| :---- | :---- | :---- | :---- |\\n| UltraFeedback | 5.3% | 1.8% | 1.5% |\\n| Seed SFT Data | 4.5% | 2.0% | 1.6% |\\n| Synthetic Preference Data | 3.9% | 1.6% | 1.2% |\\n| \\n\\n*(Embedding-based data leakage test results on three instruction-following benchmarks.)*\\n\\nOverall, the conclusion remains that the gains cannot be attributed to data leakage. In comparison, embedding-based detection can identify more instances of semantic overlap, making it a more effective method of detection. Also, synthetic preference data generally shows even less overlap with the test sets than other data.\\n\\n\\nWe agree that demystifying the reasons for the gains on TruthfulQA will improve the clarity. 
As suggested, we will incorporate explanations for the performance gains on TruthfulQA in our final version.\\n\\nAgain, thank you again for your detailed reviews and valuable insights\\\\!\\n\\n \\n**Reference:** \\n*\\\\[1\\\\] Yang S, Chiang W L, Zheng L, et al. Rethinking benchmark and contamination for language models with rephrased samples\\\\[J\\\\]. arXiv preprint arXiv:2311.04850, 2023\\\\.*\"}", "{\"summary\": \"This paper proposes a framework called SynPO that leverages synthetic preference data for human alignment. The framework starts with the training of a self-prompt generator to create large-scale synthetic prompts. This prompt-generator is trained from the LLM itself, starting from some initial SFT data, and can generate prompts based on keywords. In an iterative process, these prompts are then given to the model to be trained, and passed through a response improver, which is retrained each round to reduce the gap between policy model outputs and gold standard responses. The authors perform experiments with Mistral-Base 7B and Llama3-8B, starting from versions that have undergone SFT supervision on a chat-dataset. They show improvements on three alignment benchmarks: MT-Bench, Arena Hard and AlpacaEval 2.0. They also show an overall improvement on standard benchmarks including Arc, HellaSwag, TQA and MMLU.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper does quite extensive experimentation, investigate the diversity of the prompts and do ablations to understand the impact of various parts of their pipeline. The topic will be well-received I believe given the recent attention for alignment and the cost of generating human preference data.\", \"weaknesses\": \"In the introduction, it is not entirely clear to me what model is what. E.g. 
it is mentioned that a self-prompt generator is trained to create synthetic prompts, and after that it is mentioned that model-generated responses are used as rejected candidates and a response improver is used to improve the response. It is also mentioned that a small amount of SFT data is used to steer the generation of synthetic preference data. But what is what? Which model is used with the synthetic preference data? What is the \\\"response\\\"? Is that coming from the self-prompt generator? Are the self-prompt generator and the response improver the same model? Some of this is cleared up later, but the reader does not know that while trying to comprehend that initial paragraph.\", \"small_point\": \"I don't think it is entirely accurate to write that you use Mistral-base and Llama 3 base model, because those models did not undergo SFT. The models stem from these models (as does the Llama 3 8B instruct model), but they are not those base models.\n\nLastly, I am a bit confused about the improvement on the standard benchmarks. Many of these benchmarks are things learned in pretraining (e.g. knowledge). The fact that some of these scores go up, especially on knowledge benchmarks, is a bit suspicious because knowledge is something that can evidently not be learned from just self-improvement loops. It makes me wonder if the improvement is just due to the input coming from GPT-4. I would ask if you considered a baseline where you do not update the policy model in between, but just do multiple rounds trying to squeeze out more of the GPT generated data without doing iterations with the model itself, or just do more SFT with the initial data, but I think that those are the ablations presented in 4.2 and the difference seems quite large.
It makes me wonder to what extent the improvement is just coming from the GPT-4 data. Your experiments in 4.2 seem to refute that idea (though you don't look at the benchmarks there I believe), leaving me a bit puzzled. Do you have any other explanation for why the model would improve on something it shouldn't improve on?\", \"flag_for_ethics_review\": \"['Yes, Legal compliance (e.g., GDPR, copyright, terms of use)']\", \"details_of_ethics_concerns\": \"I don't think the OpenAI terms allow the use of outputs of their models to train other models.\", \"rating\": \"8\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper introduces SynPO, a self-boosting framework that leverages synthetic preference data to iteratively improve the performance of large language models (LLMs). The approach trains a self-prompt generator and response improver to produce synthetic prompts and refine responses, eliminating the need for large-scale human annotation. The method is examined on Llama3-8B and Mistral-7B, showing significant improvements in alignment tasks (e.g., AlpacaEval 2.0) and general benchmarks (e.g., MMLU, TruthfulQA).\", \"strengths\": \"The paper addresses a critical challenge of lack of high-quality preference data by introducing a scalable and iterative synthetic data generation approach. The experiments are extensive, with strong empirical results supported by ablation studies and analyses of key components (e.g., noise keywords, extraction forms). The method demonstrates strong applicability to alignment tasks with clear presentations.\", \"weaknesses\": \"The paper lacks detailed discussion on out-of-distribution robustness, and while the authors argue for the effectiveness of synthetic data, additional analyses on unexplained performance gains (e.g., on knowledge benchmarks) could improve the work. 
The introduction and framing of key components (e.g., model roles) were initially unclear, though this was improved during the rebuttal period.\\n\\nI recommend accepting this paper for its scalable method, solid empirical results and clear presentation.\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal period, several key points were raised and addressed by the authors:\\n\\n1. The proposed self-boosting method relies on external data:\\n\\nThe authors clarified that \\\"self-boosting\\\" refers to the iterative self-improvement mechanism, with minimal external input required. And they use automatic keyword selection.\\n\\n2. Concerns were raised about the improvement on knowledge benchmarks and potential in-distribution biases. The authors expanded evaluations to broader domains (e.g., AGIEval) and provided embedding-based data leakage checks, which verifies minimal overlap and robustness.\\n\\n3. Reviewers want to see more ablation studies to understand why synthetic data outperformed alternatives. The authors provided experiments comparing synthetic and real data, showing the advantages of synthetic data in diversity and iterative learning.\\n\\n4. Confusion in the introduction about model roles was noted. The authors revised this section for clarity.\\n\\nOverall, authors addressed reviewers' concerns well.\"}", "{\"title\": \"General Response to the Reviewers\", \"comment\": \"We sincerely thank all the reviewers for their great efforts in reviewing our paper and for their constructive comments, which clearly helped us strengthen our paper. 
We are encouraged to find that the reviewers appreciate the novelty and intuitiveness of SynPO (Reviewer EKkD, Ftfv), the sound solution introducing diversity in the self-improvement process (Reviewer TAJC), the extensive and solid experimentation (Reviewer EKkD, LmCF), and the clear presentation quality (Reviewer EKkD).\\n\\nWe have followed the helpful suggestions from all reviewers, and updated the additional experiments and discussion in the new version. Here, we make a brief summary of the changes made in the updated version:\\n\\n\\n1. We have included preliminary experiments on keyword analysis in Appendix H (Reviewers TAJC, d9yw), which support our choice of keyword extraction methods and provide deeper insights into the training of the self-prompt generator. \\n2. We have revised paragraph 3 in the introduction for improved clarity (Reviewer LmCF) and corrected typos in the main text (Reviewer EKkD). \\n3. We have expanded the discussion on the mentioned works in Section 5 (Reviewer d9yw) in the revised version. \\n4. We have conducted experiments to test for benchmark data leakage in Appendix I (Reviewer EKkD). It is worth mentioning that synthetic preference data shows even less overlap with test sets compared to other data. \\n5. In our original paper, we evaluated on three instruction-following benchmarks (AlpacaEval 2.0, ArenaHard, MT-Bench), six diverse LLM Harness tasks, and the well-recognized Open LLM Leaderboard. In the revised version, we have expanded our evaluation to include ten additional tasks (including reasoning tasks and domain-specific tasks) and report domain-specific scores on AGIEval in Appendix J (Reviewer Ftfv, d9yw).\\n\\nOnce again, we thank all reviewers for their valuable feedback. We are happy to continue the discussion if there are any further questions.\"}", "{\"comment\": \"Wonderful! 
I think this additional analysis would make the paper quite strong.\\nI really appreciate the quick and precise turnaround by the authors.\"}", "{\"title\": \"Reply to Reviewer d9yw (Part 2/3 )\", \"comment\": \"**Q4: Validation of out-of-distribution data**\\n\\nWe would like to highlight that for evaluation, in addition to instruction-following tasks, we have assessed the model's performance on diverse downstream tasks that are not in-distribution data, **including six varied LLM Harness tasks and six Open LLM Leaderboard tasks**.\\n\\nTo address potential concerns, we provide more diverse evaluation results on additional tasks, including various domains in AGIEval benchmarks and 10 downstream tasks spanning a wide range of evaluation data.\\n\\n**A. AGIEval Domains**\\n| Model | History | Biology | Chemistry | Physics | MathQA | \\n|---------------|---------|---------|-----------|---------|--------| \\n| Llama3-SFT | 38.29 | 34.29 | 24.64 | 32.00 | 24.22 | \\n| SynPO 4 iters | 45.53 | 39.05 | 31.88 | 35.00 | 28.49 | \\n|\\n\\n*(Evaluation results on AGIEval for different domains, using Llama3)*\\n\\n\\n**B. More Diverse Tasks**\\n\\n| Model | LogiQA |MMLU-Pro| SiQA | QA4MRE | NQ Open | PubMedQA | RACE | SWAG | EQ-Bench | Story Cloze | \\n|---------------|--------|--------|------------|--------|---------|----------|-------|-------|----------|-------------| \\n| SFT | 30.88 |36.12 | 46.21 | 46.13 | 11.55 | 73.20 | 44.59 | 76.91 | 46.79 | 10.55 | \\n| SynPO 4 iters | 31.95|37.28 | 49.18 | 49.30 | 12.80 | 75.44 | 46.70 | 77.19 | 55.82 | 12.80 | \\n|\\n\\n*(Evaluation results on 10 additional LM Evaluation Harness tasks for reasoning, using Llama3)*\\n\\n**Specifics for AGIEval:**\\n\\nAGIEval is a comprehensive benchmark specifically designed to assess foundation models in the context of human-centric standardized exams across diverse domains, such as college entrance exams, law school admission tests, math competitions, and lawyer qualification tests. 
\\n\\n**Other Task Specifics:**\\n\\n- LogiQA: Logical reasoning tasks requiring advanced inference and deduction. \\n\\n- MMLU-Pro: A refined set of MMLU, integrating more challenging, reasoning-focused questions and expanding the choice set from four to ten options.\\n\\n- SiQA: Social Interaction Question Answering to evaluate common sense and social reasoning. \\n\\n- QA4MRE: Question Answering for Machine Reading Evaluation, assessing comprehension and reasoning. \\n\\n- NQ Open: Open domain question answering tasks based on the Natural Questions dataset. \\n\\n- PubMedQA: Question answering tasks based on PubMed research articles for biomedical understanding. \\n\\n- RACE: Reading comprehension assessment tasks based on English exams in China. \\n\\n- SWAG: Situations With Adversarial Generations, predicting the next event in videos. \\n\\n- EQ-Bench: Tasks focused on equality and ethics in question answering and decision-making. \\n\\n- Story Cloze: Tasks to predict story endings, focusing on narrative logic and coherence.\\n\\n\\n \\n**Q5: Comparison with COT methods**\\n\\n- Our approach and CoT methods are complementary. SynPO addresses how to iteratively generate preference data for continuous training of the model given a small amount of seed SFT data, while CoT focuses on enhancing the reasoning process and rationale in the generated SFT data. \\n\\n- To further illustrate the complementarity between our method and CoT, we conducted the following experiment:\\n\\n| Method | LC (%) | WR (%) |\\n| :---- | :---- | :---- |\\n| Seed SFT | 20.1 | 19.7 |\\n| Seed SFT (COT) | 21.5 | 22.4 |\\n| SynPO | 32.1 | 33.6 |\\n| SynPO (COT) | 32.3 | 35.4 |\\n| \\n\\nThe experiment was conducted under the same settings as Table 7 in our paper. Following Mukherjee et al. \\\\[1\\\\], the results for SFT (CoT) and SynPO (CoT) were obtained by directly applying CoT to seed data SFT and iteratively training with the SynPO method under identical conditions. 
Specifically, the CoT method involved randomly selecting one of the system messages for CoT from Mukherjee et al.'s approach during both the seed data creation and model response generation stages. This resulted in seed data and self-generated preference data with more extensive reasoning processes. The results indicate that incorporating CoT significantly improves the win rate on AlpacaEval 2.0, while having a minimal impact on the length-controlled win rate.\"}", "{\"title\": \"Reply to Reviewer LmCF\", \"comment\": \"We really appreciate your effort in reviewing our paper and your acknowledgement of our paper\\u2019s contribution. We are also glad for the acknowledgment that our experiments are extensive and that the topic will be well-received. Below, we would like to give detailed responses to each of your comments.\\n\\n**Q1: Clarity of the initial paragraph** \\nThanks for pointing out the potential confusion. We will clarify the points one by one below and update our wording in paragraph 3 of our introduction for better understanding.\\n\\n* The initial model, which undergoes preference optimization, is first trained to be a prompt generator and is used with the synthetic preference data. \\n* The \\\"response\\\" refers to the outputs generated by the initial model when given synthetic prompts. Only the prompts are generated by the self-prompt generator; the responses are the LLM's completions of these prompts. \\n* The self-prompt generator and the response improver are trained from the same base model.\\n\\nAlso, we present the SynPO algorithm in Appendix A for a more formal explanation for reference. \\nThank you very much for the constructive comments again, which really help us improve the clarity of our paper.\\n\\n**Q2: Small point for model name** \\n\\nThanks for pointing this out\\\\! 
Yes, we agree that using \\u2018-base\\u2019 may cause some confusion, and we have renamed Mistral-base and Llama3-base to Mistral-base-SFT and Llama3-base-SFT, respectively. We\\u2019ll include this revision in our latest version.\\n\\n**Q3: Confusion about improvements on capabilities learned in pretraining**\\n\\nAs shown in Table 5, SynPO primarily enhances performance in commonsense reasoning tasks (such as ARC and HellaSwag) and model honesty (TruthfulQA). For knowledge benchmarks like MMLU, the focus is more on maintaining performance than improving it.\\n\\nRegarding the observed improvements in certain knowledge-related benchmarks (e.g., OBQA), there are two main explanations:\\n\\n1. **Alignment Benefits**: We agree that the knowledge contained within the model is acquired during the pretraining phase. However, instruction-tuning and alignment can enable the model to better elicit and articulate this knowledge in its responses\\\\[1,2,3\\\\]. For example, Self-Rewarding\\\\[1\\\\] has improved Llama2's performance on Natural Questions. \\n2. **Self-Refinement Gains:** Our method involves a self-refinement process, which has been shown to be effective in various tasks \\\\[4, 5\\\\]. In most scenarios, verification and refinement can be simpler than direct generation, allowing the model to correct inaccuracies and improve its performance \\\\[6,7\\\\]. This process of pre- and post-refinement helps the model enhance its capabilities on these tasks.\\n\\nYes, our ablation study detailed in Section 4.2 examines the impact of seed SFT data. The results, presented as \\\"Seed SFT results\\\" in Table 7, demonstrate that Llama3 reaches optimal performance after the second epoch. Additional training epochs did not yield improvements and, at times, even diminished performance. 
This confirms that the self-refinement process effectively enhances the model's ability to follow instructions and perform tasks, rather than simply benefiting from GPT-4 input.\\n\\nAgain, we sincerely thank you for the constructive comments and positive feedback, which really help us further improve our work. If you have any further questions, we are happy to address them.\\n\\n**Reference:**\\n\\n*\\\\[1\\\\] Yuan, Weizhe, et al. \\\"Self-rewarding language models.\\\" arXiv preprint arXiv:2401.10020 (2024).*\\n\\n*\\\\[2\\\\] Chen Z, Deng Y, Yuan H, et al. Self-play fine-tuning converts weak language models to strong language models\\\\[J\\\\]. arXiv preprint arXiv:2401.01335, 2024\\\\.*\\n\\n*\\\\[3\\\\] Yin, Yueqin, et al. \\\"Self-Augmented Preference Optimization: Off-Policy Paradigms for Language Model Alignment.\\\" arXiv preprint arXiv:2405.20830 (2024).*\\n\\n*\\\\[4\\\\] Pan, Liangming, et al. \\\"Automatically correcting large language models: Surveying the landscape of diverse self-correction strategies.\\\" arXiv preprint arXiv:2308.03188 (2023).*\\n\\n*\\\\[5\\\\] Weng, Yixuan, et al. \\\"Large language models are better reasoners with self-verification.\\\" arXiv preprint arXiv:2212.09561 (2022).*\\n\\n*\\\\[6\\\\] Kamoi R, Zhang Y, Zhang N, et al. When can llms actually correct their own mistakes? a critical survey of self-correction of llms\\\\[J\\\\]. Transactions of the Association for Computational Linguistics, 2024, 12: 1417-1440.*\\n\\n*\\\\[7\\\\] Madaan, Aman, et al. \\\"Self-refine: Iterative refinement with self-feedback.\\\" Advances in Neural Information Processing Systems 36 (2024).*\"}", "{\"comment\": \"Dear Reviewer TAJC,\\n\\nThank you once again for your thorough reviews. We have revised our draft and included responses to your questions. \\n\\nAs the rebuttal deadline is nearing, we would greatly appreciate it if you could confirm whether our responses have adequately addressed your concerns. 
If they have, we would be thankful if you could consider increasing your score. We are open to further discussions or addressing any additional points you may have. Thank you very much! \\n\\nBest,\\n\\nAuthors\"}", "{\"title\": \"Reply to Reviewer d9yw (Part 3/3 )\", \"comment\": \"**Q6: Complementary references**\\nThanks for pointing out the related literature\\\\! Lin et al.'s work \\\\[2\\\\] shares a similar spirit with our prompt generator in generating sentences from keywords, but they focus on explicitly testing machines' commonsense reasoning ability by generating a coherent sentence with a given set of common concepts. Seo et al.'s paper \\\\[3\\\\] leverages Korean morphological variations to create synthetic data, which is then enhanced in quality using contrastive learning. We will add discussion on these related and insightful works in Section 5 of our final revision.\\n\\n\\nThank you very much for the constructive comments, which really help us further improve our work. We hope our answers have addressed your concerns. If you have any further questions, we are happy to address them.\\n\\n**Reference** \\n*\\\\[1\\\\] Mukherjee, Subhabrata, et al. \\\"Orca: Progressive learning from complex explanation traces of gpt-4.\\\" arXiv preprint arXiv:2306.02707 (2023).* \\n*\\\\[2\\\\] Lin, B. Y., Zhou, W., Shen, M., Zhou, P., Bhagavatula, C., Choi, Y., & Ren, X. (2020, November). CommonGen: A Constrained Text Generation Challenge for Generative Commonsense Reasoning. In Findings of the Association for Computational Linguistics: EMNLP 2020 (pp. 1823-1840).* \\n*\\\\[3\\\\] Seo, J., Moon, H., Lee, J., Eo, S., Park, C., & Lim, H. S. (2023, December). CHEF in the Language Kitchen: A Generative Data Augmentation Leveraging Korean Morpheme Ingredients. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (pp. 
6014-6029).*\"}", "{\"title\": \"Further comments and discussions will be appreciated!\", \"comment\": \"Dear Reviewer TAJC,\\n\\nThank you for your valuable time to review our work and for your constructive feedback. We posted our response to your comments four days ago, and we wonder if you could kindly share some of your thoughts so we can keep the discussion rolling to address your concern if there are any.\\n\\nIn the previous response,\\n\\n1. As suggested, we added our preliminary experiments on keywords analysis in the Appendix H which support our choice of keyword extraction methods, to provide further insights into self-prompt generator training.\\n\\n2. We clarified that noise keywords are an optional strategy for the prompt generator. Our method does not generate bad samples but rather induces pre- and post-self-improvement responses as natural rejected and chosen candidates. This approach is fundamentally different from previous methods.\\n\\n3. We explained that while a small amount of seed data is required, the term \\\"self-boosting\\\" emphasizes the self-driven nature of the model's iterative enhancement. The external knowledge required is minimal, and the selection of keywords is automatic and rule-based.\\n\\n4. We demonstrated that even less capable LLMs can be fine-tuned to be good self-prompt generators. It is worth mentioning that even a 0.5B model can generate high-quality prompts after training with our method.\\n\\nWe would appreciate it if you could kindly take a look at both the revision and our response to your comments. If you have any further questions, we are happy to discuss them!\\n\\nBest regards,\\n\\nAuthors\"}", "{\"title\": \"Reply to Reviewer TAJC (Part 2/2 )\", \"comment\": \"**Q3: Which type of keywords are more effective**\\n\\nAs clarified, the noise keywords maintain the same type as the extracted keywords for better robustness. In our implementation, the keywords are randomly selected from rule-based filtered phrases. 
Specifically, we use the NLTK toolkit to filter out stop words, extract all noun phrases from the sentences, and remove any preceding articles. We then randomly sample from these phrases. The decision to use noun phrases is based on the following preliminary experiments:\\n\\n| Extraction Form | Avg. Similarity | Quality |\\n| :---- | :---- | :---- |\\n| Noun Phrases | 0.0574 | 8.99 |\\n| Verb Phrases | 0.0865 | 8.74 |\\n| Noun \\\\+ Verb Phrases | 0.0610 | 8.67 |\\n| All Phrases | 0.0569 | 8.67 |\\n| Noun Words | 0.0604 | 8.98 |\\n| Verb Words | 0.0894 | 8.76 |\\n| Noun \\\\+ Verb Words | 0.0725 | 8.52 |\\n| All Words | 0.0598 | 8.61 |\\n|\\n\\n*(Note: \\\"Words\\\" refers to single-word keywords, while \\\"Phrases\\\" refers to keywords that may be phrases of 1 to 3 words.)*\\n\\nAs shown, using phrases rather than limiting keywords to single words generally results in better diversity and quality. This improvement may be due to phrases incorporating more inductive bias from the pretraining data for prompt generation. Specifically, while random sampling from all phrases or words without filtering is acceptable, it is not as effective as random sampling from nouns, as nouns contain the most important information needed to diversify a sentence.\\n\\nBeyond these evaluation results, we have also manually observed the impact of different extraction forms and found that using phrases as keywords generally resulted in higher-quality prompts. Therefore, our experiments utilize randomly sampled noun phrases. We will include these preliminary experiments, which support our choice of keyword extraction methods, in the Appendix to provide further insights into self-prompt generator training.\\n\\n\\n**Q4: New insights in creating the bad samples** \\nAs clarified in Q2, we do not regard prompts with noise keywords and their corresponding responses as bad samples. The synthetic prompts are the same for both chosen and rejected samples. 
Our main intuition in constructing good and bad samples lies in **inducing pre- and post-self-improvement responses as natural rejected and chosen candidates, respectively** (Section 2.2). The chosen responses provide clear guidance on what approximates a gold-standard response through the iterative self-refinement process. This method of generating synthetic preference data is fundamentally different from previous methods that sample multiple responses from a model and then score them as good or bad samples \\\\[1\\\\]. We sincerely hope that this clarification addresses your concerns regarding the novelty of our insights.\\n\\n\\n**Q5: Prompt generation capability of less capable LLMs** \\nThe prompt generation capability of an LLM can be influenced by the model's scale. However, since our method synthesizes training data through keyword-to-prompt construction, even a 0.5B model can be finetuned to be a good self-prompt generator. To further illustrate this point, we conducted experiments using the Qwen2.5 model with 0.5B and 1.5B parameters:\\n\\n| Model | Avg. Similarity | Quality |\\n| :---- | :---- | :---- |\\n| Llama3-8B | 0.0574 | 8.99 |\\n| Qwen2.5-Instruct-1.5B | 0.0602 (+0.0028) | 8.25 (-0.74) |\\n| Qwen2.5-Instruct-0.5B | 0.0617 (+0.0043) | 8.03 (-0.96) |\\n|\\n\\n*(Evaluation run on 1k data for each setting)*\\n\\nIt can be observed that, consistent with the general scaling behavior of LLMs, the prompt generation capability is also affected by the model size. However, even **a small model with 0.5B parameters can become a good self-prompt generator after training with our method** (generating prompts with high quality and diversity). \\n\\nOn the other hand, we focus on the continuous improvement of LLMs. Thus, self-boosting primarily addresses the shortage of high-quality preference data. Our experiments mainly use 7B and 8B models, which are moderately sized. 
**For very weak small models, further enhancement can be easily achieved by using a stronger teacher model to construct distillation data.** Even a slightly larger teacher model can provide adequate supervision at a low cost.\\n\\nOverall, many thanks for your insightful points and suggestions. These comments really help improve our work. We hope our answers have addressed your concerns. If you have any further questions, we are happy to address them.\\n\\n**Reference:** \\n*\\\\[1\\\\] Li, Haoran, et al. \\\"Synthetic data (almost) from scratch: Generalized instruction tuning for language models.\\\" arXiv preprint arXiv:2402.13064 (2024).* \\n*\\\\[2\\\\] Shi, Taiwei, Kai Chen, and Jieyu Zhao. \\\"Safer-instruct: Aligning language models with automated preference data.\\\" arXiv preprint arXiv:2311.08685 (2023).* \\n*\\\\[3\\\\] Yuan, Weizhe, et al. \\\"Self-rewarding language models.\\\" arXiv preprint arXiv:2401.10020 (2024).*\"}", "{\"title\": \"Reply to Reviewer TAJC (Part 1/2 )\", \"comment\": \"We sincerely thank Reviewer TAJC for the review and are grateful for the time you spent with our submission. We are glad for the acknowledgement that our approach is sound and provides transparency for control and analysis. We wish to address your concerns by giving detailed responses to each of your comments as follows:\\n\\n**Q1: The self-boosting mechanism**\\n\\nYes, a small amount of seed data is required. \\n\\n- Compared to \\\"self-improvement,\\\" we use the term \\\"self-boosting\\\" to emphasize that the model's continuous iterative enhancement is **self-driven (by the self-refinement process and pre- and post-refinement contrast)** rather than claiming to be entirely independent. 
We aim to highlight that, unlike previous works that rely on external tools or teacher LLMs\\\\[1\\\\]\\\\[2\\\\], SynPO is mainly a process of self-correction and self-driven improvement.\\n- Additionally, we would like to clarify that the **external knowledge required by our method is minimal**, similar to the self-rewarding approach\\\\[3\\\\], including a small amount of seed data. \\n- Besides, the selection of keywords is **entirely automatic (simple rule-based) and does not require careful selection or human effort**.\\n\\n\\n**Q2: Concerns and questions about bad noising keywords**\\n\\nThere seem to be some misunderstandings about the motivation for introducing noise keywords and about our insights in creating the rejected samples.\\n\\n- First, we would like to clarify that the introduction of noise keywords is merely an **optional trick to enhance the robustness** of the prompt generator (L155-158). For each prompt $x^*_i$ in the seed data, we randomly extract two keywords from $x^*_i$ and one noise keyword from $x^*_j, j \\u2260 i$. This ensures that the target prompt in the training data excludes semantically irrelevant input keywords, guiding the model to generate prompts based on relevant keywords while disregarding unrelated words in the given list. Therefore, this strategy **aims to ensure the naturalness and semantic coherence of the generated prompt, not to produce bad samples**.\\n- Since the noise keywords serve only to introduce noise, there is no need for an additional strategy to ensure one keyword is noisier than another. Simply sampling randomly from the entire keyword list, excluding the current prompt's keywords, is sufficient.\\n\\nTo further demonstrate that noise keywords are an optional strategy in our method, we have conducted experiments to show their impact. We trained and evaluated the Llama3 prompt generator with three random keywords, either including or excluding one noise keyword. 
We calculated the average similarity across the generated prompts using Sentence-Transformer, as mentioned in Section 2.2, and employed GPT-4 Turbo for LLM-as-a-Judge evaluation of prompt quality (on a scale of 1 to 10, where 1 represents a prompt that is very unrealistic, unnatural, or unanswerable, and 10 represents a prompt that is very reasonable, realistic, and answerable). The results are listed in the following table.\\n\\n\\n| Noise Condition | Avg. Similarity | Quality | \\n|-----------------|----------------|---------| \\n| no noise | 0.0572 | 7.92 | \\n| w noise | 0.0574 | 8.99 | \\n|\\n\\n*(Evaluation run on 1k data for each setting. Each example contains 3 keywords: 3 keywords for the 'no noise' setting, and 2 keywords plus 1 noise keyword for the 'w noise' setting.)*\\n\\nThe results show that the inclusion of noise keywords improves the quality of the self-generated prompts (as LLMs learn to ignore unrelated words), but **even without noise keywords, the self-prompt generator can still generate relatively high-quality prompts**.\"}", "{\"title\": \"Further comments and discussions will be appreciated!\", \"comment\": \"Dear Reviewer d9yw,\\n\\nThank you for taking your valuable time to review our work and for your constructive feedback. We posted our response to your comments four days ago, and we wonder if you could kindly share some of your thoughts so we can keep the discussion rolling and address your concerns, if there are any.\\n\\nIn the previous response,\\n\\n1. As suggested, we added discussion on the mentioned works in Section 5 in the revised version. \\n\\n2. We discussed and added our preliminary experiments on keyword analysis in Appendix H, which support our choice of keyword extraction methods, to provide further insights into self-prompt generator training. \\n\\n3. We conducted additional experiments to justify the complementarity of our method with CoT.\\n\\n4. 
We added experiments to provide a more comprehensive evaluation on broader tasks and out-of-distribution data: we added ten additional tasks (including reasoning tasks and domain-specific tasks) and reported the results with domain-specific scores on AGIEval in Appendix J.\\n \\n\\nWe would appreciate it if you could kindly take a look at both the revision and our response to your comments. If you have any further questions, we are happy to discuss them\\\\!\\n\\nBest regards,\\n\\nAuthors\"}", "{\"comment\": \"Thank you for the clarifications :)\"}", "{\"summary\": \"The paper presents \\\"SynPO\\\", which trains a self-prompt generator by controlling the keywords used. Noise (also extracted from responses) is inserted to produce the \\\"bad\\\" responses so that an SFT setup is possible where good and bad responses are now available. A separate step involves a response regenerator that is used to refine the model responses to get the good responses.\\nThe diversity comes from the selection of keywords, which can be sampled from a large pretraining set like RefinedWeb, and the model improves by learning to discern bad and good responses iteratively.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The strength of the paper is that it offers a sound solution in terms of introducing diversity in the self-improvement process. While this is not exactly novel, I do see a world where this works and improves the model progressively. Moreover, the use of keywords provides transparency in what we can control, the granularity of control, and ease of analysis.\", \"weaknesses\": \"The weaknesses are the following:\\n1) I'm not sure if this is completely \\\"self-boosting\\\" as the title claims, since external knowledge is needed, and careful selection of the keywords might be needed to guide the LLM to learn properly. 
\\n2) I think the paper does not provide much insight into the types of noise (keywords) that are more effective, which seems rather important as an insight in creating the bad samples. \\n3) The method relies heavily on the model's ability to generate from a prompt containing keywords; it might or might not work with less capable LLMs. \\n4) The writing is rather unclear; I think it could have been written in a much more understandable manner.\", \"questions\": \"Do you have any insights on what kind of noise keywords are bad? What exactly makes one keyword noisier than another?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply to Reviewer Ftfv (Part 2/2 )\", \"comment\": \"**Q3: Regarding why synthetic data works better and ablation studies**\\n\\nRegarding why synthetic data performs better and related ablation studies, we would like to clarify that our comparison with the SFT baseline does not involve contrasting synthetic data with real data. With limited SFT data as seed data, our goal is to construct effective preference data to continuously improve LLMs' instruction-following and end-task capability. The SFT baseline serves as the initial checkpoint for further self-boosting.\\n\\nAs for why synthetic data works well, there are several main reasons:\\n\\n1. Compared to human-collected preference data (whose format is also preference pairs), SynPO is an iterative and nearly on-policy process. At each iteration, the current model state is considered for constructing synthetic preference data. As validated by Chen et al.\\\\[2\\\\] and Yuan et al.\\\\[3\\\\], **on-policy preference data are usually more effective than static data**. 
Compared to continuously training on seed SFT data (as shown in our ablation study in Section 4.2, Table 7), in scenarios where only a limited amount of seed SFT data is available, **the SFT format is not as effective for alignment as preference-pair data**. Preference pairs can lead to more effective learning of human preferences than SFT data, which may overfit to specific examples\\\\[2\\\\]. Our approach of self-synthesizing effective preference pairs allows for iterative improvements in model performance.\\n\\nIn Table 6, we further compare the effects of synthetic prompts and manually collected prompts of the same scale. The results show that **synthetic prompts perform slightly better for two main reasons**: \\n1. Synthetic data can **better control for diversity** (as shown in Figures 4 and 5).\\n2. Our method allows the model to **gradually self-learn the distribution gap** during the self-improvement training process, enabling it to address specific shortcomings at each stage rather than repeatedly learning from a fixed distribution as with real data.\\n\\n \\nOverall, we greatly appreciate your thoughtful and insightful comments on our paper. We hope our answers have addressed your concerns.\\n\\n \\n**Reference**\\n\\n*\\\\[1\\\\] Meng, Yu, Mengzhou Xia, and Danqi Chen. \\\"Simpo: Simple preference optimization with a reference-free reward.\\\" arXiv preprint arXiv:2405.14734 (2024).*\\n\\n*\\\\[2\\\\] Chen Z, Deng Y, Yuan H, et al. Self-play fine-tuning converts weak language models to strong language models\\\\[J\\\\]. arXiv preprint arXiv:2401.01335, 2024\\\\.*\\n\\n*\\\\[3\\\\] Yuan, Weizhe, et al. \\\"Self-rewarding language models.\\\" arXiv preprint arXiv:2401.10020 (2024).*\\n\\n*\\\\[4\\\\] Ouyang L, Wu J, Jiang X, et al. Training language models to follow instructions with human feedback\\\\[J\\\\]. 
Advances in neural information processing systems, 2022, 35: 27730-27744.*\"}", "{\"summary\": \"The paper introduces SynPO (Self-Boosting Preference Optimization), a framework that enables LLMs to autonomously generate synthetic data and enhance their performance through iterative self-boosting. SynPO leverages a self-prompt generator and response improver to collaboratively produce high-quality synthetic preference data for training. This process allows the model to learn responses aligned with preferences, leading to performance improvements on benchmarks like the Open LLM Leaderboard, AlpacaEval 2.0, and Arena-Hard.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"Novel and Practical Solution:\\nAddresses a critical challenge in LLM development - the scarcity of high-quality preference data\\nProvides a scalable way to generate training data without relying on human annotations\\nSimple but effective keyword-based prompt generation strategy\", \"weaknesses\": \"Limited Task Scope:\\nPrimary evaluation focuses on instruction-following tasks with limited testing on complex reasoning tasks or specialized domains\\nSome tasks show performance degradation (e.g., Mistral's performance on GSM8K)\\n\\nUnexplained Performance Gaps:\\nThe dramatic improvement over SFT baseline (e.g., from 6.6% to 34.0% win rate) needs more analysis\\nLimited ablation studies on why the synthetic data works so much better\", \"questions\": \"See weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer d9yw\", \"comment\": \"Thanks a lot for your positive feedback and for raising your score. We're really glad to hear that our revisions have improved the paper. 
If you have any further questions, we are happy to address them.\"}", "{\"comment\": \"I appreciate the detailed responses to my queries.\\n\\nIt's great to see that the gains can't be attributed to data leakage. However, my only qualm with the analysis is that it should be embedding-based rather than n-gram-based - [lmsys blog](https://lmsys.org/blog/2023-11-14-llm-decontaminator/) \\n\\nWhile not the main focus of this work, it will be great to demystify the reasons for the gains on TruthfulQA with SynPO/SimPO.\"}", "{\"summary\": \"The paper illustrates a synthetic data generation pipeline, SynPO, for self-generating preference data to align the model. The pipeline consists of (a) self-generating diverse prompts at scale and (b) self-generating paired responses for these prompts. The paired responses are constructed by (i) first sampling generations from the base model, (ii) training a response improver model to synthetically generate the preferred response, and (iii) pairing the preferred response with the base model response to create paired preference data. The method relies only on initial high-quality seed SFT data, and all the remaining data is synthetic. Applying the SynPO method for multiple iterations results in significant alignment performance gains.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"Each component of the SynPO pipeline is intuitive and contributes to the final model performance\", \"Strong and extensive empirical results, including data analysis (Figure 4, 5) and extensive ablation experiments.\", \"Creative uses of the seed SFT data for multiple components of the pipeline.\", \"Very well written and structured.\"], \"weaknesses\": [\"Has the synthetic preference data been tested for benchmark data leakage? 
I didn't see anything suggesting that checks for data leakage were done.\", \"While there's novelty to the pipeline as a whole, the individual components of the pipeline have been known.\", \"There are a few typos as well:\", \"L193: \\\"an infinite\\\" -> \\\"a huge\\\" (clearly what is being described is not \\\"infinite\\\")\", \"L247: \\\\sigma -> \\\\beta. Also, state what is \\\\sigma? I assume it's the sigmoid function.\", \"L314: \\\"involve\\\" -> \\\"compare against\\\"\", \"L316: \\\"involves\\\" -> \\\"use\\\"\"], \"questions\": [\"Data filtering:\", \"For preference data, why is the rejected response across iterations the one generated by the base model? Why are the model(s) from the previous iteration(s) not used for generating rejected responses?\", \"Given that the \\\"chosen response\\\" quality keeps improving in the preference data since the rejected response is from the base model, does the number of filtered-out preference pairs, as stated on L230, come down across iterations?\", \"Results:\", \"I don't understand the performance gains on TruthfulQA. Any reason for/pattern to these gains?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Follow up to Reviewer TAJC\", \"comment\": \"Dear Reviewer TAJC,\\n\\nWe would like to thank you again for your detailed reviews. We have updated our draft and added replies to your Cons with our latest experimental results.\\n\\nSince the rebuttal deadline is approaching soon, a lot of papers have finished the discussion. Given that your current score is 5, we would appreciate it if you could let us know if our responses have addressed your concerns satisfactorily. 
If your concerns have not been resolved, could you please let us know about it so that we have the opportunity to respond before the deadline?\\n\\nWe would be happy to have any follow-up discussions or address any additional concerns.\\n\\nThanks very much! Looking forward to your reply.\\n\\nBest,\\n\\nAuthors\"}", "{\"title\": \"Reply to Reviewer EKkD (Part 2/2 )\", \"comment\": \"**Q4: \\u201cWhy is the rejected response across iterations the one generated by the base model? Why are the model(s) from the previous iteration(s) not used for generating rejected responses?\\u201d**\\n\\nYes, using the outputs of the model from the previous iteration as rejected responses is a natural approach and was indeed our initial attempt. However, we found that this approach did not yield good results. Specifically, starting from the second round, each iteration would drop by 1.2 to 2.6 points on the AlpacaEval 2.0 benchmark compared to using the responses generated by the base model (using Llama3-8B). This issue arises from the preference optimization loss derived from SimPO\\\\[5\\\\] (in L241-247 of our paper).\\n\\nWhen we mix the preference data from the (t-1) and (t) iterations, the outputs from the (t-1) model appear as $y^l$ in half of the data, requiring a reduction in probability, but as $y^w$ in the other half, requiring an increase in probability. This causes a conflict in learning, and consistently regarding the initial model outputs as rejected ones avoids this contradiction. \\n\\n**Q5: Does the number of filtered-out preference pairs, as stated on L230, come down across iterations?** \\nYes, as the quality of model generation improves with each SynPO iteration, the number of filtered-out preference pairs decreases. In our experiments, we randomly integrated 10,000 preference pairs from each iteration into the overall synthetic preference dataset (L911). 
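For clarity on the conflict described in Q4, the SimPO objective \\[5\\] that our preference optimization follows can be sketched as below (a sketch in our notation, not copied verbatim from the paper; $\sigma$ denotes the sigmoid function, $\beta$ a reward-scaling hyperparameter, and $\gamma$ a target-margin hyperparameter):

```latex
\mathcal{L}_{\mathrm{SimPO}}(\pi_\theta)
  = -\,\mathbb{E}_{(x,\,y^w,\,y^l)\sim\mathcal{D}}
    \left[ \log \sigma\!\left(
        \frac{\beta}{|y^w|}\,\log \pi_\theta(y^w \mid x)
      - \frac{\beta}{|y^l|}\,\log \pi_\theta(y^l \mid x)
      - \gamma
    \right) \right]
```

Since this loss pushes the length-normalized log-probability of $y^w$ up and that of $y^l$ down, a response that appears as $y^l$ in some pairs but as $y^w$ in others receives conflicting gradient signals, which is why we keep the base model's outputs as the rejected side across iterations.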
\\n\\n**Q6: Explanation for the performance gains on TruthfulQA**\", \"there_are_two_main_reasons_for_the_observed_performance_gains_on_the_truthfulqa_benchmark\": \"1. **Correlation between preference alignment and TruthfulQA:** TruthfulQA assesses the truthfulness of responses from language models, a goal that aligns closely with preference alignment. Our synthetic preference dataset, which includes instances emphasizing truthfulness, enhances the model's ability to accurately interpret context and generate truthful responses. This aligns with findings from Meng et al.\\\\[5\\\\], which states:\\n> \\\"Preference optimization methods consistently improve TruthfulQA performance, with some enhancements exceeding 10%. Similarly, we hypothesize that the preference dataset contains instances that emphasize truthfulness, which helps the model better understand the context and generate more truthful responses.\\\"\\n \\n2. **Utilization of SimPO Loss:** We employ SimPO loss for preference optimization, which has significantly boosted TruthfulQA performance compared to other methods, as detailed in Table 9 of the SimPO paper\\\\[5\\\\]. \\n \\n\\nIt's also worth noting that aside from TruthfulQA, the SynPO approach has led to significant improvements in other tasks. For instance, on EQ-Bench, we observed an improvement from a score of 46.79 to 55.82 with the Llama3-SynPO-4iter model. This alignment is particularly beneficial for tasks that involve safety and helpfulness, demonstrating the broad applicability of our approach.\\n\\nOverall, we greatly appreciate your thoughtful and insightful comments on our paper. We would be happy to have any follow-up discussion or address any further comments. \\n\\n**Reference:** \\n*\\\\[1\\\\] Liang P, Bommasani R, Lee T, et al. Holistic evaluation of language models\\\\[J\\\\]. arXiv preprint arXiv:2211.09110, 2022\\\\.* \\n\\n*\\\\[2\\\\] Yuan, Weizhe, et al. 
\\\"Self-rewarding language models.\\\" arXiv preprint arXiv:2401.10020 (2024)\\\\.* \\n\\n*\\\\[3\\\\] Wu, Tianhao, et al. \\\"Meta-rewarding language models: Self-improving alignment with llm-as-a-meta-judge.\\\" arXiv preprint arXiv:2407.19594 (2024).*\\n\\n*\\\\[4\\\\] Yin, Yueqin, et al. \\\"Self-Augmented Preference Optimization: Off-Policy Paradigms for Language Model Alignment.\\\" arXiv preprint arXiv:2405.20830 (2024).*\\n\\n*\\\\[5\\\\] Meng, Yu, Mengzhou Xia, and Danqi Chen. \\\"Simpo: Simple preference optimization with a reference-free reward.\\\" arXiv preprint arXiv:2405.14734 (2024).*\"}", "{\"summary\": \"The authors introduce SynPO (Self-Boosting Preference Optimization), a framework that enables LLMs to autonomously generate synthetic data and enhance their performance through iterative self-boosting. SynPO leverages a self-prompt generator and response improver to collaboratively produce high-quality synthetic preference data for training. This process allows the model to learn responses aligned with preferences, leading to performance improvements on benchmarks like the Open LLM Leaderboard, AlpacaEval 2.0, and Arena-Hard.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The authors contribute by demonstrating infinite generation potential through experiments that examine performance changes relative to the number of synthetic datasets. They also clarify the limitations by distinguishing this capacity from the sustained utility of such datasets over time.\\n\\n2. The contribution to diversity within the keyword list is effectively demonstrated, with supporting experiments that validate this aspect.\\n\\n3. Previous research has rarely considered multi-turn interactions; however, the authors establish the effectiveness of their proposed method under multi-turn dialogue conditions.\\n\\n4. 
The self-boosting approach employed in the optimization process is appropriate and well-suited for enhancing model performance.\", \"weaknesses\": \"1. An ablation study on the effect of excluding noise keywords is necessary for Lines 156\\u2013157.\\n\\n2. Analysis of the types of keywords is insufficient. The authors justify random sampling based on diversity, yet the study lacks analysis on which extraction forms might be more beneficial, or whether order affects outcomes.\\n\\n3. It would be interesting to see experiments that vary the number of keywords or increase them significantly.\\n\\n4. The most concerning issue is the weak validation of out-of-distribution data. Since the seed dataset reflects OpenAI GPT\\u2019s preferences, as do Alpaca 2.0 and Arena-Hard, this likely influences the performance shown in Table 2. Additionally, while Table 4 shows considerable improvement on TQA included in the UltraFeedback dataset, effects are minimal on other datasets, suggesting that SynPO does not fully resolve in-distribution bias issues. There are also doubts about whether this approach could be more efficient or effective than methods in other studies applying CoT rather than simple SFT.\", \"questions\": \"1. References are needed for studies on a word-to-text task [1] and similar augmentation research where word sets are extracted from sentences in existing datasets, combined to create new synthetic datasets, and enhanced in quality using contrastive learning [2].\\n\\n [1] *Lin, B. Y., Zhou, W., Shen, M., Zhou, P., Bhagavatula, C., Choi, Y., & Ren, X. (2020, November). CommonGen: A Constrained Text Generation Challenge for Generative Commonsense Reasoning. In Findings of the Association for Computational Linguistics: EMNLP 2020 (pp. 1823-1840).*\\n\\n [2] *Seo, J., Moon, H., Lee, J., Eo, S., Park, C., & Lim, H. S. (2023, December). CHEF in the Language Kitchen: A Generative Data Augmentation Leveraging Korean Morpheme Ingredients. 
In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (pp. 6014-6029).*\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Follow up to reviewer d9yw\", \"comment\": \"Dear Reviewer d9yw,\\n\\nWe would like to thank you again for your detailed reviews. We have updated our draft and added replies to your Cons with our latest experimental results.\\n\\nSince the rebuttal deadline is approaching soon, a lot of papers have finished the discussion. Given that your current score is 5, we would appreciate it if you could let us know if our responses have addressed your concerns satisfactorily. If your concerns have not been resolved, could you please let us know about it so that we have the opportunity to respond before the deadline?\\n\\nWe would be happy to have any follow-up discussions or address any additional concerns.\\n\\nThanks very much! Looking forward to your reply.\\n\\nBest,\\n\\nAuthors\"}" ] }
7vV8KZ7VEl
EVA-Gaussian: 3D Gaussian-based Real-time Human Novel View Synthesis under Diverse Camera Settings
[ "Yingdong Hu", "Zhening Liu", "Jiawei Shao", "Zehong Lin", "Jun Zhang" ]
The feed-forward based 3D Gaussian Splatting method has demonstrated exceptional capability in real-time human novel view synthesis. However, existing approaches are restricted to dense viewpoint settings, where camera view angles are less than 60 degrees. This limitation constrains their flexibility in free-viewpoint rendering across a wide range of camera view angle discrepancies. To address this limitation, we propose a real-time pipeline named EVA-Gaussian for 3D human novel view synthesis across diverse multi-view camera settings. Specifically, we first introduce an Efficient cross-View Attention (EVA) module to accurately estimate the position of each 3D Gaussian from the source images. Then, we integrate the source images with the estimated Gaussian position map to predict the attributes and feature embeddings of the 3D Gaussians. Moreover, we employ a recurrent feature refiner to correct artifacts caused by geometric errors in position estimation and enhance visual fidelity. To further improve synthesis quality, we incorporate a powerful anchor loss function for both 3D Gaussian attributes and human face landmarks. Experimental results on the THuman2.0 and THumansit datasets showcase the superiority of our EVA-Gaussian approach in rendering quality across diverse camera settings. Project page: https://anonymousiclr2025.github.io/iclr2025/EVA-Gaussian.
[ "Fast Human Reconstruction; Generalizable 3D Gaussian Splatting" ]
Reject
https://openreview.net/pdf?id=7vV8KZ7VEl
https://openreview.net/forum?id=7vV8KZ7VEl
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zxLyTUTfYk", "zMpyb5t2Ys", "ySfPmoqTjl", "wZvCyA5ryp", "uOY8wCKmeZ", "u13qjTR9uP", "sP60JU8V4O", "qyYQUCCDeu", "qHE9SxRyzx", "mitqTOnQRk", "l0tP9FwZAc", "jw6INzPC1k", "iL4hCOKQFE", "i0GTpWccct", "h0CS4NLpO4", "fB6zOmWblM", "f6Paph91v2", "ZFXcUVpc3d", "Z721CtlzlD", "XR5hV6TGpu", "PLs0kVrUlN", "OrT2XEMA7j", "NZ4jec2ExB", "MpmHmGczF5", "MBT1bckC1F", "LBSND0eTFa", "KQ4v6H52HJ", "Ilxb7zNQI1", "C0YvaBKmvT", "ATEVYg5eBt", "9xOBl48Q0W", "8WQGKo9w3r", "5I1A8DuSUn", "5BW9TRTxuj", "4DBTsgmsx9", "3GYcmQQATy", "1KIvypCN0H" ], "note_type": [ "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1731855462491, 1734924123518, 1731855172007, 1731926022780, 1731854666524, 1732532805750, 1730352428593, 1732583618693, 1732244234416, 1731854995786, 1731855627598, 1731855120723, 1732243540998, 1731854627833, 1731854395745, 1732599030025, 1732532796072, 1737523571171, 1731855228295, 1731854504298, 1732067454401, 1732244210536, 1730044045917, 1731854707677, 1732542528157, 1732243446720, 1732598947299, 1730731316044, 1731853770944, 1730622868155, 1732598797383, 1731854568852, 1731855039721, 1731855376152, 1731855419556, 1731854752437, 1731855703777 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3352/Authors" ], [ "ICLR.cc/2025/Conference/Submission3352/Area_Chair_6Ehb" ], [ 
"ICLR.cc/2025/Conference/Submission3352/Authors" ], [ "ICLR.cc/2025/Conference/Submission3352/Reviewer_bSU5" ], [ "ICLR.cc/2025/Conference/Submission3352/Authors" ], [ "ICLR.cc/2025/Conference/Submission3352/Authors" ], [ "ICLR.cc/2025/Conference/Submission3352/Reviewer_WLeF" ], [ "ICLR.cc/2025/Conference/Submission3352/Reviewer_WLeF" ], [ "ICLR.cc/2025/Conference/Submission3352/Authors" ], [ "ICLR.cc/2025/Conference/Submission3352/Authors" ], [ "ICLR.cc/2025/Conference/Submission3352/Authors" ], [ "ICLR.cc/2025/Conference/Submission3352/Authors" ], [ "ICLR.cc/2025/Conference/Submission3352/Authors" ], [ "ICLR.cc/2025/Conference/Submission3352/Authors" ], [ "ICLR.cc/2025/Conference/Submission3352/Authors" ], [ "ICLR.cc/2025/Conference/Submission3352/Authors" ], [ "ICLR.cc/2025/Conference/Submission3352/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3352/Authors" ], [ "ICLR.cc/2025/Conference/Submission3352/Authors" ], [ "ICLR.cc/2025/Conference/Submission3352/Reviewer_KDU6" ], [ "ICLR.cc/2025/Conference/Submission3352/Authors" ], [ "ICLR.cc/2025/Conference/Submission3352/Reviewer_bSU5" ], [ "ICLR.cc/2025/Conference/Submission3352/Authors" ], [ "ICLR.cc/2025/Conference/Submission3352/Reviewer_qNfu" ], [ "ICLR.cc/2025/Conference/Submission3352/Authors" ], [ "ICLR.cc/2025/Conference/Submission3352/Authors" ], [ "ICLR.cc/2025/Conference/Submission3352/Reviewer_KDU6" ], [ "ICLR.cc/2025/Conference/Submission3352/Authors" ], [ "ICLR.cc/2025/Conference/Submission3352/Reviewer_qNfu" ], [ "ICLR.cc/2025/Conference/Submission3352/Authors" ], [ "ICLR.cc/2025/Conference/Submission3352/Authors" ], [ "ICLR.cc/2025/Conference/Submission3352/Authors" ], [ "ICLR.cc/2025/Conference/Submission3352/Authors" ], [ "ICLR.cc/2025/Conference/Submission3352/Authors" ], [ "ICLR.cc/2025/Conference/Submission3352/Authors" ], [ "ICLR.cc/2025/Conference/Submission3352/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response 3/3 to Reviewer 
WLeF (Q2)\", \"comment\": \"**Q.2** **In Table 1, the paper presents the running time of each method wat a resolution of 256\\u00d7256. However, GPS-Gaussian declares that it can synthesize 2K-resolution novel views with 25 FPS. This situation is suggeseted to be explained more clearly.**\\n\\n**A.2** Thank you for this comment. In Table 1 of our paper, we present a comprehensive comparison between our proposed EVA-Gaussian method and existing feed-forward 3D Gaussian reconstruction methods, including GPS-Gaussian, PixelSplat, MVSplat, and MVSGaussian. We evaluate these methods using several metrics, PSNR, SSIM, and LPIPS, while also including inference time to assess real-time performance. \\n\\nIt is important to note that the high GPU memory demands of PixelSplat, MVSplat, and MVSGaussian limit their ability to process high-resolution source view images. To ensure a fair comparison, all methods are tested at a low resolution of 256\\u00d7256. As indicated in Table 1, our EVA-Gaussian method achieves the best performance, with an inference time that is merely 15 ms longer than GPS-Gaussian, but still 15 ms faster than the third fastest method.\\n\\nWhile GPS-Gaussian is capable of synthesizing 2K-resolution novel views with an impressive 25 FPS, it does so by using 1024\\u00d71024 source images as input. To ensure a fair assessment, in Fig. 1 of our paper, we have compared the reconstruction speeds of EVA-Gaussian and GPS-Gaussian at this higher resolution of 1024\\u00d71024, utilizing the same NVIDIA A800 GPU. The results demonstrate that EVA-Gaussian is only 17 ms slower than GPS-Gaussian, while significantly improving the visual quality of the synthesized images.\\n\\nIn addition, we would like to emphasize that both GPS-Gaussian and EVA-Gaussian generate 3D models capable of rendering at 2K resolution. 
To clarify this point and address any potential confusion, we have added a comparison of EVA-Gaussian and GPS-Gaussian for 2K-resolution novel view synthesis in the revised Appendix.\\n\\n\\n[1] SuperGlue: Learning feature matching with graph neural networks. In CVPR, 2020.\\n\\n[2] LoFTR: Detector-Free Local Feature Matching with Transformers. In CVPR, 2021.\\n\\n[3] DKM: Dense Kernelized Feature Matching for Geometry Estimation\\n\\n[4] He, Y., Yan, R., Fragkiadaki, K., & Yu, S. I. (2020). Epipolar transformers. In Proceedings of the ieee/cvf conference on computer vision and pattern recognition (pp. 7779-7788).\\n\\n[5] Geiger, Andreas, et al. \\\"GTA: A Geometry-Aware Attention Mechanism for Multi-View Transformers.\\\" (2023).\"}", "{\"metareview\": \"The paper presents a method for real-time novel view synthesis for humans from multi-view inputs, using 3D Gaussian Splatting. It relies on a cross-view attention module to estimate position for the 3D Gaussians, image features for attribute estimation and a recurrent refinement step. The paper is well-written and presents improved empirical results compared to recent state-of-the-art. However, the methodological choices like cross-view attention and refinement are known in prior works and while their combination is shown to be effective, the obtained abilities are not significantly superior to prior works like GPS-Gaussian. Thus, it is recommended that the paper is not ready for acceptance at ICLR.\", \"additional_comments_on_reviewer_discussion\": \"While the initial scores were weaker, the reviewers considered the author rebuttal, following which two of the reviewers raised their scores. KDU6 remains unconvinced on the contributions with respect to GPS-Gaussian. While qNfu raised their score after the rebuttal, it still leans towards rejection based on the scope of the problem and quality of results. WLeF requires clarifications and leans to accept once they are provided by the rebuttal. 
Reviewer bSU5 is the most positive after concerns on cross-domain generalization are addressed by the rebuttal, keeping their score at borderline accept. However, despite the author rebuttal, there was no strong support for acceptance among the reviewers.\"}", "{\"title\": \"Response 4/5 to Reviewer qNfu (Q4-6)\", \"comment\": \"**Q.4** **Does \\\"any\\\" in line 197 include views such as up and side-up perspectives?**\\n\\n**A.4** Thank you for this question. Yes, our model can infer a variety of camera viewpoints, including both up and side-up perspectives. To illustrate this capability, we have included the rendered results in Figure 11 of the revised Appendix.\\n\\n\\n**Q.5** **Why does regularizing opacities help ensure consistency? Could it contribute to transparency artifacts, as seen in the video results?**\\n\\n**A.5** Thank you for this question. We would like to emphasize that all feed-forward 3D Gaussian reconstruction methods, including GPS-Gaussian, MVSplat, and MVSGaussian, are based on the assumption that the positions of 3D Gaussians directly correspond to the depth maps inferred from source view images. However, this assumption is not theoretically guaranteed.\\n\\nIn our paper, we introduce regularization for both the scales and opacities of the 3D Gaussians to ensure depth consistency. Specifically, as theoretically proved in Appendix C, when the scales of the 3D Gaussians are sufficiently small and their opacities approach either 0 or 1, the rendered depth, which typically represents the depth of the 3D model, aligns precisely with the positions of the 3D Gaussians. This alignment ensures that when we use inferred depth maps to position the 3D Gaussians, the resulting depth of the 3D model remains consistent with the inferred depth maps. 
Moreover, these inferred depth maps are aligned with the ground truth human model depth during the supervision phase, thereby reinforcing overall depth consistency.\\n\\nAs for the transparency artifacts, it is worth noting that such artifacts are also observed in GPS-Gaussian. This suggests that such artifacts are likely due to discrepancies between the source views and the novel views, rather than the regularization terms themselves. Notably, the opacity regularization term $O_i \\\\log(O_i)$ does not promote transparency. Instead of allowing opacities to settle at intermediate values (e.g., 0.5), this term encourages opacities to converge towards the extremes of 0 or 1, achieving its minimum. Therefore, the regularization effectively minimizes the likelihood of transparency artifacts arising from the opacities of the 3D Gaussians.\\n\\n\\n**Q.6** **GPS results show broken hands and feet; why does the proposed method avoid this issue?**\\n\\n**A.6** Thank you for this question. GPS-Gaussian works well under dense camera settings, where camera view angles are less than 60 degrees. However, the adaptability of GPS-Gaussian is limited by its reliance on stereo-matching modules. The stereo-matching module first performs stereo rectification, which warps the source view images onto a common image plane, and then searches for correspondences along the x-axis. Unfortunately, this heavy reliance on stereo rectification can lead to distortion and incompleteness in the rectified images when the camera angles are large. This drawback is reported by GHG [2] and also mentioned in our paper in line 434. \\n\\nOur proposed method effectively overcomes this limitation by utilizing the EVA module, which removes the reliance on stereo rectification and enables the generation of high-fidelity 3D Gaussian position maps even under significant variations in source camera angles. 
The effectiveness of the EVA module stems from the integration of strong inductive bias related to camera settings, which significantly reduces both computational load and temporal costs, thereby enhancing the efficiency of 3D Gaussian position estimation.\"}", "{\"title\": \"Response 5/7 to Reviewer KDU6 (Q4)\", \"comment\": \"**Q.4** **Although the paper emphasizes that the proposed approach is real-time and adaptable to various camera settings, this adaptability seems largely due to the two-view correlation strategy, which should also apply to GPS-Gaussian. It is unclear what unique contributions in this work specifically enhance real-time performance and camera adaptability.**\\n\\n**A.4** Thank you for this comment. We acknowledge that GPS-Gaussian is indeed a real-time method and adaptable to other camera settings. However, the adaptability of GPS-Gaussian is limited by its reliance on stereo-matching modules. The stereo-matching module first performs stereo rectification, which warps the source view images onto a common image plane, and then searches for correspondences along the x-axis. Unfortunately, this heavy reliance on stereo rectification can lead to distortion and incompleteness in the rectified images when the camera angles are large. 
Consequently, when the camera angles exceed 60 degrees, GPS-Gaussian struggles to achieve satisfactory performance, as illustrated by the qualitative visualization results presented in the paper. In contrast, our proposed method effectively overcomes this limitation by utilizing the EVA module, which enables the generation of high-fidelity 3D Gaussian position maps even under significant variations in source camera angles. The effectiveness of the EVA module stems from the integration of strong inductive bias related to camera settings, which significantly reduces both computational load and temporal costs, thereby enhancing the efficiency of 3D Gaussian position estimation. To clarify the unique contributions of the EVA module, we provide a detailed explanation in the following:\\n\\nFor multiview correspondence retrieval across different image views, epipolar attention [H] has demonstrated its effectiveness by performing attention along the epipolar lines. This approach is based on the principle that a pixel in the source image corresponds to a pixel along the epipolar line in the target image. However, the sampling and attention processes in traditional epipolar attention are computationally and temporally intensive, as shown in Table 1. \\n\\nIn the context of feed-forward human reconstruction, where cameras are closely positioned and oriented towards the same point on the human body, the correspondence connections between matched pairs align parallel to the x-axis, as depicted in Figure 8 of the revised Appendix. This specific alignment allows us to simplify traditional epipolar attention. Unlike existing methods, such as [A], [I], and [J], which rely on extensive attention across broader pixel ranges, our approach focuses on nearby pixels along the x-axis within a 1D localized window. 
Moreover, considering that correspondences may not be perfectly aligned with the x-axis, we implement this attention mechanism within the deeper layers of the UNet architecture, as shown in Figure 9 of the revised Appendix. In these layers, the features of each pixel are aggregated from its neighboring pixels through preceding convolutional layers, thereby enhancing the robustness of feature matching. In addition, to mitigate the potential loss of multiview correspondences at the boundaries of local windows, we perform the attention mechanism twice, with the second iteration using a window shifted by half its size. Figure 10 in the revised Appendix illustrates the key differences between EVA and other attention mechanisms, demonstrating the efficiency gains achieved through our approach. \\n\\nIn summary, by simplifying the attention process and exploiting the specific camera configuration inductive biases, EVA-Gaussian not only enhances real-time performance but also significantly improves adaptability to diverse camera settings, surpassing the capabilities of existing methods like GPS-Gaussian.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThank you for handling our manuscript and providing valuable feedback. We hope that our responses have sufficiently addressed the concerns you raised. We welcome more discussion if you have more questions and suggestions. As the discussion deadline is approaching, we would be very grateful if you could take a moment to review our reply.\"}", "{\"summary\": \"The paper focuses on real-time human novel view synthesis with a new pipeline called EVA-Gaussian across diverse camera settings, based on a 3D Gaussian representation, which is composed of a position estimation stage, an attributes estimation stage, and a feature refinement stage. To improve position estimation, the paper designs an Efficient cross-View Attention (EVA) module to enhance multi-view correspondence retrieval. 
Then a recurrent feature refiner is proposed to mitigate geometric artifacts caused by position estimation errors. Experiments on THuman2.0 and THumansit demonstrate the effectiveness of the proposed pipeline.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1 A new human novel view synthesis pipeline composed of a position estimation stage, an attributes estimation stage, and a feature refinement stage with 3D Gaussian.\\n\\n2 An Efficient cross-View Attention module to enhance learning of 3D Gaussian.\\n\\n3 A recurrent feature refiner that fuses RGB images and feature maps to mitigate geometric artifacts caused by position estimation errors.\", \"weaknesses\": \"1 The motivation of EVA should be expressed more clearly. The EVA module uses cross-view attention to enhance 3D Gaussian position learning. However, the idea has been used in various feature matching methods to establish correspondences across different views, such as SuperGlue [1], LoFTR [2], DKM [3]. It is suggested to explain the module more clearly.\\n\\n[1] SuperGlue: Learning feature matching with graph neural networks. In CVPR, 2020.\\n\\n[2] LoFTR: Detector-Free Local Feature Matching with Transformers. In CVPR, 2021.\\n\\n[3] DKM: Dense Kernelized Feature Matching for Geometry Estimation\\n\\n\\n2 In Table 1, the paper presents the running time of each method at a resolution of 256\\u00d7256. However, GPS-Gaussian declares that it can synthesize 2K-resolution novel views with 25 FPS. This situation is suggested to be explained more clearly.\", \"questions\": \"The questions and suggestions are listed in the part of Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}
The authors addressed most of my concerns in the rebuttal phase, and thus I would like to raise my score to 6.\"}", "{\"comment\": \"# Dear Reviewer WLeF,\\n\\nWe kindly remind you to review our revisions and individual responses to evaluate if they can address your concerns. If our responses and additional results have sufficiently addressed your concerns, we would greatly appreciate your consideration of increasing your score. We are more than happy to address any remaining questions and concerns. We look forward to hearing from you again.\\n\\n**Best Regards,**\\n\\nThe Authors\"}", "{\"title\": \"Response 1/5 to Reviewer qNfu (Weakness 1-4)\", \"comment\": \"# Dear Reviewer qNfu,\\n\\nWe sincerely thank the reviewer for the constructive comments. We hope the following responses well address the concerns.\", \"as_for_the_weaknesses\": \"**W.1** **There are concerns about time expansion regarding the project page provided in the abstract.**\\n\\n**W.A.1** Thank you for raising this concern. We would like to clarify that the most recent update to our project page on our anonymous Github account (https://github.com/anonymousiclr2025) was made on Tue, 01 Oct 2024 03:17:03 GMT. This timestamp aligns with the submission guidelines provided, ensuring that our project page adheres to the required timelines. Therefore, there are no issues related to time expansion.\\n\\n**W.2** **Methods like GPS address the sparse-view human Gaussian splatting problem, which conflicts with lines 33\\u201335 of the abstract.**\\n\\n**W.A.2** Thank you for this comment. We acknowledge that our original abstract did not sufficiently clarify the concept of sparse camera settings. Specifically, in accordance with the camera settings in [2], our focus is on addressing sparse-view human Gaussian reconstruction when the camera view angle exceeds 60 degrees. In contrast, GPS-Gaussian employs camera view angles of 45 degrees, which does not fall within the sparse-view category in this context. 
We have revised the abstract to make this point clearer.\\n\\n**W.3** **The claim of \\\"diverse camera settings\\\" is somewhat overstated, as the method cannot resolve single-view settings, which are addressed by other approaches, such as HumanSplat [1].**\\n\\n**W.A.3** Thank you for this comment. We agree that EVA-Gaussian is not designed to effectively handle monocular camera settings due to its cross-view attention design. However, it is applicable to a wide range of multi-camera configurations in human reconstruction. Specifically, EVA-Gaussian effectively accommodates varying camera view angles and different numbers of cameras. As demonstrated in Table 2 of our paper, EVA-Gaussian performs effectively under different camera angles, while Table 3 illustrates its capability with varying numbers of cameras.\\n\\nIt is important to clarify that we did not claim our EVA-Gaussian can resolve all possible camera settings. Instead, our focus is on demonstrating its ability to effectively handle a diverse set of multi-view scenarios.\\n\\nIn addition, Humansplat [1] relies on the human SMPL model, which is difficult to obtain in real-world applications, as discussed in lines 120\\u2013129 of our paper. Moreover, it requires 9.3 seconds to generate novel views from source images, significantly hindering its practicality. In contrast, both GPS-Gaussian and our EVA-Gaussian are optimized for real-time human reconstruction tasks, making them more suitable for practical applications. **Therefore, in terms of inference speed and applicability, HumanSplat is not directly comparable to our proposed method.**\\n\\nTo address your concern, we have revised the abstract to clearly emphasize our focus on diverse multi-view camera settings.\\n\\n**W.4** **The method cannot infer the back view when only front stereo views are provided, as demonstrated in their video.**\\n\\n**W.A.4** Thank you for your comment. 
Indeed, incomplete reconstruction is a significant limitation inherent in all feed-forward 3D Gaussian reconstruction methods, **as these approaches primarily focus on reconstruction rather than generation.** Moreover, for certain downstream applications that require precise representations of the human body, such as human-robot interaction [3][4], **it is crucial to avoid hallucinating invisible parts, as this could lead to inaccurate and potentially misleading outcomes.**\\n\\nIn contrast to the previous SOTA method, GPS-Gaussian, our proposed EVA-Gaussian addresses the challenge of limited viewpoint angles by enabling feed-forward 3D reconstruction using source images captured from a broader range of camera angles. Notably, EVA-Gaussian can accommodate four or more input views, provided GPU resources are sufficient, as illustrated in our experimental results. This capability allows EVA-Gaussian to utilize a limited set of front-facing images to infer a comprehensive 3D model that can be rendered from a wider range of camera angles.\"}", "{\"title\": \"Response 1/2 to Reviewer bSU5 (Q1-3)\", \"comment\": \"# Dear Reviewer bSU5,\\n\\nWe sincerely thank the reviewer for the constructive comments. We hope the following responses well address the concerns.\\n\\n**Q.1** **The cross-domain generalizability should be further discussed. The paper seems to only conduct in-domain tests. As a generalizable method, it is necessary to evaluate its cross-domain generalizability on different datasets or data captured in the real world.**\\n\\n**A.1** Thank you for this comment. We totally agree that cross-domain generalizability is important for a generalizable reconstruction method. In response to your comment, we have conducted additional experiments and analyses to evaluate the cross-domain generalization capabilities of EVA-Gaussian. Specifically, we compare EVA-Gaussian with GPS-Gaussian across multiple diverse datasets, as detailed in the revised Appendix. 
Moreover, we have included visualization results of EVA-Gaussian applied to real-world data to demonstrate its practical applicability. Our experimental results suggest that EVA-Gaussian generalizes well across different domains and enhances robustness when provided with sufficient training data. \\n\\n**Q.2** **Resource cost. As multi-view images are aggregated by a unified attention module, there may be a higher GPU memory cost than previous works. Need more reported results, explanations, and discussion about the consumption.**\\n\\n**A.2** Thank you for this comment. We admit that the use of attention mechanisms does lead to increased GPU memory consumption, as discussed in our paper (lines 535-539) and illustrated in Table 1 below. However, compared to previous feed-forward methods such as PixelSplat, MVSplat, and GPS-Gaussian, our EVA-Gaussian method maintains highly competitive GPU memory efficiency. Specifically, EVA-Gaussian ranks as the second-lowest in GPU memory demands among these methods across all tested image resolutions. GPS-Gaussian is more memory-efficient because it utilizes stereo matching instead of attention mechanisms for estimating 3D Gaussian positions. This approach, however, results in significant distortion and incompleteness when handling large variations in camera view angles, as detailed in Sec. 5.2 and Sec. 5.3 of our paper. \\n\\nMoreover, as presented in Table 2 below, EVA-Gaussian outperforms all attention-based methods, including PixelSplat and MVSplat, by requiring the least GPU memory cost and achieving the fastest inference time. This efficiency underscores the effectiveness of our approach in balancing memory consumption with performance.\\n\\n**Q.3** **Typos. In figure 2, \\\"4.1\\\", \\\"4.2\\\", \\\"4.3\\\" should be \\\"4.2\\\", \\\"4.3\\\", \\\"4.4\\\" to align with the sections.**\\n\\n**A.3** Thank you very much for your careful review. 
We have corrected the typos in the revised paper.\"}", "{\"title\": \"Response 3/5 to Reviewer qNfu (Q1-3)\", \"comment\": \"As for the Questions:\\n\\n**Q.1** **Can the method only interpolate between input views, or is it able to hallucinate invisible parts?**\\n\\n**A.1** Thank you for this question. Indeed, our method, as well as all feed-forward 3D Gaussian reconstruction methods, cannot hallucinate invisible parts, **as these approaches primarily focus on reconstruction rather than generation.** Moreover, for certain downstream applications that require precise representations of the human body, such as human-robot interaction [3][4], **it is crucial to avoid hallucinating invisible parts, as this could lead to inaccurate and potentially misleading outcomes.**\\n\\nHowever, **this does not mean that our method is simply interpolating between input views**. Our approach first reconstructs the 3D model from multiple source views and then renders them onto a specific image plane for novel view synthesis. 3D reconstruction has long been a fundamental problem in computer vision, and it remains a common practice to estimate 3D models (e.g., point clouds, explicit neural radiance fields, and 3D Gaussians) from multiple images, reproject them into the 3D space, and remap them onto the novel image plane. This procedure heavily relies on the principles of multi-view geometry and **cannot be simply achieved by interpolating between input views**.\\n\\nIn conclusion, our method cannot hallucinate invisible parts and it synthesizes novel views with a strong reliance on multi-view geometry. We hope this addresses your concerns.\\n\\n**Q.2** **What causes the jumping and transparency issues in the video?**\\n\\n**A.2** Thank you for this question. In our experimental videos, our EVA-Gaussian method is implemented to ensure a fair comparison with GPS-Gaussian by also utilizing a pair of adjacent source view images for human model inference. 
This setting inherently restricts the novel view camera angles to lie within the angular range determined by these two source cameras. When the human model rotates beyond this range, additional image pairs from opposing viewpoints are necessary to accurately infer the model and synthesize novel views. This dependence on multiple image pairs leads to transitions between different inferences of the human model, which explains the abrupt transitions observed during smooth view changes. Moreover, the quality of novel view images is primarily guaranteed within the angular boundaries established by the source view cameras. As the novel view camera approaches these boundaries, transparency artifacts emerge due to the limited coverage of the inferred human model. \\n\\nIt is important to note that our EVA-Gaussian is specifically designed to address this issue. EVA-Gaussian can effectively handle scenarios where the angle of source view images is large, allowing novel view cameras to operate in a broader range while maintaining high-quality image synthesis. As shown in Table 2 of our paper, EVA-Gaussian can achieve novel view synthesis with a 90-degree angle, while GPS-Gaussian fails to work effectively. More importantly, EVA-Gaussian is capable of incorporating three or more source view images. This capability ensures consistent human model representation as the novel view camera transitions across intervals spanned by adjacent source cameras, effectively eliminating noticeable artifacts such as transparency and abrupt transitions.\\n\\nIn addition, these issues can be effectively mitigated by engineering designs, such as using more source view images as input or restricting the novel view camera angles to lie within the angular range determined by all source cameras.\\n\\n**Q.3** **How does the shifted window embedding strategy work? Are the window positions for cross-attention consistent across different image sources?**\\n\\n**A.3** Thank you for this question. 
Before carrying out Efficient cross-View Attention, the intermediate features are divided into fixed-length 1D windows along the x-axis, as shown in Fig. 10(C) of the revised Appendix. EVA performs cross-attention between localized windows at identical coordinates across multiple image views, out of the consideration that cameras are closely positioned and oriented towards the same point on the human body in the context of feed-forward human reconstruction. Moreover, considering that correspondences may not be perfectly aligned with the x-axis, we implement this attention mechanism within the deeper layers of the UNet architecture, as shown in Figure 9 of the revised Appendix. In these layers, the features of each pixel are aggregated from its neighboring pixels through preceding convolutional layers, thereby enhancing the robustness of feature matching. In addition, to mitigate the potential loss of multiview correspondences at the boundaries of local windows, we perform the attention mechanism twice, with the second iteration using a window shifted by half its size. This process is demonstrated in Fig. 10(D) of the revised Appendix.\"}", "{\"comment\": \"Dear Reviewer bSU5,\\n\\nThank you so much for your insightful feedback and for taking the time to review our work. We greatly appreciate your kind words. Your comments and insights have been invaluable in refining our paper.\"}", "{\"title\": \"Response 4/7 to Reviewer KDU6 (Q3)\", \"comment\": \"**Q.3** **The use of facial landmarks as a regularization method is intuitive but not unique. While many related works are focused on human head avatars, this approach is already well-established and should be appropriately credited and compared with existing methods.**\\n\\n**A.3** Thank you for this comment. We agree that the use of facial landmarks as a regularization method is a well-established method for human head avatars. 
For instance, previous studies such as [E] have employed facial landmarks as a regularization term to effectively control the movement of existing head avatars, while [F] has incorporated facial landmarks as anchors in a triplet loss framework to ensure that the remapping results align with human expressions.\\n\\nNonetheless, our approach differs significantly from these methods, which primarily focus on facial expression transfer. Instead, we integrate facial landmarks to regularize the 3D Gaussian attributes of human models, thereby enabling a more consistent and robust 3D reconstruction across multiple viewpoints. Although there are existing works, such as [G], that explore multi-view consistency to enhance model performance, their approaches are based on point cloud multi-view matching. This differs significantly from our 3D Gaussian regularization, as the attributes associated with 3D Gaussian representations, such as scales, opacities, and positions, are fundamentally different from those of point clouds. Importantly, our proposed anchor loss is specifically designed to regularize these 3D Gaussian attributes. As elaborated in Appendix C, we provide a theoretical analysis to demonstrate how this regularization facilitates depth alignment and significantly enhances the quality of novel view synthesis. To the best of our knowledge, our work is the first to apply facial landmark regularization to 3D Gaussian attributes in this context, thereby advancing the state of the art in 3D human reconstruction across multiple views.\"}", "{\"title\": \"Response 1/7 to Reviewer KDU6 (Q1)\", \"comment\": \"# Dear Reviewer KDU6,\\n\\nWe sincerely thank the reviewer for the constructive comments. We hope the following responses well address the concerns.\\n\\n**Q.1** **The proposed pipeline shows considerable similarity to GPS-Gaussian. While GPS-Gaussian uses cost volume to estimate depth from two selected views, this work relies on cross-view attention. 
Furthermore, embedding image features into depth maps is also similar in both approaches.**\\n\\n**A.1** Thank you for this comment. While our proposed pipeline may initially appear similar to GPS-Gaussian, there are significant differences in both depth estimation and feature embedding.\\n\\nTraditional cost-volume-based methods estimate depth by constructing a cost volume from multiple selected views. This process is computationally intensive, as it requires computing probability scores across a wide range of hypothesized depth values to form the volume. Consequently, this method becomes increasingly inefficient and less scalable when handling multiple views or higher-resolution images, which limits its effectiveness in more complex scenarios. While GPS-Gaussian adapts the cost volume to facilitate high-resolution processing with reduced temporal and computational resources by leveraging stereo-matching algorithms, it remains heavily dependent on stereo rectification. This reliance poses a significant limitation in sparse camera setups, especially when camera angles exceed 60 degrees.\\n\\nIn contrast, our EVA-Gaussian introduces an Efficient cross-View Attention (EVA) module that streamlines the aggregation of information across multiple views. The EVA module enhances computational efficiency by focusing on relevant features and enables robust depth estimation even under sparse camera settings. This novel attention design not only reduces computational overhead but also improves scalability, making our approach more suitable for complex and high-resolution environments.\\n\\nWe also note that several feed-forward 3D Gaussian reconstruction methods share a similar pipeline that embeds image features into depth maps to infer Gaussian attribute maps, like MVSGaussian [C] and TranSplat [D]. However, the specific procedures for feature embedding and post-processing in our pipeline differ significantly from existing approaches. 
Most current methods, such as PixelSplat and MVSplat, jointly estimate depth maps and attribute maps, making it difficult to supervise the details of the estimated depth map. Although GPS-Gaussian addresses this issue by separating the estimation of depth maps and attribute maps, it does not extend 3D Gaussian attributes to capture more spatial information for mitigating geometric errors. In contrast, we incorporate a **Feature Refinement** process to encode spatial information in Gaussian feature attributes and enhance the overall image quality in a recurrent manner. In addition, to ensure that 3D Gaussians align accurately with the surface of the human body, we introduce a **3D Gaussian Attribute Regularization** method. This approach maintains the consistency between the depth of the 3D model and the inferred multi-view depth, thereby ensuring multi-view consistency in critical facial areas across different source views.\\n\\nIn summary, our pipeline uniquely integrates **Efficient cross-View Attention**, **Feature Refinement**, and **3D Gaussian Attributes Estimation** to achieve real-time human 3D Gaussian reconstruction. These innovations not only distinguish our pipeline from GPS-Gaussian but also allow us to achieve SOTA performance across various datasets and camera settings under multiple metrics. A more detailed discussion of the novelty of the proposed pipeline is provided in the revised Appendix D.\"}
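[Editor's note] As background for the pipeline comparison in this response: the step these feed-forward methods share, turning an estimated per-pixel depth map into 3D Gaussian centers, is a standard pinhole unprojection. The sketch below is a generic NumPy illustration under assumed intrinsics/extrinsics conventions, not the authors' actual implementation.

```python
import numpy as np

def depth_to_gaussian_positions(depth, K, cam_to_world):
    """Unproject a per-pixel depth map into 3D Gaussian centers.

    depth: (H, W) depth along the camera z-axis; K: (3, 3) intrinsics;
    cam_to_world: (4, 4) camera-to-world transform. Returns (H, W, 3).
    """
    H, W = depth.shape
    xs, ys = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3)
    rays = pix @ np.linalg.inv(K).T            # camera-space rays per pixel
    pts_cam = rays * depth.reshape(-1, 1)      # scale each ray by its depth
    pts_h = np.concatenate([pts_cam, np.ones((H * W, 1))], axis=1)
    return (pts_h @ cam_to_world.T)[:, :3].reshape(H, W, 3)
```

One Gaussian center per pixel is what makes pixel-wise attribute maps (opacity, scale, color, features) directly attachable to the same grid.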
As the discussion deadline is approaching, we would be very grateful if you could take a moment to review our reply.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response 5/5 to Reviewer qNfu (Q7-8)\", \"comment\": \"**Q.7** **Since the method does not use human priors, would it also work for objects if the dataset were changed?**\\n\\n**A.7** Thank you for this question. Since EVA-Gaussian does not rely on explicit human priors, it has the potential to be applied to object-only datasets, provided that the camera priors, e.g., extrinsic and intrinsic parameters, remain consistent. This potential is demonstrated by our cross-dataset validation experiments detailed in the revised Appendix F. Specifically, a model trained on THuman2.0, a dataset exclusively containing human subjects, performs effectively on THumansit, which includes both humans and objects like chairs. This indicates that EVA-Gaussian may generalize well with object datasets.\\n\\nHowever, a critical limitation is the absence of a suitable dataset that features fixed and aligned camera parameters to evaluate the performance of EVA-Gaussian when trained on a human dataset. In addition, we currently lack a sufficiently large object dataset to train EVA-Gaussian effectively. Therefore, it remains uncertain to what extent EVA-Gaussian can effectively handle a wide variety of objects.\\n\\n\\n**Q.8** **What specific baseline is used for the ablation of the EVA module?**\\n\\n**A.8** Thank you for this question. In the ablation of the EVA module, we remove the EVA module from the Gaussian position estimation network $\\\\mathcal{D}^P_{\\\\theta_1}$, which reduces the network to a standard UNet architecture. 
All other components of the EVA-Gaussian pipeline remain unchanged, allowing us to evaluate the impact of the EVA module on the overall performance.\\n\\n\\n[1] HumanSplat: Generalizable Single-Image Human Gaussian Splatting with Structure Priors\\n\\n[2] Generalizable Human Gaussians for Sparse View Synthesis. In ECCV, 2024.\\n\\n[3] Reconstructing human hand pose and configuration using a fixed-base exoskeleton. In ICRA.\\n\\n[4] Immfusion: Robust mmwave-rgb fusion for 3d human body reconstruction in all weather conditions. In ICRA.\\n\\n[5] https://web.twindom.com/twinstant-mobile-full-body-3d-scanner/\\n\\n[6] Volumetric Avatar Reconstruction with Spatio-Temporally Offset RGBD Cameras. In VR 2023\"}", "{\"title\": \"Response 2/7 to Reviewer KDU6 (Q2)\", \"comment\": \"**Q.2** **The novelty and specific contributions of the cross-view attention module are unclear. Cross-view transformers, as implemented here, have been extensively explored in prior work, such as [A] and [B]. A more thorough analysis and additional experiments are needed to compare the proposed EVA module with existing techniques.**\\n\\n**A.2** Thank you for this comment. We agree with the reviewer that cross-view transformers have been extensively explored and have demonstrated impressive performance across various applications. However, we would like to clarify that our EVA module is specifically tailored for the task of feed-forward human 3D Gaussian reconstruction, which necessitates reconstructing high-resolution 3D human models from images (e.g., $1024\\\\times1024$) in real time, ideally within 100 milliseconds. Unlike general cross-view transformers, EVA leverages a strong inductive bias specifically designed for the camera settings in this task. By performing cross-view attention exclusively among nearby pixels within a shifted window, EVA achieves localized attention. 
This approach enhances computational efficiency while preserving essential spatial relationships critical for accurate 3D reconstruction. \\n\\nThe cross-view model [A] referenced by the reviewer is specialized for segmentation tasks and operates between BEV features and RGB image features. In this framework, the BEV feature map serves as the query, while multi-view image features act as the keys and values. In contrast, our EVA performs cross-attention directly between multiple source view image features, enabling a more integrated and seamless fusion of information across views. This fundamental difference allows EVA to better address the requirements of 3D Gaussian reconstruction. Moreover, the cross-view model [B] mentioned is computationally intensive, which requires several seconds for inference even at a lower resolution of $256\\\\times256$. Similarly, large cross-view transformer modules used in 3D generation tasks suffer from significant limitations in speed and GPU memory consumption, making them impractical for our real-time requirements. We have discussed these limitations in our manuscript (lines 145-148) to highlight the inapplicability of such models.\\n\\nThe most closely related attention mechanisms to our work are the epipolar attention used by PixelSplat and the self-attention employed by MVSplat. As shown in Table 1, we have conducted a comparative analysis of our module against these approaches, focusing on both temporal and computational costs. We have also adapted the attention mechanism from [A] to our task to ensure a fair comparison. 
The results in Table 1 demonstrate the effectiveness and superiority of our EVA module compared to these existing attention mechanisms.\\n\\nTo address your concerns, we have clarified the novelty and contributions of the EVA module in the revised manuscript, including the comparison results in the supplementary material.\"}", "{\"comment\": \"Thank you for the detailed feedback and the extensive additional experiments.\\n\\nAfter carefully reviewing the rebuttal and considering the comments from other reviewers, I still view the Efficient Cross-View Attention Module, Feature Refinement, and Attribute Regularization as incremental contributions that provide limited new insights into the problem. While these components enhance an existing pipeline, they do not address the underlying fundamental challenges. However, I am impressed by the efficiency of the EVA module. I remain inclined toward a negative rating but will not strongly oppose acceptance if there is substantial support from the other reviewers.\"}", "{\"comment\": \"# Dear Reviewers qNfu,\\n\\nWe kindly remind you to review our revisions and individual responses to evaluate if they can address your concerns. If our responses and additional results have sufficiently addressed your concerns, we would greatly appreciate your consideration of increasing your score. We are more than happy to address any remaining questions and concerns. We look forward to hearing from you again.\\n\\n**Best Regards,**\\n\\nThe Authors\"}", "{\"summary\": \"This paper proposes a novel EVA-Gaussian that uses multi-view images to synthesize human novel views. It introduces multi-view transformer into the field of human NVS and makes improvements to improve efficiency. It also proposes two concise feature refinement and anchor loss to enhance detail performance. 
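[Editor's note] For readers trying to picture the localized attention described in this response, the following is a minimal single-head NumPy sketch of windowed cross-view attention between two feature maps, with a second pass over half-shifted windows. Learned Q/K/V projections, multi-head structure, and the handling of more than two views are omitted, and the wrap-around padding at the image border is this sketch's own simplification.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def windowed_cross_view_attention(feat_q, feat_kv, window=8):
    # Split each row into fixed-length 1D windows along the x-axis and
    # attend only between windows at identical coordinates in the two views.
    H, W, C = feat_q.shape
    q = feat_q.reshape(H, W // window, window, C)
    k = feat_kv.reshape(H, W // window, window, C)
    scores = np.einsum('hnwc,hnvc->hnwv', q, k) / np.sqrt(C)
    out = np.einsum('hnwv,hnvc->hnwc', softmax(scores), k)
    return out.reshape(H, W, C)

def eva_style_attention(feat_q, feat_kv, window=8):
    # First pass on aligned windows, then a second pass on windows shifted
    # by half the window size to recover correspondences cut at boundaries.
    out = windowed_cross_view_attention(feat_q, feat_kv, window)
    shift = window // 2
    out = windowed_cross_view_attention(
        np.roll(out, shift, axis=1), np.roll(feat_kv, shift, axis=1), window)
    return np.roll(out, -shift, axis=1)
```

Because each output is a convex combination of key-side features, attending to a constant map returns that constant, which makes the sketch easy to sanity-check; the cost per row is linear in the number of windows rather than quadratic in image width, which is where the efficiency claim comes from.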
Experiments on two prevailing datasets can well validate the claimed contributions and state-of-the-art performance compared to existing methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Performance is good both quantitatively and qualitatively. The results are promising.\", \"This work can work with significant changes in camera viewpoint angles.\", \"The paper is easy to follow.\"], \"weaknesses\": [\"The cross-domain generalizability should be further discussed. The paper seems to only conduct in-domain tests. As a generalizable method, it is necessary to evaluate its cross-domain generalizability on different datasets or data captured in the real world.\", \"Resource cost. As multi-view images are aggregated by a unified attention module, there may be a higher GPU memory cost than previous works. Need more reported results, explanations, and discussion about the consumption.\", \"Typos. In figure 2, \\\"4.1\\\", \\\"4.2\\\", \\\"4.3\\\" should be \\\"4.2\\\", \\\"4.3\\\", \\\"4.4\\\" to align with the sections.\"], \"questions\": \"Refer to weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response 6/7 to Reviewer KDU6 (Q5)\", \"comment\": \"**Q.5** **The experiment videos show noticeable artifacts, similar to those seen in GPS-Gaussian, including transparent Gaussians in novel views and abrupt transitions during smooth view changes.**\\n\\n**A.5** In our experimental videos, our EVA-Gaussian method is implemented to ensure a fair comparison with GPS-Gaussian by also utilizing a pair of adjacent source view images for human model inference. This setting inherently restricts the novel view camera angles to lie within the angular range determined by these two source cameras. 
When the human model rotates beyond this range, additional image pairs from opposing viewpoints are necessary to accurately infer the model and synthesize novel views. This dependence on multiple image pairs leads to transitions between different inferences of the human model, which explains the abrupt transitions observed during smooth view changes. Moreover, the quality of novel view images is primarily guaranteed within the angular boundaries established by the source view cameras. As the novel view camera approaches these boundaries, transparency artifacts emerge due to the limited coverage of the inferred human model. \\n\\nIt is important to note that our EVA-Gaussian is specifically designed to address this issue. EVA-Gaussian can effectively handle scenarios where the angle of source view images is large, allowing novel view cameras to operate in a broader range while maintaining high-quality image synthesis. As shown in Table 2 of our paper, EVA-Gaussian can achieve novel view synthesis with a 90-degree angle, while GPS-Gaussian fails to work effectively. More importantly, EVA-Gaussian is capable of incorporating three or more source view images. This capability ensures consistent human model representation as the novel view camera transitions across intervals spanned by adjacent source cameras, effectively eliminating noticeable artifacts such as transparency and abrupt transitions.\"}
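[Editor's note] The angular-range constraint described in this response can be made concrete with a small geometric check; the sketch below (an illustration by the editor, not part of the method) tests whether a novel-view direction lies inside the span of two source-camera viewing directions, all taken as unit vectors toward a common look-at point.

```python
import numpy as np

def within_source_span(novel_dir, src_a, src_b):
    """True if novel_dir lies inside the angular range spanned by the two
    source directions (all inputs are unit vectors)."""
    def angle(u, v):
        # Clamp guards against tiny floating-point overshoot of |dot| > 1.
        return np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))
    span = angle(src_a, src_b)
    return bool(angle(novel_dir, src_a) <= span and angle(novel_dir, src_b) <= span)
```

A rendering loop could use such a test to keep the novel camera inside the well-covered range, or to decide when to switch to a different pair of source views.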
Additionally, it does not seem appropriate to claim that not addressing unseen areas is an advantage through forced explanations.\"}", "{\"comment\": \"Dear Reviewer KDU6,\\n\\nThank you so much for your insightful feedback and for taking the time to review our work. We greatly appreciate your kind words and the updated score. Your comments and insights have been invaluable in refining our paper.\"}", "{\"comment\": \"# Dear Reviewer qNfu,\\n\\nThank you for your feedback and for taking the time to review our work.\\n\\nWe would like to clarify the importance of this task and our contributions to society.\\n\\n**Recovering novel view images from a set of images captured by well-posed cameras has long been a fundamental task for real-time human novel view synthesis.** Numerous studies, including those based on Signed Distance Fields (SDF) such as PIFu [1], PIFuHD [2], and Function4D [3], as well as methods based on 3D Gaussian splatting like GHG [4], GPS-Gaussian [5], and others [6][7], have aimed to solve this task under well-posed camera settings. Subsequent works have demonstrated that these algorithms can be effectively deployed in real-world systems, such as VirtualCube [8], Tele-Aloha [9], and GPS-Gaussian+ [10], where they perform well in critical applications like holographic communication and human-robot interaction.\\n\\nOur approach, EVA-Gaussian, operates within the same framework as previous methods, utilizing feed-forward 3D Gaussian reconstruction techniques. Our main contribution is that we are the first to fully leverage the priors inherent in the camera settings. We have designed powerful components, including the EVA module, feature refinement module, and anchor loss, which have proven exceptionally beneficial for the task of human novel view synthesis, whether under dense camera settings (with camera angles less than 60 degrees) or sparse camera settings (with camera angles larger than 60 degrees). 
Notably, our work consistently outperforms previous methods in terms of the quality of novel view images while maintaining reasonable computational and temporal costs.\\n\\nIn addition, **none of the aforementioned methods, including Function4D [3], PixelSplat [6], MVSplat [11], MVSGaussian [12], GHG [4], GPS-Gaussian [5], VirtualCube [8], Tele-Aloha [9], and GPS-Gaussian+ [10], hallucinate invisible parts**, as all these works aim to recover a realistic human novel view in real-time. This realistic reconstruction can benefit **certain** downstream tasks such as human-robot interaction. We also want to point out that all these methods can reconstruct a complete human model, given a sufficient number of surrounding source view images (for example, a minimum of four surrounding views in EVA-Gaussian). \\n\\nOn the other hand, there are indeed other works that leverage generative models [13][14] or human prior models [15][16] (e.g., SMPL) for human novel view synthesis. We fully acknowledge that these works contribute significantly to human avatar reconstruction, achieving complete 3D human models from limited front views. However, this constitutes a different line of research, often accompanied by high temporal costs (e.g., HumanSplat takes 100 times longer than EVA-Gaussian) and may yield unrealistic reconstruction results. Real-time human reconstruction for novel view synthesis emphasizes achieving real-time and realistic reconstruction results and benefits many important downstream tasks such as holographic communication. 
All these points have been thoroughly discussed in our paper.\\n\\nFurthermore, we would like to claim that **our EVA-Gaussian is currently the most effective solution for the task of real-time human novel view synthesis.**\\n\\n\\n**Best Regards,**\\n\\nThe Authors\\n\\n[1] PIFu: Pixel-Aligned Implicit Function for High-Resolution Clothed Human Digitization, in CVPR 2019\\n\\n[2] PIFuHD: Multi-Level Pixel-Aligned Implicit Function for High-Resolution 3D Human Digitization, in CVPR 2020\\n\\n[3] Function4D: Real-time Human Volumetric Capture from Very Sparse Consumer RGBD Sensors, in CVPR 2021\\n\\n[4] Generalizable Human Gaussians for Sparse View Synthesis, in ECCV 2024\\n\\n[5] GPS-Gaussian: Generalizable Pixel-wise 3D Gaussian Splatting for Real-time Human Novel View Synthesis, in CVPR 2024\\n\\n[6] Learning to Infer Implicit Surfaces without 3d Supervision, in NeurIPS 2019\\n\\n[7] Deep Volumetric Video from Very Sparse Multi-view Performance Capture, in ECCV 2018\\n\\n[8] VirtualCube: An Immersive 3D Video Communication System\\n\\n[9] Tele-Aloha: A Low-budget and High-authenticity Telepresence System Using Sparse RGB Cameras, in SIGGRAPH 2024\\n\\n[10] GPS-Gaussian+: Generalizable Pixel-wise 3D Gaussian Splatting for Real-Time Human-Scene Rendering from Sparse Views\\n\\n[11] MVSplat: Efficient 3D Gaussian Splatting from Sparse Multi-View Images, in ECCV 2024\\n\\n[12] Fast Generalizable Gaussian Splatting Reconstruction from Multi-View Stereo, in ECCV 2024\\n\\n[13] HumanSplat: Generalizable Single-Image Human Gaussian Splatting with Structure Priors\\n\\n[14] StdGEN: Semantic-Decomposed 3D Character Generation from Single Images\\n\\n[15] Expressive Whole-Body 3D Gaussian Avatar\\n\\n[16] GaussianAvatar: Towards Realistic Human Avatar Modeling from a Single Video via Animatable 3D Gaussians\"}", "{\"summary\": \"This paper proposes a real-time human novel view synthesis framework that leverages 3D Gaussians as the core representation. 
The approach employs a cross-view attention module to estimate geometric properties for each 3D Gaussian and uses image features to predict their attributes. The novel view image is generated through a splatting technique refined in a recurrent manner.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The experimental results demonstrate improved performance over GPS-Gaussian, particularly in the facial region.\", \"weaknesses\": \"The proposed pipeline shows considerable similarity to GPS-Gaussian. While GPS-Gaussian uses cost volume to estimate depth from two selected views, this work relies on cross-view attention. Furthermore, embedding image features into depth maps is also similar in both approaches.\\n\\nThe novelty and specific contributions of the cross-view attention module are unclear. Cross-view transformers, as implemented here, have been extensively explored in prior work, such as [A] and [B]. A more thorough analysis and additional experiments are needed to compare the proposed EVA module with existing techniques.\\n\\nThe use of facial landmarks as a regularization method is intuitive but not unique. While many related works are focused on human head avatars, this approach is already well-established and should be appropriately credited and compared with existing methods.\\n\\nAlthough the paper emphasizes that the proposed approach is real-time and adaptable to various camera settings, this adaptability seems largely due to the two-view correlation strategy, which should also apply to GPS-Gaussian. It is unclear what unique contributions in this work specifically enhance real-time performance and camera adaptability.\\n\\nThe experiment videos show noticeable artifacts, similar to those seen in GPS-Gaussian, including transparent Gaussians in novel views and abrupt transitions during smooth view changes.\\n\\n[A] Zhou, Brady, and Philipp Kr\u00e4henb\u00fchl. 
\\\"Cross-view transformers for real-time map-view semantic segmentation.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022.\\n[B] Pham, Trung X., Zhang Kang, and Chang D. Yoo. \\\"Cross-view Masked Diffusion Transformers for Person Image Synthesis.\\\" arXiv preprint arXiv:2402.01516, 2024.\", \"questions\": \"Given the points raised in the weaknesses, could the authors clarify the main novelty and contribution of this paper?\\nWhat specific advantages does the proposed EVA module offer?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"General Response\", \"comment\": [\"# Dear Reviewers, ACs, and PCs,\", \"We sincerely thank you for your dedication, support, and insightful feedback on our paper. Your constructive comments have greatly assisted us in enhancing the quality of our paper. We have carefully considered all the feedback, addressed each question, and included new experimental results to strengthen our study. 
Below, we summarize the main revisions we have made:\", \"## Details on the EVA Module\", \"[KDU6] [WLeF] We have added a **comparison between EVA and existing attention mechanisms** in Appendix D to clarify its contributions and advantages;\", \"[KDU6] [qNfu] We have included **more structure and operational details** in Appendix D to provide a clearer understanding of the EVA module's functionality.\", \"## More Experimental Results\", \"[KDU6] [bSU5] A comprehensive **comparison of the computational cost** among our method, GPS-Gaussian, and existing attention mechanisms has been added in Appendix D to highlight efficiency gains;\", \"[qNfu] [WLeF] We have included **visualization** results from novel view cameras at 1) **random position** and 2) **higher resolution** in Appendix E to demonstrate the effectiveness of our method in diverse scenarios;\", \"[bSU5] We have incorporated **cross-dataset validation** results in Appendix F to validate the robustness of our method across different datasets;\", \"[bSU5] An evaluation using **real-world data** has been added in Appendix G to further validate our approach in practical applications.\", \"## Improvements in Writing and Presentation\", \"[qNfu] We have included Humansplat in the comparison in Sec. 2;\", \"[qNfu] The term \\\"sparse camera setting\\\" has been clearly defined in the abstract to ensure clarity;\", \"[qNfu] The phrase \\\"diverse camera setting\\\" has been revised to \\\"diverse multi-view camera setting\\\" in the abstract for improved specificity;\", \"[bSU5] Typos in Figure 2 have been corrected.\", \"We kindly invite the reviewers to examine our revisions and detailed responses. We are more than willing to address any remaining questions or concerns. If our responses and additional results sufficiently address your concerns, we would greatly appreciate your consideration of increasing your scores. 
We are truly grateful for your engagement and invaluable suggestions, which have significantly contributed to enhancing our work.\", \"**Best Regards,**\", \"The Authors\"]}", "{\"summary\": \"The paper proposes a multiview-based method for human novel view synthesis using a 3D Gaussian representation, which consists of three main steps.\\nFirst, it estimates position maps with cross-view attention. Next, it combines the position maps and raw images to estimate Gaussian parameters along with feature embedding. Finally, a feature refiner is applied to correct artifacts by splatting features and refining the output image.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": [\"The paper is well-written, well-structured, and clear.\", \"The proposed EVA module is effective in initializing the Gaussian position, outperforming one-step Gaussian parameter regression.\", \"Incorporating features with original 3D Gaussian and splatting to refine rendering results is interesting.\"], \"weaknesses\": [\"There are concerns about time expansion regarding the project page provided in the abstract.\", \"Methods like GPS address the sparse-view human Gaussian splatting problem, which conflicts with lines 33\\u201335 of the abstract.\", \"The claim of \\\"diverse camera settings\\\" is somewhat overstated, as the method cannot resolve single-view settings, which are addressed by other approaches, such as HumanSplat [1].\", \"The method cannot infer the back view when only front stereo views are provided, as demonstrated in their video.\", \"The ablation study implies that the effectiveness of anchor loss is limited.\", \"There are no examples of in-the-wild scenarios presented.\", \"The video exhibits issues with jumping and transparency.\", \"[1] HumanSplat: Generalizable Single-Image Human Gaussian Splatting with Structure Priors\"], \"questions\": [\"Can the method only interpolate between input views, or is it able to hallucinate invisible 
parts?\", \"What causes the jumping and transparency issues in the video?\", \"How does the shifted window embedding strategy work? Are the window positions for cross-attention consistent across different image sources?\", \"Does \\\"any\\\" in line 197 include views such as up and side-up perspectives?\", \"Why do regularizing opacities help ensure consistency? Could it contribute to transparency artifacts, as seen in the video results?\", \"GPS results show broken hands and feet; why does the proposed method avoid this issue? Since the method does not use human priors, would it also work for objects if the dataset were changed?\", \"What specific baseline is used for the ablation of the EVA module?\", \"---\", \"I may change my opinion depending on the authors' rebuttal and whether they can address my concerns.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"# Dear Reviewer KDU6,\\n\\nThank you once again for your insightful feedback and for taking the time to review our work. \\n\\nFinally, we would like to clarify the importance of this task and our contributions to society.\\n\\nRecovering novel view images from a set of images captured by well-posed cameras has long been a fundamental task for real-time human novel view synthesis. Numerous studies, including those based on Signed Distance Fields (SDF) such as PIFu [A], PIFuHD [B], and Function4D [C], as well as methods based on 3D Gaussian splatting like GHG [D], GPS-Gaussian [E], and others [F][G], have aimed to solve this task under well-posed camera settings. 
Subsequent works have demonstrated that these algorithms can be effectively deployed in real-world systems, such as VirtualCube [H], Tele-Aloha [I], and GPS-Gaussian+ [J], where they perform well in critical applications like holographic communication and human-robot interaction.\\n\\nOur work utilizes the same framework as these previous methods and employs a feed-forward 3D Gaussian reconstruction pipeline similar to other works such as PixelSplat, MVSplat, MVGaussian, and GPS-Gaussian. Although these works may appear similar at a high level, each has its unique designs in the underlying model architecture. Our EVA-Gaussian method fully leverages the priors inherent in the camera settings and introduces powerful components, including the EVA module, feature refinement module, and anchor loss. These innovations have proven exceptionally beneficial for the task of human novel view synthesis, whether under dense camera settings (with camera angles less than 60 degrees) or sparse camera settings (with camera angles larger than 60 degrees). 
Notably, our work consistently outperforms previous methods in terms of the quality of novel view images while maintaining reasonable computational and temporal costs.\\n\\nMoreover, we would like to claim that **our EVA-Gaussian is currently the most effective solution for the task of real-time human novel view synthesis.**\\n\\n\\n**Best Regards,**\\n\\nThe Authors\\n\\n[A] PIFu: Pixel-Aligned Implicit Function for High-Resolution Clothed Human Digitization, in CVPR 2019\\n\\n[B] PIFuHD: Multi-Level Pixel-Aligned Implicit Function for High-Resolution 3D Human Digitization, in CVPR 2020\\n\\n[C] Function4D: Real-time Human Volumetric Capture from Very Sparse Consumer RGBD Sensors, in CVPR 2021\\n\\n[D] Generalizable Human Gaussians for Sparse View Synthesis, in ECCV 2024\\n\\n[E] GPS-Gaussian: Generalizable Pixel-wise 3D Gaussian Splatting for Real-time Human Novel View Synthesis, in CVPR 2024\\n\\n[F] Learning to Infer Implicit Surfaces without 3d Supervision, in NeurIPS 2019\\n\\n[G] Deep Volumetric Video from Very Sparse Multi-view Performance Capture, in ECCV 2018\\n\\n[H] VirtualCube: An Immersive 3D Video Communication System\\n\\n[I] Tele-Aloha: A Low-budget and High-authenticity Telepresence System Using Sparse RGB Cameras, in SIGGRAPH 2024\\n\\n[J] GPS-Gaussian+: Generalizable Pixel-wise 3D Gaussian Splatting for Real-Time Human-Scene Rendering from Sparse Views\"}", "{\"title\": \"Response 3/7 to Reviewer KDU6 (Q2 Table)\", \"comment\": \"**Table 1** Comparison of the temporal and computational efficiency among different attention modules.\\n\\n| Input Size || 2 * 64 * 128 * 128 || |2 * 64 * 256 * 256 || |2 * 32 * 256 * 256 ||\\n|---------------------------------------|-----------------------|----------|------------------|-----------------------|----------|------------------|-----------------------|----------|------------------|\\n| Module | Time (Inference Once) | Params | GPU Memory Usage | Time (Inference Once) | Params | GPU Memory Usage | Time 
(Inference Once) | Params | GPU Memory Usage |\\n| **Cross Attention (in [A])** | 0.159s | 0.0336M | 49736 MiB | |Out Of Memory| | |Out Of Memory| |\\n| **Self-Attention (in MVSplat)** | 0.0353s | 0.789M | 3808 MiB | 0.304s | 0.789M | 36290 MiB | 0.263s | 0.198M | 32536 MiB |\\n| **Epipolar Attention (in PixelSplat)**| 0.0583s | 5.062M | 15554 MiB | 0.193s | 5.062M | 60562 MiB | 0.169s | 3.194M | 59404 MiB |\\n| **EVA Attention (window size=16)** | 0.007225s | 0.0661M | 944 MiB | 0.0177s | 0.0661M | 2200 MiB | 0.0143s | 0.0167M | 1404 MiB |\\n| **EVA Attention (window size=32)** | 0.006533s | 0.0661M | 944 MiB | 0.0149s | 0.0661M | 2192 MiB | 0.0116s | 0.0167M | 1404 MiB |\\n| **EVA Attention (window size=64)** | 0.006307s | 0.0661M | 944 MiB | 0.0139s | 0.0661M | 2192 MiB | 0.0106s | 0.0167M | 1404 MiB |\\n| **EVA Attention (window size=256)** | 0.006565s | 0.0661M | 944 MiB | 0.0234s | 0.0661M | 2192 MiB | 0.0167s | 0.0167M | 1404 MiB |\"}", "{\"title\": \"Response 2/5 to Reviewer qNfu (Weakness 5-7)\", \"comment\": \"**W.5** **The ablation study implies that the effectiveness of anchor loss is limited.**\\n\\n**W.A.5** Thank you for this comment. It is important to note that the anchor loss is specifically designed to regularize the critical facial areas to enhance the overall visual quality of images. Given that facial regions typically constitute only a small fraction of the human body within images, the resulting quantitative performance gains may seem limited. However, as demonstrated by the visualization results presented in our ablation study, the model without anchor loss shows noticeable distortions, particularly in facial regions. The incorporation of anchor loss effectively mitigates these distortions, ensuring that essential facial details are preserved. 
This underscores the important role of anchor loss in improving human visual perception, even if the improvements in quantitative metrics appear modest.\\n\\n**W.6** **There are no examples of in-the-wild scenarios presented.**\\n\\n**W.A.6** Thank you for this comment. We would like to clarify that our primary focus is not on processing casually captured in-the-wild data, such as videos used for reconstructing human avatars. Instead, we concentrate on scenarios where cameras are strategically positioned and images are synchronously captured, for example, in holographic communication systems (see lines 53-54 in the paper). While there are existing systems [6] and in-the-wild datasets from GPS-Gaussian that align with our settings, these datasets are not publicly available. \\n\\nTo address your concern, we have evaluated our model on the publicly available HuMMan dataset, which is a real-world dataset that features a wide camera view angle of 90 degrees, in Figure 13 of the revised Appendix. Notably, GPS-Gaussian fails to generate reasonable results, so we have not included its outcomes. The results demonstrate that EVA-Gaussian can generalize effectively to real-world scenarios.\\n\\n**W.7** **The video exhibits issues with jumping and transparency.**\\n\\n**W.A.7** The observed jumping and transparency artifacts arise when novel view cameras approach the boundaries of the 3D model or when there is a significant deviation between the orientations of the novel view cameras and the source view cameras. A more detailed explanation of this phenomenon can be found in the response to your Question 2.\\n\\nWe note that these problems can be effectively mitigated through thoughtful engineering strategies. For instance, increasing the number of source view images used as input can enhance stability. In addition, constraining the novel view cameras to remain within the boundaries defined by the two source view images can further mitigate these issues. 
We will incorporate these strategies to enhance the visual stability and integrity of the rendered videos on our project page.\"}", "{\"title\": \"Response 1/3 to Reviewer WLeF(Q1)\", \"comment\": \"# Dear Reviewer WLeF,\\n\\nWe sincerely thank the reviewer for the constructive comments. We hope the following responses well address the concerns.\\n\\n**Q.1** **The motivation of EVA should be expressed more clearly. The EVA module uses cross-view attention to enhance 3D Gaussian position learning. However, the idea has been used in various feature matching methods to establish correspondences across different views, such as SuperGlue [1], LoFTR [2], DKM [3]. It is suggested to explain the module more clearly.**\\n\\n**A.1** Thank you for the suggestion. We agree that many feature matching methods utilize cross-view attention to establish correspondences across different views, such as the Attentional Graph Neural Network in SuperGlue [1] and the Local Feature Transformer in LoFTR [2]. To address your concern, we have provided a more detailed explanation in the revised Appendix to clarify the motivation, novelty, and contributions of EVA.\\n\\nWhile our EVA module shares the underlying principle of cross-view attention with these approaches, it introduces significant innovations tailored to the specific requirements of our application. For instance, SuperGlue [1] leverages an Attentional Graph Neural Network to selectively focus on relevant keypoint image features, integrating embedded 3D positional information to construct a local graph. However, the attention mechanisms in this framework are restricted to interactions among these keypoints. In contrast, our EVA module employs cross attention more broadly. Although it also focuses on the most relevant pixels, it is designed to output a dense depth map rather than sparse matching results. This allows EVA to encompass pixels across the entire image, enhancing spatial awareness. 
Similarly, while the Local Feature Transformer in LoFTR [2] performs both self and cross attention on a reduced set of intermediate features, our EVA specifically implements cross attention within a 1D localized window along the x-axis. This design enables EVA to efficiently focus on relevant pixels, while minimizing computational and temporal overhead. The comparison results in Table 1 demonstrate the effectiveness and superiority of our EVA module compared to existing attention mechanisms in feed-forward 3D Gaussian reconstruction.\\n\\nIt is important to note that EVA is specifically tailored to the camera setting used in feed-forward human reconstruction. In this context, where cameras are closely positioned and oriented towards the same point on the human body, the correspondence connections between matched pairs align parallel to the x-axis, as depicted in Figure 8 of the revised Appendix. This specific alignment allows us to simplify traditional cross-view attention mechanisms, like epipolar attention [4]. Unlike existing methods, such as LoFTR [2] and GTA [5], which rely on extensive attention across broader pixel ranges, our approach focuses on nearby pixels along the x-axis within a 1D localized window. Moreover, considering that correspondences may not be perfectly aligned with the x-axis, we implement this attention mechanism within the deeper layers of the UNet architecture, as shown in Figure 9 of the revised Appendix. In these layers, the features of each pixel are aggregated from its neighboring pixels through preceding convolutional layers, thereby enhancing the robustness of feature matching. In addition, to mitigate the potential loss of multiview correspondences at the boundaries of local windows, we perform the attention mechanism twice, with the second iteration using a window shifted by half its size. 
Figure 10 in the revised Appendix illustrates the key differences between EVA and other attention mechanisms, demonstrating the efficiency gains achieved through our approach.\"}", "{\"title\": \"Response 2/3 to Reviewer WLeF (Q1 Table)\", \"comment\": \"**Table 1** Comparison of the temporal and computational efficiency among different attention modules.\\n\\n| Input Size || 2 * 64 * 128 * 128 || |2 * 64 * 256 * 256 || |2 * 32 * 256 * 256 ||\\n|---------------------------------------|-----------------------|----------|------------------|-----------------------|----------|------------------|-----------------------|----------|------------------|\\n| Module | Time (Inference Once) | Params | GPU Memory Usage | Time (Inference Once) | Params | GPU Memory Usage | Time (Inference Once) | Params | GPU Memory Usage |\\n| **Self-Attention (in MVSplat)** | 0.0353s | 0.789M | 3808 MiB | 0.304s | 0.789M | 36290 MiB | 0.263s | 0.198M | 32536 MiB |\\n| **Epipolar Attention (in PixelSplat)**| 0.0583s | 5.062M | 15554 MiB | 0.193s | 5.062M | 60562 MiB | 0.169s | 3.194M | 59404 MiB |\\n| **EVA Attention (window size=16)** | 0.007225s | 0.0661M | 944 MiB | 0.0177s | 0.0661M | 2200 MiB | 0.0143s | 0.0167M | 1404 MiB |\\n| **EVA Attention (window size=32)** | 0.006533s | 0.0661M | 944 MiB | 0.0149s | 0.0661M | 2192 MiB | 0.0116s | 0.0167M | 1404 MiB |\\n| **EVA Attention (window size=64)** | 0.006307s | 0.0661M | 944 MiB | 0.0139s | 0.0661M | 2192 MiB | 0.0106s | 0.0167M | 1404 MiB |\\n| **EVA Attention (window size=256)** | 0.006565s | 0.0661M | 944 MiB | 0.0234s | 0.0661M | 2192 MiB | 0.0167s | 0.0167M | 1404 MiB |\"}", "{\"title\": \"Response 7/7 to Reviewer KDU6 (Summary)\", \"comment\": \"**Q** **Given the points raised in the weaknesses, could the authors clarify the main novelty and contribution of this paper? What specific advantages does the proposed EVA module offer?**\\n\\n**Summary** Thank you for your questions. 
We have addressed each point raised in the reviews in detail. To summarize, our paper introduces a novel pipeline for real-time human 3D Gaussian reconstruction that significantly differs from existing approaches like GPS-Gaussian. Our method incorporates several key innovations:\\n\\n1. **Efficient Cross-View Attention (EVA) Module**: Unlike traditional cross-volume attention mechanisms, EVA streamlines information aggregation across multiple views by focusing on relevant features within localized windows. This design enhances computational efficiency and scalability, enabling robust depth estimation even with sparse camera setups.\\n\\n2. **Feature Refinement**: We have implemented a feature refinement process to better encode spatial information in Gaussian feature attributes. Through recurrent processing, this refinement significantly improves image quality, ensuring that spatial details are meticulously captured and accurately represented in the reconstructed 3D model.\\n\\n3. **3D Gaussian Attribute Regularization**: To maintain consistency and fidelity in the reconstructed human models, we incorporate facial landmarks into the regularization of the 3D Gaussian attributes. This integration ensures that the depth of the 3D model aligns with multi-view depth, particularly in critical facial regions, thereby preserving multi-view consistency and enhancing reconstruction accuracy.\\n\\nWith these novel designs, our pipeline not only enhances real-time performance but also significantly improves adaptability to diverse camera settings, outperforming GPS-Gaussian and achieving SOTA performance.\", \"the_eva_module_offers_several_advantages_over_existing_attention_mechanisms\": \"- **Computational Efficiency**: As demonstrated in Table 1, EVA significantly reduces inference time and GPU memory usage compared to cross attention [A], self-attention [MVSplat], and epipolar attention [PixelSplat]. 
Notably, with a window size of 64, EVA achieves an impressive inference time of approximately 0.006 seconds, which is much faster than other methods.\\n\\n- **Scalability and Adaptability**: EVA is designed for high-resolution 3D human reconstruction. Its localized attention approach allows it to adapt seamlessly to various camera settings without the heavy computational burdens that limit other cross-view transformers.\\n\\n- **Enhanced Depth Estimation**: By leveraging a strong inductive bias specific to our task, EVA performs cross-view attention among nearby pixels within shifted windows. This strategy preserves essential spatial relationships, leading to more accurate and reliable depth estimations even under challenging conditions with significant camera angle variations.\\n\\n[A] Zhou, Brady, and Philipp Kr\\u00e4henb\\u00fchl. \\\"Cross-view transformers for real-time map-view semantic segmentation.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022.\\n\\n[B] Pham, Trung X., Zhang Kang, and Chang D. Yoo. \\\"Cross-view Masked Diffusion Transformers for Person Image Synthesis.\\\" arXiv preprint arXiv:2402.01516, 2024.\\n\\n[C] Liu, T., Wang, G., Hu, S., Shen, L., Ye, X., Zang, Y., ... & Liu, Z. (2024). Fast Generalizable Gaussian Splatting Reconstruction from Multi-View Stereo. arXiv preprint arXiv:2405.12218.\\n\\n[D] Zhang, C., Zou, Y., Li, Z., Yi, M., & Wang, H. (2024). Transplat: Generalizable 3d gaussian splatting from sparse multi-view images with transformers. arXiv preprint arXiv:2408.13770.\\n\\n[E] Onizuka, H., Thomas, D., Uchiyama, H., & Taniguchi, R. I. (2019). Landmark-guided deformation transfer of template facial expressions for automatic generation of avatar blendshapes. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops (pp. 0-0).\\n\\n[F] Liu, C., Ham, J., Postma, E., Midden, C., Joosten, B., & Goudbeek, M. (2013). 
Representing affective facial expressions for robots and embodied conversational agents by facial landmarks. International Journal of Social Robotics, 5, 619-626.\\n\\n[G] Luo, X., Huang, J. B., Szeliski, R., Matzen, K., & Kopf, J. (2020). Consistent video depth estimation. ACM Transactions on Graphics (ToG), 39(4), 71-1.\\n\\n[H] He, Y., Yan, R., Fragkiadaki, K., & Yu, S. I. (2020). Epipolar transformers. In Proceedings of the ieee/cvf conference on computer vision and pattern recognition (pp. 7779-7788).\\n\\n[I] Geiger, Andreas, et al. \\\"GTA: A Geometry-Aware Attention Mechanism for Multi-View Transformers.\\\" (2023).\\n\\n[J] Wang, X., Zhu, Z., Huang, G., Qin, F., Ye, Y., He, Y., ... & Wang, X. (2022, October). Mvster: Epipolar transformer for efficient multi-view stereo. In European Conference on Computer Vision (pp. 573-591). Cham: Springer Nature Switzerland.\"}", "{\"title\": \"Response 2/2 to Reviewer bSU5 (Tables)\", \"comment\": \"**Table 1** Comparison of the temporal and computational efficiency among different different feed-forward 3D Gaussian reconstruction methods.\\n\\n| Batchsize=1 | | GPU Memory Usage || | \\n|---------------------------------------|-----------------------|----------|------------------|---------------------|\\n| Input Image Resolution | 2 * 3 * 128 * 128|2 * 3 * 256 * 256 |2 * 3 * 512 * 512 |2 * 3 * 1024 * 1024 |\\n| PixelSplat | 6099 MiB | 13429 MiB | 49598 MiB | Out of Memory |\\n| MVSplat | 3040 MiB | 6584 MiB | 27082 MiB | Out of Memory |\\n| GPS-Gaussian | 1909 MiB | 2357 MiB | 4035 MiB | 11215 MiB |\\n| EVA-Gaussian | 2289 MiB | 3171 MiB | 7185 MiB | 24121 MiB |\\n\\n--------------------------------------\\n\\n**Table 2** Comparison of the temporal and computational efficiency among different attention modules.\\n\\n| Input Size || 2 * 64 * 128 * 128 || |2 * 64 * 256 * 256 || |2 * 32 * 256 * 256 
||\\n|---------------------------------------|-----------------------|----------|------------------|-----------------------|----------|------------------|-----------------------|----------|------------------|\\n| Module | Time (Inference Once) | Params | GPU Memory Usage | Time (Inference Once) | Params | GPU Memory Usage | Time (Inference Once) | Params | GPU Memory Usage |\\n| **Self-Attention (in MVSplat)** | 0.0353s | 0.789M | 3808 MiB | 0.304s | 0.789M | 36290 MiB | 0.263s | 0.198M | 32536 MiB |\\n| **Epipolar Attention (in PixelSplat)**| 0.0583s | 5.062M | 15554 MiB | 0.193s | 5.062M | 60562 MiB | 0.169s | 3.194M | 59404 MiB |\\n| **EVA Attention (window size=16)** | 0.007225s | 0.0661M | 944 MiB | 0.0177s | 0.0661M | 2200 MiB | 0.0143s | 0.0167M | 1404 MiB |\\n| **EVA Attention (window size=32)** | 0.006533s | 0.0661M | 944 MiB | 0.0149s | 0.0661M | 2192 MiB | 0.0116s | 0.0167M | 1404 MiB |\\n| **EVA Attention (window size=64)** | 0.006307s | 0.0661M | 944 MiB | 0.0139s | 0.0661M | 2192 MiB | 0.0106s | 0.0167M | 1404 MiB |\\n| **EVA Attention (window size=256)** | 0.006565s | 0.0661M | 944 MiB | 0.0234s | 0.0661M | 2192 MiB | 0.0167s | 0.0167M | 1404 MiB |\"}" ] }
7vH8DO2oPk
Multi-Epoch Learning with Data Augmentation for Deep Click-Through Rate Prediction
[ "Zhongxiang Fan", "Zhaocheng Liu", "Jian Liang", "Dongying Kong", "Han Li", "Peng Jiang", "Shuang Li", "Kun Gai" ]
This paper investigates the one-epoch overfitting phenomenon in Click-Through Rate (CTR) models, where performance notably declines at the start of the second epoch. Despite extensive research, the efficacy of multi-epoch training over the conventional one-epoch approach remains unclear. As a result, all potential rewards from multi-epoch training can hardly be obtained. We identify the overfitting of the embedding layer instead of the Multi-Layer Perceptron (MLP) layers, as the primary issue. To address this, we introduce a novel Multi-Epoch learning with Data Augmentation (MEDA) framework. We design algorithms for both non-incremental and incremental learning scenarios in the industry. MEDA minimizes overfitting by reducing the dependency of the embedding layer on trained data, and achieves data augmentation through training the MLP with varied embedding spaces. MEDA's effectiveness is established on our finding that pre-trained MLP layers can adapt to new embedding spaces and enhance model performances. This adaptability highlights the importance of the relative relationships among embeddings over their absolute positions. We conduct extensive experiments on several public and business datasets, and the effectiveness of data augmentation and superiority over conventional single-epoch training are consistently demonstrated for both non-incremental and incremental learning scenarios. To our knowledge, MEDA represents the first universally reliable multi-epoch training strategy tailored for deep CTR prediction models. We provide theoretical analyses of the reason behind the effectiveness of MEDA. Finally, MEDA has exhibited significant benefits in a real-world incremental-learning online advertising system.
[ "Click-Through Rate Prediction", "Overfitting", "Multi-Epoch Learning", "Incremental Learning" ]
https://openreview.net/pdf?id=7vH8DO2oPk
https://openreview.net/forum?id=7vH8DO2oPk
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vhXBCaDgMn", "k3cBzPIPDo", "jxj1BANsjJ", "fMKH8EavC5", "HIP75jgmIa", "DaqavQgr8e" ], "note_type": [ "comment", "official_review", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1737029673480, 1730475579851, 1730017303496, 1730474589230, 1730345961723, 1730609722709 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission751/Authors" ], [ "ICLR.cc/2025/Conference/Submission751/Reviewer_1BL9" ], [ "ICLR.cc/2025/Conference/Submission751/Reviewer_Ci45" ], [ "ICLR.cc/2025/Conference/Submission751/Reviewer_JMbv" ], [ "ICLR.cc/2025/Conference/Submission751/Reviewer_8P5f" ], [ "ICLR.cc/2025/Conference/Submission751/Reviewer_tXTT" ] ], "structured_content_str": [ "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper targets the single-epoch phenomenon in the CTR domain. The author identifies the overfitting as coming from the embedding layer. Specifically, a multi-epoch data augmentation method(MEDA) is proposed to study the disentangling of the dependency between embedding and the MLP layer. Both an incremental and non-incremental approach are proposed. The proposed method has proven effective with both online and offline experiments.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The proposed MEDA framework is simple yet effective on the listed datasets.\\n2. The proposed method has proven effective with both online and offline experiments.\\n3. The presentation of this paper is easy-to-follow.\", \"weaknesses\": \"1. The hyper-parameter in all experiments is not carefully tuned. Hence, we don't know if the effectiveness of MEDA is caused by the method itself or the weak baselines.\\n2. The generalizability of the one-epoch phenomenon and MEDA is over-emphasized. Not all CTR datasets witness a one-epoch phenomenon. 
For instance, no paper has ever reported such a phenomenon on Criteo, which can be considered one of the most important datasets in CTR prediction. Based on personal experimental experience, this reviewer feels that Criteo does not fit into this case.\\n3. The paper also lacks a theoretical analysis of what causes the one-epoch phenomenon and why MEDA can alleviate such a phenomenon. Appendix A.4 fails to make the reviewer understand these points.\\n4. MEDA is only tested on the MLP backbone. Other backbones such as transformer and resnet are missing in this paper.\\n5. The paper lacks discussion and comparison with existing model retraining techniques.\\n6. No code artifact is provided, undermining the reproducibility of this paper.\", \"questions\": \"See weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper studies the multi-epoch overfitting problem in CTR prediction tasks. A solution that maintains multiple groups of embeddings is introduced to solve the problem. Experiments on five datasets are conducted to demonstrate the effectiveness.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. Provides an effective solution that addresses the multi-epoch overfitting problem.\\n\\n2. Extensive experiments on public datasets and industrial scenarios across several baselines.\", \"weaknesses\": \"1. The key insight in Figure 2 needs more analysis. \\\"Data 1 Emb as Fixed\\\" also leads to overfitting, which may be caused by the new categorical IDs in Data 2. Please conduct an analysis separating the performance between already-seen IDs and new IDs.\\n\\n2. Limited contribution. The reason for multi-epoch overfitting has already been studied in [1]. 
The proposed solution, especially for incremental learning, seems simply to maintain k replications of the embeddings (with different random initializations), one for each of the k epochs, to avoid learning from already-fitted embeddings. I suggest that the authors provide a more in-depth analysis of how the provided solution differs from or improves upon simply maintaining k replications of embeddings.\\n\\n3. Writing issue: some concepts are not well-explained. Does the absolute position (in Sec 4.1) refer to the feature sign? What does the training algorithm (in Sec 4 Non-incremental Learning) mean? Why do multiple groups of embedding form distinct embedding spaces?\\n\\n4. Reproducibility and Open-Sourcing: Details in the experiments are not clear, and the paper does not mention if the code will be made publicly available. Ensuring reproducibility is critical for broader validation and adoption of the proposed techniques.\", \"questions\": \"See above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper investigates the one-epoch overfitting (OEO) phenomenon in CTR prediction models and empirically identifies the overfitting of embedding parameters as the primary cause. Based on this insight, the authors propose a method of randomly reinitializing embedding parameters at the beginning of each epoch and continually learning the parameters of MLPs. This proposed method is simple yet effective, enabling multi-epoch learning.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The one-epoch overfitting (OEO) phenomenon is a common issue often encountered in industry and has been reported in previous work. This paper specifically investigates this issue and proposes a simple workaround to enable multi-epoch learning, thereby improving model performance. 
In summary, the paper has the following strengths:\", \"The paper reproduces the OEO phenomenon in open benchmark datasets, validating its existence.\", \"Extensive experiments and analyses are conducted to identify the main cause of OEO, which is the overfitting of embedding parameters.\", \"The paper presents a simple solution to avoid OEO by randomly reinitializing embedding parameters at the beginning of each epoch, a method tailored for both incremental and non-incremental learning scenarios.\", \"Both offline experiments and online A/B tests have been conducted to validate the effectiveness of the proposed method.\"], \"weaknesses\": \"I think the paper has the following major limitations that need further improvement.\\n\\n1. The paper claims, 'We provide theoretical analyses of the reason,' but I did not find any theoretical analysis throughout the paper. The authors are suggested to give a theoretical analysis before the approach.\\n2. The authors primarily experiment with the Amazon dataset and presume the existence of one-epoch overfitting (OEO). However, according to the results reported in the benchmark paper 'Open Benchmarking for Click-Through Rate Prediction,' OEO does not always occur (some datasets exhibit OEO, while others do not; some models experience OEO, while others do not). Given these variations, the paper lacks a rigorous analysis of the conditions under which OEO occurs. Comparing datasets where OEO occurs with those where it does not could better help identify the causes, for example, by highlighting differences between the datasets.\\n3. While the proposed method to avoid OEO is simple, it introduces multiple groups of embedding parameters in the incremental MEDA setting. Given that the size of embedding parameters is substantial, this approach will require multiple times the memory resources. I am unsure whether the return on investment (ROI) is sufficiently high to justify this increased resource usage. 
The authors are suggested to include some metrics on memory usage to indicate the practicality of the method.\\n4. The method is simple and effective. But the authors lack a discussion whether some alternatives are considered, for example whether using a smaller learning rate for embeddings and a larger learning rate for MLPs could mitigate the OEO issue.\", \"questions\": \"1. The term SEL is not defined upon its first occurrence.\\n2. In the related work, the authors mention graph learning and state that \\\"graph learning research has delved into fine-tuning pre-trained models for new graphs.\\\" However, there is no clear connection drawn between graph learning and CTR prediction research, particularly for this paper. The authors are suggested to explicitly state the relevance of graph learning to the work on CTR prediction, or just remove this section if it's not directly related.\\n3. The paper states, \\\"Figure 3 shows that the embedding overfits on each trained data sample,\\\" but the plot does not track individual data samples. It is unclear how the authors determine that overfitting occurs on each sample. For example, the training loss may drop and then slightly increase without MEDA, and the testing loss may also fluctuate. The authors can clarify how they drew this conclusion from the data presented in Figure 3.\\n4. The paper claims, \\\"the initial embedding of the second epoch precisely memorizes the information of any data sample in the first epoch.\\\" However, there is no empirical evidence provided to support this statement. Given that models typically experience information loss during training, this claim seems unlikely. The authors are suggested to provide empirical evidence supporting this claim, or to clarify if this is a hypothesis rather than a proven fact.\\n5. The sentence \\\"Because, first, Figure 2(a)...\\\" contains a syntax error.\\n6. On line 269, the figure reference may be incorrect. 
The text states, \\\"Furthermore, our results in Figure 4(a)...\\\" but the figure shows the cosine similarity between MLPs, not the convergence of MLPs.\\n7. On line 311, the details about incremental MEDA are unclear. Specifically, the algorithm \\\"selected based on requirements such as computation/storage costs\\\" is not described. How are these selections made according to computation and storage costs? Additionally, Figure 1(b) mentions two operations, \\\"copy\\\" and \\\"select,\\\" but the \\\"copy\\\" operation is not described in Algorithm 2.\\n8. In the A/B testing section, the authors indicate that they use incremental MEDA. However, the reported numbers (test AUC increase, duration of A/B test, training time) are identical to those reported in the first Arxiv version, which only discussed non-incremental MEDA in that experiment. It is unclear whether two separate A/B tests were conducted or if the same results are being reused.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors propose a novel Multi-Epoch learning with Data Augmentation (MEDA) framework, covering both non-incremental and incremental learning settings. The experimental results of both public and business datasets show that MEDA effectively achieves the desired effect of data augmentation and MEL can outperform the conventional SEL by a significant margin.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"1. The one-epoch problem has long been an unresolved challenge, and I\\u2019m glad to see the authors making further in-depth exploration in this area\\u2014it\\u2019s truly fascinating.\\n2. According to my understanding, the core of Non-incremental MEDA is that the MLP part is trained normally, and the Embedding part is randomly initialized with each epoch. Simple approach, experimentally effective.\", \"weaknesses\": \"1. 
The authors claim that it is easy to overfit the Embedding layer at the first epoch, so why does the collaborative filtering model not have this problem?\\n2. Algorithms 1 and 2 are not written clearly. For example, the $E_r$ on lines 276 and 277 look the same, so why is the explanation repeated? The writing feels relatively messy and does not let the reader grasp at once what the algorithm is doing.\\n3. For the AUC without MEDA in Figure 3, why is the training phase flat while the test phase shows a zigzag shape?\\n4. In Figure 3, the AUC rises in a zigzag in the presence of MEDA. In Figure 5, why is it smooth again? Their horizontal and vertical coordinates are the same.\", \"questions\": \"See weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper investigates OEO, considering overfitting in the embedding layer caused by high-dimensional data sparsity as a critical issue. To address this, it proposes a framework called Multi-Epoch learning with Data Augmentation (MEDA), which decouples the embedding layer and the data and reinitializes embedding parameters at each epoch. 
The results demonstrate its effectiveness in non-incremental and incremental learning scenarios, with benefits observed in real-world applications.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1,Mitigates OEO Phenomenon: By reinitializing embedding parameters at each training epoch, the proposed approach (MEDA) effectively mitigates the one-epoch overfitting (OEO) phenomenon, leading to enhanced performance in CTR prediction models.\\n\\n2,Thorough Analysis of Problem Causes: The paper provides a detailed analysis of the underlying causes of the embedding overfitting issue, showing a strong grasp of the challenges inherent to CTR prediction.\\n\\n3,Promising Experimental Results: Results across multiple datasets demonstrate the proposed method\\u2019s effectiveness, adding empirical support to its benefits.\\n\\n4,Clarity of Presentation: The clear and well-organized writing style makes the paper easy to follow, especially for readers unfamiliar with the nuances of CTR modeling.\", \"weaknesses\": \"1,Limited Novelty of Proposed Framework: The innovation in this work appears limited to reinitializing embeddings each epoch. The paper suggests that embedding overfitting stems from initial embeddings over-memorizing data specifics across epochs; however, this can often be mitigated by reshuffling the data between epochs, an approach that isn\\u2019t fully explored here.\\n\\n2,Lack of Experimental Breadth: The experimental comparisons mainly focus on one-epoch versus multi-epoch MEDA trials across different CTR models (e.g., DIN, DIEN), but they do not include other techniques for addressing embedding overfitting, such as embedding dropout, data reshuffling, or regularization. 
Including these would provide a more comprehensive evaluation of MEDA\\u2019s effectiveness.\\n\\n3,Marginal AUC Improvement in Online Experiments: While the online experiments indicate a slight AUC improvement, there\\u2019s limited evidence of improvement in CTR, the paper\\u2019s primary focus. Since CTR is critical to the study, clearer gains in CTR would make a stronger case for MEDA, alongside secondary metrics like revenue.\\n\\n4,Training Instability Due to Frequent Reinitialization: Reinitializing embeddings every epoch may cause instability, as the MLP layer must continually adapt to the changing embeddings, leading to performance fluctuations. A comparison with alternative methods like data reshuffling could offer a clearer view of MEDA\\u2019s trade-offs, particularly around stability.\", \"questions\": \"please check the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
7ut8T9iJ7P
Unleashing the Potential of Classification with Semantic Similarity for Deep Imbalanced Regression
[ "Ruizhi Pu" ]
Recent studies have empirically demonstrated the feasibility of incorporating classification regularizers into Deep Imbalanced Regression (DIR). By segmenting the entire dataset into distinct groups and performing classification regularization on these groups, previous works primarily focused on capturing the ordinal characteristics of DIR in the feature space. Consequently, this direct integration leads the model to focus merely on learning discriminative features, treating DIR as a classification task while lacking an end-to-end solution. As a result, data similarity, another aspect of the continuity of data in DIR where label similarity across the data also implies feature similarity, has always been ignored. Therefore, the effectiveness of these classification-based approaches is significantly limited in DIR. To tackle this problem, we investigate the similarity characteristics of the data in DIR to unleash the potential of classification in helping DIR. Specifically, we first split the imbalance of the datasets into a global-level cross-group imbalance and an instance-level in-group imbalance. Then, to fully exploit the potential of classification under the DIR task, we propose an asymmetric soft labeling strategy that captures the global data similarity to handle the cross-group imbalance. In the meantime, we introduce instance label distribution smoothing to address the intra-group imbalance with a multi-head regressor. More importantly, we link up the group classification to guide the learning of the multi-head regressor, which can further harness classification to solve DIR end-to-end. Extensive experiments on real-world datasets also validate the effectiveness of our proposed method.
[ "Deep imbalanced regression" ]
Reject
https://openreview.net/pdf?id=7ut8T9iJ7P
https://openreview.net/forum?id=7ut8T9iJ7P
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yyQDl51BOs", "lPhJNvUBGp", "WEneYB9Nfe", "SS4LqHBmeu", "JgdTXcVRq6", "IwfIEA1qah", "61xFDhh6cT" ], "note_type": [ "official_review", "decision", "official_review", "official_review", "meta_review", "official_comment", "official_review" ], "note_created": [ 1730496516624, 1737523475585, 1730383055484, 1730295823570, 1734617457767, 1732646837315, 1730289097004 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1944/Reviewer_dier" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1944/Reviewer_Kyvn" ], [ "ICLR.cc/2025/Conference/Submission1944/Reviewer_Fz1p" ], [ "ICLR.cc/2025/Conference/Submission1944/Area_Chair_Q9G1" ], [ "ICLR.cc/2025/Conference/Submission1944/Reviewer_dier" ], [ "ICLR.cc/2025/Conference/Submission1944/Reviewer_bTbt" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes splitting Deep Imbalanced Regression into global group classification and local instance regression tasks. Soft labelling and label distribution smoothing are used to address imbalances. 
Experiments show that this method is effective on real-world one-dimensional datasets.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"An effort to address an exciting problem.\", \"A two-level approach to deep imbalanced regression.\"], \"weaknesses\": [\"The proposed approach's effectiveness and soundness are not well justified, mainly because there is no theoretical proof.\", \"The claim in lines 60-62 states, \\\"Therefore, semantic similarity, another aspect of the continuity in DIR where the similarity across labels would also reflect the similarity of their features, is **always** overlooked by the previous works.\\\" is not entirely valid, since there are several approaches modelling continuity in the feature spaces [1,2,3].\", \"The extension of the proposed method to multi-dimensional label spaces has not been investigated or experimented with.\", \"The discussion in section 2.2 is not convincing of how semantic similarity is modelled with the proposed approach.\", \"The ablation study is not complete.\"], \"questions\": \"- How this method is compared to [4]?\\n- How is LDS in 3.3 different from [1]?\\n- How $g_{hard}$ is selected?\\n\\n\\n\\n\\n\\n[1] Delving into Deep Imbalanced Regression, ICML 2021.\\n\\n[2] RankSim: Ranking Similarity Regularization for Deep Imbalanced Regression, ICML 2022.\\n\\n[3] ConR: Contrastive regularizer for deep imbalanced regression, ICLR 2024.\\n\\n[4] Group-aware Contrastive Regression for Action Quality Assessment, ICCV 2021.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper explores semantic similarity to enhance the effectiveness of classification in DIR. 
Specifically, the imbalance in DIR is decomposed into global and local imbalances, and a symmetric and asymmetric soft labeling strategy is proposed. This strategy captures semantic similarity to address the global group imbalance effectively.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. An end-to-end approach decomposes the DIR objective into a global between-group imbalance classification task and a local within-group imbalance regression task. Semantic similarity is leveraged through symmetric and asymmetric soft labeling strategies and label distribution smoothing, marking an innovative shift from traditional methods that embed classification regularizers directly in feature space.\\n2. The paper is well-structured and logically clear, with accessible methodology, making it easier for readers to follow.\\n3. The authors have open-sourced their code, demonstrating a commendable commitment to research transparency and accessibility.\", \"weaknesses\": \"1. The methodology includes pseudo-labeling, label smoothing, and contrastive learning, which are widely used techniques. While the symmetric descending soft labeling strategy is interesting, it may not sufficiently support the novelty of the entire study.\\n2. Soft labeling plays a crucial role in the algorithm. How does the paper ensure the generation of high-quality soft labels?\\n3. There are differences in the comparison algorithms across the three datasets, with certain advanced baselines missing from the IMDB-WIKI-DIR and STS-B-DIR datasets.\\n4. Details on some hyperparameter settings are lacking. The paper does not specify values for $\\\\beta$ in the soft label settings and the batch size for contrastive learning. 
Additionally, the temperature coefficient for contrastive learning is not explicitly shown in the equation.\", \"questions\": \"See the above weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper highlights that previous works primarily focus on learning the ordinal characteristics of data within the feature space, but overlook that the similarity across labels can reflect the similarity of the corresponding features. The authors propose a framework that decomposes the regression task into group classification and instance regression within each group. They customize symmetric group labels and derive asymmetric group labels based on the statistics of the samples within the group. A group classifier predicts the sample group labels and calculates the classification loss using these asymmetric group labels. For samples within the group, inspired by Yang et al. (2021), the authors employ smoothed labels to calculate the regression loss.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Clear problem definition.\\n2. Adequate introduction to related work.\\n3. Logical and concise writing.\", \"weaknesses\": [\"I have several concerns in the following aspects:\", \"1. Previous work has employed smoothed labels to calculate the mean squared error (MSE) for training regression models. The authors introduce a group classifier based on this concept; however, according to the article, the group classifier operates independently from the instance regressor that predicts \\\\( y \\\\). The group label is not directly related to the sample label, and the authors have not provided the generation process or conducted an ablation experiment for the group label. More experiments and inferences are needed to demonstrate the necessity of the proposed group classifier.\", \"2. 
Some writing needs better explanation:\", \"The legend in Fig. 1 requires clearer color interpretation.\", \"The definition of the sign \\\\( k \\\\) in Eq. 5 is missing.\", \"The final training objective needs clarification.\", \"The presence of peaks in Fig. 3(b) and (c) when the group number is 40 needs an explanation.\", \"Line 247: Missing part of \\\\( p(g|) \\\\).\", \"Line 324: Lack of space after the '/'.\", \"3. Further ablation studies are recommended:\", \"An ablation study excluding the classification term is needed.\", \"An ablation study excluding contrastive learning should also be included.\"], \"questions\": \"1. A group contains several classes. On what basis are the same group labels assigned to each class within the group?\\n2. How can you explain the statement \\\"our asymmetric soft labeling not only leverages the knowledge from the whole dataset\\\"? According to Eq. 3, only the group statistics are included in the calculation.\\n3. As mentioned in line 249, the normalized features may hinder adaptation. What is the performance of the features without normalization?\\n4. How does the semantic similarity \\\\( \\\\hat{p}(y) \\\\) assist in addressing imbalanced regression? In previous work, \\\"Daso: Distribution-aware semantics-oriented pseudo-label for imbalanced semi-supervised learning,\\\" a semantic classifier is biased towards the tail classes. Is there a relationship between this work and the current study?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper proposes an end-to-end solution for Deep Imbalanced Regression (DIR) by integrating classification regularization through asymmetric soft labeling and instance label distribution smoothing. 
However, the overall review is negative, with reviewers expressing concerns about the unclear novelty of the approach and the lack of experiments involving multi-dimensional label spaces. Based on these considerations, the paper is not recommended for acceptance at this time.\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal period, the reviewers' opinions remained unchanged.\"}", "{\"comment\": \"Thank you for the clarifications.\\nThis work has the potential to address DIR tasks. However, there are still fundamental concerns:\\n- The motivation, novelty, and critical difference between the proposed method and the mentioned baselines require more in-depth and clear justifications.\\n- Although your response mentions no assumption on label space dimension, experiments are required to show the scalability of the proposed method to high-dimensional tasks like depth estimation. The experiments are limited to 1D label spaces, and the effectiveness of the proposed method for multi-dimensional label spaces is not empirically justified.\\n\\nTherefore, I keep my score.\"}", "{\"summary\": \"The authors of this paper are concerned with the label imbalance issue in regression tasks. They have observed that directly incorporating classification regularizers widely used in current methods may potentially distort the geometrical structure of the feature space due to the label imbalance. To address this issue, they propose both symmetric and asymmetric soft labeling strategies to capture semantic similarity so as to handle the label imbalance. 
The experimental results empirically show the effectiveness of the proposed soft labeling strategy.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The label imbalance issue is important for regression tasks and the phenomenon of distorting the geometrical structure of the feature space due to label imbalance observed in this paper seems to be interesting.\", \"The experimental results and ablation study show the effectiveness of the proposed soft labeling strategy.\"], \"weaknesses\": \"-\\t*The motivation is unclear.* First, the description of deep imbalance regression is too simple and it is difficult for some readers to understand the task and its scenarios. Some visual examples are expected in the introduction. Second, the phenomenon of distorting the geometrical structure of the feature space seems to be interesting, but how and why does it affect the performance of deep imbalance regression? Finally, while the authors present two soft labeling strategies based on semantic similarity to smooth the labels, they do not directly address the label imbalance. It would be better to clarify how the semantic similarity and the proposed method specifically deal with the label imbalance. Otherwise, the proposed label smoothing based on semantic similarity modifies the original one-hot label vector to a soft label. It may introduce some noises, so how do the authors deal with those noisy supervision?\\n-\\t*The contribution may be incremental.* The paper only proposes two soft labeling strategies for group classification from the label smoothing perspective, and employs existing label distribution smoothing [1] and group contrastive representation learning [2] tricks. 
So, the contribution seems to be incremental.\\n-\\t*Some theoretical analyses are required.* The paper could be strengthened by providing some theoretical analysis and insights to support the empirical findings.\\n-\\t*Minor issues.* (1) The definition of $c(\\\\psi)$ is missing. (2) \\u201caccurate estimating\\u201d in line 156 should be \\u201caccurate estimation\\u201d. (3) The figures in this paper are unclear.\\n\\n[1] Yuzhe Yang, Kaiwen Zha, Yingcong Chen, Hao Wang, and Dina Katabi. Delving into deep imbalanced regression. In International conference on machine learning, pp. 11842\\u201311851. PMLR, 2021.\\n\\n[2] Kaiwen Zha, Peng Cao, Jeany Son, Yuzhe Yang, and Dina Katabi. Rank-n-contrast: Learning\\ncontinuous representations for regression. In Thirty-seventh Conference on Neural Information\\nProcessing Systems, 2023.\", \"questions\": [\"From section 3.5, the training procedure consists of two stages, i.e., training feature encoder and inducing multiple regressor heads, which may result in a suboptimal model. Why not train feature encoder and multiple regressor heads jointly as [1]?\", \"What is the experimental setting of cross-entropy?\", \"[1] Mahsa Keramati, Lili Meng, and R. David Evans. Conr: Contrastive regularizer for deep imbalanced regression. In The Twelfth International Conference on Learning Representations, 2024.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
7uDI7w5RQA
See What You Are Told: Visual Attention Sink in Large Multimodal Models
[ "Seil Kang", "Jinyeong Kim", "Junhyeok Kim", "Seong Jae Hwang" ]
Large multimodal models (LMMs) "see" images by leveraging the attention mechanism between text and visual tokens in the transformer decoder. Ideally, these models should focus on key visual information relevant to the text token. However, recent findings indicate that LMMs have an extraordinary tendency to consistently allocate high attention weights to specific visual tokens, even when these tokens are irrelevant to the corresponding text. In this study, we investigate the property behind the appearance of these irrelevant visual tokens and examine their characteristics. Our findings show that this behavior arises due to the massive activation of certain hidden state dimensions, which resembles the attention sink found in language models. Hence, we refer to this phenomenon as the visual attention sink. In particular, our analysis reveals that removing the irrelevant visual sink tokens does not impact model performance, despite receiving high attention weights. Consequently, we recycle the attention to these tokens as surplus resources, redistributing the attention budget to enhance focus on the image. To achieve this, we introduce Visual Attention Redistribution (VAR), a method that redistributes attention in image-centric heads, which we identify as innately focusing on visual information. VAR can be seamlessly applied across different LMMs to improve performance on a wide range of tasks, including general vision-language tasks, visual hallucination tasks, and vision-centric tasks, all without the need for additional training, models, or inference steps. Experimental results demonstrate that VAR enables LMMs to process visual information more effectively by adjusting their internal attention mechanisms, offering a new direction for enhancing the multimodal capabilities of LMMs.
[ "Large multimodal models", "Visual attention sink", "Visual attention redistribution" ]
Accept (Poster)
https://openreview.net/pdf?id=7uDI7w5RQA
https://openreview.net/forum?id=7uDI7w5RQA
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zq5mJObFD3", "yhSL8ebO3h", "u9veAktSdA", "ptLbQ0vINt", "mr0RkehtCT", "k3vymzieOH", "hk5j3JE6hZ", "gx5WA8boPo", "dvYeAlM4S3", "dM0QQ695Nu", "dEvbcomDZZ", "d2prhGGqmI", "cuDVnKmwtX", "bSQH0jtqbn", "afyhcQvOLx", "acnCbA9IIj", "Yd6RaIoFB3", "YP0x5rL7Fs", "Wq5jXipKrX", "VH6YLO7Ffs", "TAPtUg0Wku", "Sf9dyvXEI5", "SSTAuq7B2e", "RUnZU8wJ4a", "PiRFH1zlPX", "Og6cwzu2ID", "OMEH1jqAi9", "NseIcR80tg", "Mu4iEGxG7K", "IAGAX3Wmk3", "DRd5iBdTN2", "CVdGjBz9RV", "AuwZx6CUB3", "ATF1zk1KaG", "7q9pZGjZEh", "7lAslE6sio", "55j62dIQFe", "3fXNsW6veP", "0oNAmSTpBx" ], "note_type": [ "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review" ], "note_created": [ 1732355098602, 1732354593538, 1740422093460, 1733039697296, 1733219429293, 1732947639225, 1733219756348, 1729778758158, 1732353848780, 1729922063790, 1732716718676, 1732523103799, 1732354825928, 1732964084473, 1732355034895, 1733039641547, 1732354491323, 1732964031259, 1730292902814, 1733307249492, 1734737928663, 1733219081880, 1733114160693, 1733307403101, 1732354809358, 1732354574543, 1733379658888, 1732380207510, 1737523434811, 1733113767206, 1732503908184, 1733039574557, 1733113683167, 1732716652472, 1732353796933, 1733307345327, 1732355062857, 1732355012179, 1730835216687 ], "note_signatures": [ [ 
"ICLR.cc/2025/Conference/Submission1085/Authors" ], [ "ICLR.cc/2025/Conference/Submission1085/Authors" ], [ "~Seil_Kang1" ], [ "ICLR.cc/2025/Conference/Submission1085/Reviewer_P74f" ], [ "ICLR.cc/2025/Conference/Submission1085/Reviewer_P74f" ], [ "ICLR.cc/2025/Conference/Submission1085/Authors" ], [ "ICLR.cc/2025/Conference/Submission1085/Reviewer_P74f" ], [ "ICLR.cc/2025/Conference/Submission1085/Reviewer_b7X3" ], [ "ICLR.cc/2025/Conference/Submission1085/Authors" ], [ "ICLR.cc/2025/Conference/Submission1085/Reviewer_KaLi" ], [ "ICLR.cc/2025/Conference/Submission1085/Reviewer_P74f" ], [ "ICLR.cc/2025/Conference/Submission1085/Reviewer_KaLi" ], [ "ICLR.cc/2025/Conference/Submission1085/Authors" ], [ "ICLR.cc/2025/Conference/Submission1085/Authors" ], [ "ICLR.cc/2025/Conference/Submission1085/Authors" ], [ "ICLR.cc/2025/Conference/Submission1085/Reviewer_P74f" ], [ "ICLR.cc/2025/Conference/Submission1085/Authors" ], [ "ICLR.cc/2025/Conference/Submission1085/Authors" ], [ "ICLR.cc/2025/Conference/Submission1085/Reviewer_P74f" ], [ "ICLR.cc/2025/Conference/Submission1085/Authors" ], [ "ICLR.cc/2025/Conference/Submission1085/Area_Chair_NNmg" ], [ "ICLR.cc/2025/Conference/Submission1085/Reviewer_P74f" ], [ "ICLR.cc/2025/Conference/Submission1085/Authors" ], [ "ICLR.cc/2025/Conference/Submission1085/Authors" ], [ "ICLR.cc/2025/Conference/Submission1085/Authors" ], [ "ICLR.cc/2025/Conference/Submission1085/Authors" ], [ "~Haonan_Wang1" ], [ "ICLR.cc/2025/Conference/Submission1085/Reviewer_b7X3" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1085/Authors" ], [ "ICLR.cc/2025/Conference/Submission1085/Reviewer_1YMG" ], [ "ICLR.cc/2025/Conference/Submission1085/Reviewer_P74f" ], [ "ICLR.cc/2025/Conference/Submission1085/Authors" ], [ "ICLR.cc/2025/Conference/Submission1085/Reviewer_P74f" ], [ "ICLR.cc/2025/Conference/Submission1085/Authors" ], [ "ICLR.cc/2025/Conference/Submission1085/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission1085/Authors" ], [ "ICLR.cc/2025/Conference/Submission1085/Authors" ], [ "ICLR.cc/2025/Conference/Submission1085/Reviewer_1YMG" ] ], "structured_content_str": [ "{\"title\": \"General Response\", \"comment\": [\"We appreciate all the reviewers for their constructive comments and insightful suggestions. Our research has been further refined and enriched thanks to their feedback. We have uploaded the revised manuscript, incorporating the reviewers' helpful comments. Below, we summarize the major changes made in the revised manuscript:\", \"More Recent LMMs for Evaluation: We have added two more LMMs, Qwen2-VL and InternVL2, which are more recent. The results show that VAR consistently improves the performance of these models, indicating that the proposed method is applicable to various LMMs. The results are presented in Table 1, 2, and 3. (Reviewer 1YMG, b7X3)\", \"More Discussions on Visual Attention Sink: We have discussed the characteristics of visual attention sinks, including token properties and emerging mechanisms, in more detail. We have also added the relationship between sink tokens in different models. Research on sink tokens in ViT [1] was incorporated into the related work (Section 2). The discussion is presented in Section 4.2 and Appendix A.2. (Reviewer KaLi)\", \"Reevaluating Token Masking Experiment in Section 4.2: Recognizing that the token masking experiment setup was misaligned with our intended objectives, we revised the setup and reevaluated the experiment. We also clarified the experimental settings and results. The updated results are presented in Figure 3(b), with further details available in Appendix C.3. (Reviewer P74f, b7X3)\", \"More Experiments and Discussions on Hyperparameters: We conducted additional experiments to validate the hyperparameter tuning process and demonstrate the robustness of VAR to hyperparameter choices. The results are summarized in Section 6.3 and detailed in Appendix B.2. 
(Reviewer 1YMG, KaLi)\", \"Validation of Visual Sink Tokens' Irrelevance to the Main Subject: We added quantitative validation of the trend showing that visual sink tokens are unrelated to the main subject of the image, using segmentation datasets, in Appendix A.2. (Reviewer b7X3)\", \"Clearer Explanations and Presentations:\", \"Figure 3 and Figure 4 were integrated to provide a more coherent presentation of the experimental results.\", \"The method for visualizing the attention maps in Figure 1 was clarified, and visual attention maps between all text tokens and visual tokens were included in Figure 13 to demonstrate the consistency of the visual attention sink across all text tokens. (Reviewer P74f)\", \"The target tokens of VAR were explicitly detailed in Section 5. (Reviewer P74f)\", \"Additional future research directions were included in Section 7. (Reviewer b7X3)\", \"[1] Darcet, Timoth\u00e9e, et al. \\\"Vision Transformers Need Registers.\\\" The Twelfth International Conference on Learning Representations. 2024.\"]}", "{\"title\": \"Response to Reviewer P74f (3/3)\", \"comment\": \"# Q4. Inference Process of Attention Redistribution\\n\\n> Q4. During inference, is the attention redistribution performed layer by layer? Specifically, does the forward pass first obtain original results, identify image-centric attention heads, then update attention weights, and proceed with attention weight redistribution in the next layer?\\n\\nYes, the attention redistribution is performed layer by layer. A few more steps are added to the calculation of attention weights for each layer to identify image-centric attention heads and redistribute attention weights.\"}", "{\"title\": \"RE: Public Comment by Haonan Wang\", \"comment\": \"Dear Haonan Wang,\\n\\nThank you for your interest in our work and for providing valuable discussion points. We sincerely appreciate your comments. 
Apologies for the late response, as public comments are unavailable during the anonymous review period.\\n\\nWe acknowledge the importance of attention sinks, as highlighted in the works you mentioned [1][2]. Specifically, we believe their importance can be understood in two aspects.\\n\\n1. The first aspect concerns their direct influence on the model\u2019s decision-making process. Our experiments indicate that this direct impact is negligible, as the visual sink token is typically located in the background, and its value vector has a small norm. The strong performance of SoftMax-off-by-One in Xiao et al. [2] further supports this observation.\\n2. The second aspect pertains to the effect of removing the attention sink. Eliminating the attention sink can significantly impact the model\u2019s performance due to its extremely high attention weights. While the visual sink token also exhibits high attention weights relative to other visual tokens, these weights remain lower than those of the `[BOS]` token in text (though the fundamental reason for this remains unclear). Consequently, the removal of the visual sink token has a smaller overall effect on the model\u2019s performance compared to text attention sinks.\\n\\nIn summary, the visual sink token's first aspect, similar to text attention sinks, has a negligible impact on the model's response due to its minimal direct influence. However, the second aspect\u2014its effect when removed\u2014is less significant for visual sink tokens compared to text attention sinks, owing to their lower attention weights.\\n\\nThank you once again for your insightful comments.\\n\\nBest regards,\\nThe Authors\"}", "{\"title\": \"Further Response to Authors (3/3)\", \"comment\": \"## **About fixed position of visual tokens pruning**\\n\\nAdditionally, the author\u2019s response raises a new question. 
The author mentioned that **in deeper layers, visual sink tokens are nearly fixed**, which means pruning visual tokens in these layers is also fixed. However, in the random visual tokens experiment, the pruned visual tokens vary dynamically across layers. Does this discrepancy make the comparison unfair, potentially leading to misleading results? **Pruning tokens from fixed positions in each layer versus dynamically pruning tokens from different positions inherently causes different levels of disruption to the information flow.**\\n\\nMore specifically, if visual tokens are pruned from fixed positions in each layer, wouldn\u2019t the impact on model performance be more limited? I conducted a simple experiment myself: **I randomly selected 25% of the visual tokens and pruned these same fixed positions across all layers.** The resulting POPE test score was **82.4**, which seems comparable to the results shown in the visual sink tokens experiment in Figure 3(b).\\n\\nDoes this suggest that the observed performance degradation might not stem from differences between sink tokens and normal tokens, **but rather from the distinction between pruning fixed versus dynamic positions across layers**?\"}", "{\"title\": \"Final Response to Authors (2/2)\", \"comment\": \"## **2. About the Proportion of Performance Degradation**\\n\\nRegrettably, the author's response did not resolve my confusion on this issue. If my understanding is correct, the authors aim to demonstrate that in the *random visual tokens* experiment, when $\\\\phi(x) > 40$, pruning visual tokens at fixed positions results in a more significant performance degradation than in the *visual sink tokens* experiment, where $10 < \\\\phi(x) < 20$. However, in the *visual sink tokens* experiment, when $10 < \\\\phi(x) < 20$, the pruned tokens should already correspond to **normal visual tokens**, which should effectively equate to dynamic pruning. 
Therefore, the authors need to provide an explanation for the inconsistency in the degradation rates presented in Figure 3 (b). This discrepancy cannot be sufficiently accounted for by the results of pruning at fixed positions, especially considering that there appears to be a significant difference between our experimental results for fixed-position pruning.\"}", "{\"title\": \"Further Response to Reviewer P74f (1/3)\", \"comment\": \"We sincerely appreciate the reviewer's insightful and intriguing questions. We welcome the opportunity to address these concerns and engage in further discussions about our research. Below, we provide detailed responses to each question. For clarity, we address Q3 first, followed by Q2.\\n\\n# Response to Q3. Stability Characteristics of Visual Sink Tokens\\n\\n> In the proposed method, the visual sink tokens at each layer are dynamic. However, for each output metric, the corresponding sink tokens remain constant. I would like to confirm one last time the conclusion the author is attempting to draw: \\u201c*For a single input sample, the sink tokens corresponding to all output tokens remain constant, while they are dynamic across layers.*\\u201d Is this correct?\\n\\nYes, the reviewer's interpretation is indeed accurate. Sink tokens are dynamic across layers but remain constant within a single layer. We would like to clarify one additional point: as shown in Figure 8 and discussed in Appendix A.2, sink tokens emerge in the early layers and typically persist until the final layer. Therefore, the term \\\"dynamic\\\" does not imply frequent changes from layer to layer; rather, sink tokens rarely change once they emerge in the early layers. We hope this clarification resolves the confusion.\"}", "{\"title\": \"Final Evaluation Decision\", \"comment\": \"I kindly request the AC to carefully review both my latest response and the author's final reply, as significant disagreements appear to persist until the very end. 
Specifically, I find the current explanation regarding Figure 3(b) and the results of fixed position experiments in *Further response to reviewer p74f \\u2161 (3/3)* difficult to accept. Consequently, I must maintain my current rating. If there has indeed been a misunderstanding, I sincerely apologize in advance.\"}", "{\"summary\": \"In this paper, the authors introduce the phenomenon of visual attention sinks and, through visual analysis, propose that visual attention sink tokens are typically unrelated to the main visual subject information. Furthermore, they demonstrate through experiments how to automatically identify visual sink tokens. Based on this finding, the authors propose a method that utilizes attention budgets to redistribute attention weights. Extensive experiments have shown that this method optimizes both the general visual capabilities of models and the issue of hallucination.\", \"the_paper_makes_two_main_contributions\": \"1. It introduces the phenomenon of visual attention sinks and analyzes the behavior of LVLMs.\\n\\n2. The proposed method enhances the performance of LVLMs with low cost.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The writing in this paper is very clear, smoothly transitioning from the introduction of the visual attention sink phenomenon to its analysis and the subsequent methodological approach.\\n\\n2. The concept of visual attention sink proposed in this paper is indeed a direction worthy of deeper exploration within the realm of LVLMs.\\n\\n3. The experiments in this paper cover a comprehensive range of content and consistently achieve stable improvements across various tasks.\\n\\n4. The method proposed in this paper is an optimization based on the attention mechanism, making it generalizable and applicable to most LVLMs for experimentation.\", \"weaknesses\": \"1. The phenomenon of visual attention sink appears to be similar to the summary tokens proposed in OPERA[1]. 
Summary tokens imply that higher attention weights are assigned to tokens without semantic information, such as ',' and '\\\\n'. The visual sink token seems to suggest that within visual tokens, there are also some tokens that play a similar role to summary tokens.\\n\\n2. The analysis of the visual attention sink phenomenon lacks experimental validation. For more specific explanations, please refer to Question 1.\\n\\n3. Although the author's evaluation experiments cover a comprehensive set of benchmarks, there is a lack of evaluation across more LVLMs and a detailed analysis of the experimental results. For more specific explanations, please refer to Question 2.\\n\\n[1] OPERA: Alleviating Hallucination in Multi-Modal Large Language Models via Over-Trust Penalty and Retrospection-Allocation\", \"questions\": \"1. Regarding the questions of visual attention sink:\\n\\n 1.a. I have previously experimented with removing the <bos> token, and the attention sink phenomenon still persisted in the first token. Therefore, I am curious whether new tokens will become visual sink tokens after redistributing the attention weights.\\n\\n 1.b. The author has visualized tokens with high attention weights in the text, and indeed, for these cases, the visual sink tokens are unrelated to the main subject of the image. However, I believe it is still necessary to prove the trend of visual sink tokens being unrelated to the main subject of the image on a large-scale dataset based on segmentation data.\\n\\n 1.c. In Table 4.a, the selection of random tokens does not seem to be related to $\\\\tau$, so I want to confirm whether two masked methods with the same $\\\\tau$ value mean that the same number of tokens are masked.\\n\\n2. Regarding the questions of experimental results:\\n\\n 2.a. The author only chose LLaVA and VILA for evaluation, but the training data for LLaVA is much smaller compared to high-performance open-source models. 
Therefore, it is necessary to add one or two such models for evaluation, such as Qwen2-VL and InternVL2.\\n\\n 2.b. LLaVA-HD has several times more image tokens compared to LLaVA, so I speculate that the author's model should perform better on this backbone. However, from Table 1, it appears that the improvement of the proposed method on LLaVA-HD is not as significant as on LLaVA.\\n\\nIf the author can address these concerns, I will consider raising the score. Additionally, I believe that further analysis on how to utilize this phenomenon during model training could enhance the impact of the article.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer 1YMG (2/2)\", \"comment\": \"# Q1. More Experiments with Various LMMs\\n\\n> Q1. Do newer LMMs with anyres and with visual encoder tuned jointly also have the characteristic of visual attention sinks? Do the observation and analyses generalize across LMMs with different language models and sizes?\\n\\nWe have added two more LMMs, Qwen2-VL [1] and InternVL2 [2], which are more recent. Qwen2-VL is an anyres model with another language model (Qwen), and InternVL2 is an anyres model with a visual encoder tuned jointly. We observed that visual attention sinks also appear in these models. Vision encoder tuning may not affect the visual attention sink phenomenon, as the emergence of visual attention sinks is an intrinsic property of LLMs. We have added the investigation results about the hidden states of these models in Appendix A.1. We have also applied the proposed method to these models, and it consistently improves their performance. The results indicate that VAR works well on various LMMs. We have added the results of the experiments on these models in the main paper (Tables 1, 2, and 3).\\n\\n[1] Wang, Peng, et al. 
\\\"Qwen2-VL: Enhancing Vision-Language Model's Perception of the World at Any Resolution.\\\" 2024.\\n\\n[2] OpenGVLab Team. \\\"InternVL2: Better than the Best\\u2014Expanding Performance Boundaries of Open-Source Multimodal Models with the Progressive Scaling Strategy\\\". 2024.\\n\\n# Q2. Qualitative Results\\n\\n> Q2. In Fig 8, it looks like LLaVA + VAR has more visual sink tokens qualitatively.\\n\\nThe background tokens in Fig 8 are not visual sink tokens, but noises. Although they have high attention weights, they do not have massive activation in specific dimensions and do not consistently appear in fixed locations. Each attention head does not attend perfectly to the relevant visual tokens, and some attention weights are allocated to the noise. Some may argue that our method potentially amplifies noise, which could adversely affect performance. However, the majority of the attention budget is allocated to important visual tokens, ensuring the effectiveness of our approach.\\n\\n# Q3. Citation\\n\\n> Q3. Please include the reference of LLaVA-1.5-HD-13B.\\n\\nWe have added the reference of LLaVA-1.5-HD-13B in the manuscript. Thank you for pointing it out.\"}", "{\"summary\": \"This paper studies the phenomenon of visual sink tokens in large multi-modal models. That is, the model allocates high attention scores to visual tokens irrelevant to the corresponding text, and these visual tokens consistently appear in fixed locations. This paper finds that visual sink tokens have high activations in specific dimensions, which provides a way to distinguish them from normal tokens. Furthermore, the paper proposes to recycle the attention to meaningful tokens, helping to improve model performances on a wide range of tasks.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"1. The presentation is really good, first analyzing the property and effects of visual sink tokens, then providing a solution based on these investigations. 
The figures are also nice to explain relevant concepts and methods.\\n2. While the phenomenon of sink tokens has been discussed in language models [1] and vision models [2], this paper further extends it to multi-modal models.\\n3. The proposed method is simple yet effective. The model can be modified without further training.\\n\\n[1] Xiao, Guangxuan, Yuandong Tian, Beidi Chen, Song Han, and Mike Lewis. \\\"Efficient Streaming Language Models with Attention Sinks.\\\" In The Twelfth International Conference on Learning Representations.\\n\\n[2] Darcet, Timoth\u00e9e, Maxime Oquab, Julien Mairal, and Piotr Bojanowski. \\\"Vision Transformers Need Registers.\\\" In The Twelfth International Conference on Learning Representations.\", \"weaknesses\": \"1. Insufficient references. There is a work [1] on sink tokens in vision models that is not cited or discussed.\\n2. Insufficient discussion on sink tokens. Sink tokens have been observed in different kinds of models, including language models, vision models, and multi-modal models. What's the relationship between sink tokens in different kinds of models? Are they just similar phenomena, or are there special properties of multimodal models? Moreover, previous works claim that sink tokens exist because of excessive attention scores, so why does it work to recycle these attention scores? From the experiment perspective, the paper is also encouraged to apply previous methods to multi-modal models.\\n3. Heavy hyper-parameter tuning for specific tasks. It seems that different tasks are highly sensitive to the selection of $\\\\rho$. Then is there a train/val/test dataset split? If not, are these hyper-parameters tuned according to the test set? Will these hyper-parameters overfit the test set?\\n\\n[1] Darcet, Timoth\u00e9e, Maxime Oquab, Julien Mairal, and Piotr Bojanowski. \\\"Vision Transformers Need Registers.\\\" In The Twelfth International Conference on Learning Representations.\", \"questions\": \"1. 
As mentioned in the weakness, the paper should discuss the relationship between sink tokens in language models and vision models, from the perspective of token properties, emerging mechanisms and proposed methods.\\n2. How does the method determine which layer to apply VAR? The supplementary shows that there are less visual sink tokens in the first layer. However, the experiments say that VAR is not applied to the last layer. That does not sound reasonable.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Authors (2/2)\", \"comment\": \"## **Q3. Stability Characteristics of Visual Sink Tokens**\\n\\nI sincerely appreciate the author\\u2019s clarification regarding the scope of attention redistribution and the results presented in Figure 13, which demonstrate that sink tokens are consistently assigned higher attention scores. This has resolved most of my confusion. However, I still find it somewhat surprising if this method is applied to all text tokens.\\n\\nIn the proposed method, the visual sink tokens at each layer are dynamic. However, for each output metric, the corresponding sink tokens remain constant. I would like to confirm one last time the conclusion the author is attempting to draw: *\\u201cFor a single input sample, the sink tokens corresponding to all output tokens remain constant, while they are dynamic across layers.\\u201d* Is this correct?\"}", "{\"comment\": \"Thanks for the authors' response. I have also read reviews of other reviewers. Most of my concerns have been addressed. I'm already on the positive side, so I will maintain my original score.\"}", "{\"title\": \"Response to Reviewer KaLi (2/2)\", \"comment\": \"# W3. Heavy Hyper-Parameter Tuning of $\\\\rho$\\n\\n> W3-1. Heavy hyper-parameter tuning for specific tasks. 
It seems that different tasks are highly sensitive to the selection of $\\\\rho$.\\n\\nWe appreciate the reviewer for raising this question. We acknowledge that the optimal value of $\\\\rho$ is task-dependent. However, reasonable $\\\\rho$ values (e.g., $\\\\rho \\\\geq 0.5$) consistently improve the performance of the proposed method on various tasks. Therefore, we argue that the proposed method is still applicable to various tasks in plausible ranges of $\\\\rho$ values. We have discussed the hyperparameter $\\\\rho$ more thoroughly in Sec. 6.3 and Appendix B.2.\\n\\n> W3-2. Then is there a train/val/test dataset split? If not, are these hyper-parameters tuned according to the test set? Will these hyper-parameters overfit the test set?\\n\\nMost recent LVLM benchmarks do not include dataset splits because they primarily focus on evaluating a model's inference performance. Therefore, we use the test set (which is the benchmark itself) to determine hyperparameters. As the reviewer mentioned, tuning hyper-parameters on the test set can lead to overfitting. To mitigate this concern, we have further validated that the same optimal hyperparameter values can be obtained from partial samples of the benchmark (i.e., only 10% of the samples) or another benchmark of the same task type. The results indicate that the hyperparameters are not overfitted to the specific benchmark. We have included the results of the experiments with 10% of the samples in the benchmark in Appendix B.2.\\n\\n# Q2. Determination of The Layer to Apply VAR\\n\\n> Q2. How does the method determine which layer to apply VAR? The supplementary shows that there are less visual sink tokens in the first layer. However, the experiments say that VAR is not applied to the last layer. That does not sound reasonable.\\n\\nWe apologize for the confusion. The early layers and the last layer are known to have special roles in the model, rather than processing interaction between related tokens [1]. 
Early layers aggregate information in an n-gram-like manner [2, 3], while the last layer refines the token prediction [1, 4]. Based on this prior knowledge, we applied VAR to the layers where it is expected to be effective. In the early layers, the number of sink tokens is already limited and VAR has little impact. Therefore, we do not need to manually restrict VAR to the early layers. In contrast, we manually disable the application of VAR in the last layer.\\n\\n[1] Lad, Vedang, Wes Gurnee, and Max Tegmark. \\\"The Remarkable Robustness of LLMs: Stages of Inference?\\\" 2024.\\n\\n[2] Ferrando, Javier, and Elena Voita. \u201cInformation Flow Routes: Automatically Interpreting Language Models at Scale.\u201d Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing. 2024.\\n\\n[3] Gurnee, Wes, et al. \\\"Finding Neurons in a Haystack: Case Studies with Sparse Probing.\\\" Transactions on Machine Learning Research. 2024.\\n\\n[4] Sharma, Pratyusha, Jordan T. Ash, and Dipendra Misra. \\\"The Truth is in There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction.\\\" The Twelfth International Conference on Learning Representations. 2024.\"}", "{\"title\": \"Further Response to Reviewer P74f (3/3)\", \"comment\": \"> It\u2019s worth mentioning that I am not familiar with the methods described in [1][2], but I am really familiar with [3]. In that study, the authors modified the attention weights before the softmax operation rather than afterward.\\n\\nWe appreciate the reviewer for pointing it out. To the best of our knowledge, in [3], the authors set the attention weights to zero *after* the softmax operation in the analysis section (Section 2.2) and propose an attention re-weighting strategy by modifying the attention weights *before* the softmax operation in the application section (Section 3.1). If we have misunderstood the paper, we sincerely apologize for the confusion.\\n\\n[1] Mohebbi, Hosein, et al. 
\\\"Quantifying Context Mixing in Transformers.\\\" Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics. 2023.\\n\\n[2] Jin, Zhuoran, et al. \\\"Cutting Off the Head Ends the Conflict: A Mechanism for Interpreting and Mitigating Knowledge Conflicts in Language Models.\\\" Findings of the Association for Computational Linguistics. 2024.\\n\\n[3] Wang, Lean, et al. \\\"Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning.\\\" Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. 2023.\\n\\n> Additionally, regardless of whether we refer to the updated or original version of the paper, the results in Figure 3 (previously Figure 4) remain puzzling. In Figure 3(a), we clearly observe many tokens with $\\\\tau > 40$, while the range $10 < \\\\tau < 20$ contains relatively few tokens. If, as the author claims, the performance drop in Figure 3(b) occurs because normal visual tokens are pruned when $\\\\tau < 15$, leading to significant degradation, then pruning random visual tokens when $\\\\tau > 40$ should presumably result in even more normal visual tokens being pruned. However, at this stage, the performance degradation is smaller than when pruning visual sink tokens in the range $10 < \\\\tau < 20$. How is this possible?\\n\\nWe are truly grateful for this insightful question, and we appreciate the opportunity to address it. The reason for the puzzling results in Figure 3 is that \\\"random visual tokens\\\" experiment is more robust to the token masking process than the \\\"visual sink tokens\\\" experiment.\\n\\nFirst, we would like to provide a detailed description of the experimental design to explain it. For each layer $\\\\ell$, we select the visual tokens that satisfy the condition $\\\\phi (\\\\boldsymbol{x} ^{\\\\ell-1} _ j) \\\\geq \\\\tau$ and mask them (depicted as \\\"visual sink tokens\\\" in Figure 3(b)). 
Similarly, we randomly select the same number of visual tokens and mask them (depicted as \\\"random visual tokens\\\" in Figure 3(b)). In the \\\"visual sink tokens\\\" experiment, although we select visual tokens for each layer, the selected tokens are almost consistent across layers because the hidden states do not change significantly from layer to layer (due to the residual connections). In contrast, in the \\\"random visual tokens\\\" experiment, the selected tokens differ for each layer because the selection is random. If the masked token contains important information, the model has almost no chance of acquiring that information in the \\\"visual sink tokens\\\" experiment. However, in the \\\"random visual tokens\\\" experiment, the model can recover the information from other layers.\\n\\nThis difference in the robustness of the two experiments causes the \\\"random visual tokens\\\" experiment to maintain higher performance than the \\\"visual sink tokens\\\" experiment, even when a larger number of normal visual tokens are masked. Additionally, since random visual tokens also include visual sink tokens, there is a possibility of selecting visual sink tokens in the \\\"random visual tokens\\\" experiment. Therefore, directly comparing the two graphs at different $\\\\tau$ values may be challenging. Instead, we would like to suggest considering the \\\"random visual tokens\\\" experiment as a reference for the \\\"visual sink tokens\\\" experiment at the same $\\\\tau$ value.\\n\\nWe sincerely hope that our responses have helped to clarify any remaining uncertainties. If we have misinterpreted any aspect of the reviewer's question, please notify us, and we will quickly resolve it. Lastly, we would like to express our gratitude to the reviewer for their valuable feedback and time spent reviewing our work.\"}", "{\"title\": \"Response to Reviewer b7X3 (2/3)\", \"comment\": \"# W2 / Q1. Lack of Experimental Validation for Visual Attention Sink\\n\\n> 1.a. 
I have previously experimented with removing the <bos> token, and the attention sink phenomenon still persisted in the first token. Therefore, I am curious whether new tokens will become visual sink tokens after redistributing the attention weights.\\n\\nThank you for the insightful and interesting question. Due to the autoregressive nature of LLMs, earlier tokens do not depend on later tokens. Since the visual tokens appear before the text tokens and VAR only redistributes the attention weights while processing the text tokens, the visual sink tokens are preserved after the attention weights are redistributed. Also, new visual sink tokens do not emerge after redistributing the attention weights.\\n\\nOn the other hand, we agree that removing the visual sink tokens and checking whether new visual sink tokens emerge is an interesting direction. Therefore, we have conducted an experiment to remove the visual sink tokens and check whether new visual sink tokens emerge, as the reviewer has experimented with the <bos> token. We find that new visual sink tokens do not emerge after removing the visual sink tokens. In fact, except for the first position, the sink tokens in LLMs (e.g., ',' and '\\\\n') also do not persist in the same position when we remove them. Therefore, our observation is consistent with the results in LLMs.\\n\\n> 1.b. The author has visualized tokens with high attention weights in the text, and indeed, for these cases, the visual sink tokens are unrelated to the main subject of the image. However, I believe it is still necessary to prove the trend of visual sink tokens being unrelated to the main subject of the image on a large-scale dataset based on segmentation data.\\n\\nWe agree that the validation on the segmentation dataset can show stronger evidence for the trend of visual sink tokens being unrelated to the main subject of the image. 
We have conducted an additional experiment to validate whether the visual sink tokens are unrelated to the main subject of the image. We use the Pascal-VOC and MS-COCO datasets, which are widely used large-scale datasets for object segmentation. We compare the portion of visual sink tokens that are not related to the main subject of the image with the portion of all visual tokens that are not related to the main subject of the image. The results are shown below.\\n\\n| Token Type | Pascal-VOC | MS-COCO |\\n| :--------: | :--------: | :-----: |\\n| Visual Sink Tokens | 90.5% | 93.7% |\\n| All Visual Tokens | 82.9% | 90.5% |\\n\\nThe results indicate that visual sink tokens are more likely to be located in the background regions compared to randomly selected visual tokens, supporting the conclusion that visual sink tokens are unrelated to the main subject of the image. We have added these results to the revised manuscript (Sec 4 and Appendix A.2).\\n\\n> 1.c. In Table 4.a, the selection of random tokens does not seem to be related to $\\\\tau$, so I want to confirm whether two masked methods with the same $\\\\tau$ value mean that the same number of tokens are masked.\\n\\nWe believe the reviewer is referring to Figure 4(a) (revised as Figure 3(b)). Yes, the number of tokens to be masked is the same for both methods for fair comparison. We have clarified this in the caption of Figure 3.\"}", "{\"title\": \"Further Response to Authors (2/3)\", \"comment\": \"## **About proportion of performance degradation**\\n\\nOK, I believe this is the most critical part. If I have correctly understood the author\u2019s explanation, the claim is that during the pruning of random visual tokens, some visual sink tokens are also included. 
As a result, when $\\\\phi > 40$, the number of pruned normal visual tokens is not significantly higher than the number pruned in the visual sink tokens experiment when $10 < \\\\phi < 20$.\\n\\nHowever, I had already considered the efficiency of pruning when raising this question. Since I lack access to specific data, I could only infer from Figure 3(a) that the ratio of normal visual tokens to visual sink tokens is approximately 4:1, while the ratio of tokens with $10 < \\\\phi < 20$ to those with $\\\\phi > 40$ is at least 5:1. Therefore, in the random visual tokens experiment, when $\\\\phi > 40$, the number of pruned normal visual tokens should be at least **four times** the number pruned in the visual sink tokens experiment when $10 < \\\\phi < 20$.\\n\\nYet, in the experiment, both scenarios show the **same proportion of performance degradation**, particularly when $\\\\phi > 45$, where the degradation becomes inexplicably low. This inconsistency is quite puzzling to me.\"}", "{\"title\": \"Response to Reviewer P74f (1/3)\", \"comment\": \"Thank you for your detailed and thoughtful review. First, we apologize for the lack of clarity in the experimental setup details, descriptions, and incomplete results that may have made the reading uncomfortable. The review was very helpful for improving our paper, making many parts more clarified or concrete. We provide a detailed response to the reviewer's comments below. If there are any remaining concerns, please inform us, and we will address them promptly.\\n\\n# Q1. Detailed Clarification of Figure 1\\n\\n> Q1. In Figure 1, does the attention map show the attention scores between visual tokens and a specified text token, or between visual tokens and the output token?\\n\\nThe attention map in Fig 1 shows the attention scores between visual tokens and a specified text token. We have clarified this in the caption of the revised manuscript.\\n\\n# W1 / Q2. 
Concerns Regarding Interpretation of Figure 4(a) Results\\n\\n> W1. The results in Figure 4(a) are essential for validating the effectiveness of the sink token filtering strategy, yet they are challenging to interpret. If the randomly selected tokens are drawn from all input tokens, comparing them with visual sink tokens becomes less fair, making it difficult to substantiate the filtering's effectiveness. Conversely, if the random tokens are sampled exclusively from visual tokens, the experimental results appear unreasonable. Even without visual input, the model\\u2019s accuracy on the POPE benchmark should remain above 50%, yet the figure shows a final result of 0. Furthermore, when $\\\\tau < 10$, the tokens being pruned are predominantly standard visual tokens. Despite this, a significant difference in F1-Score persists between the two pruning strategies, contradicting the results seen when $\\\\tau > 10$. Without a clear explanation of the experimental findings in Figure 4(a), the credibility of the sink token filtering strategy is undermined.\\n\\n> Q2. In the experiment shown in Figure 4(a) on the POPE benchmark, are the \\\"random tokens\\\" randomly selected from only the visual tokens? If not, and they are instead selected from all tokens, wouldn\\u2019t this comparison be unfair? If the tokens are indeed chosen only from visual tokens, how is it possible that the model's predictions drop to zero when masking all visual tokens? On the POPE dataset, LLaVA can maintain over 50% accuracy even without image input, so this experimental result is difficult to understand.\\n\\nThank you for raising these questions. We will address them separately in three parts. Note that Figure 4(a) is changed to Figure 3(b) in the revised manuscript.\\n\\n## A. Random Token Selection: Visual Tokens or All Tokens?\\n\\nThe \\\"random tokens\\\" in the experiment shown in Fig 4(a) are randomly selected from only the visual tokens. 
As the reviewer pointed out, we agree that masking all input tokens including text tokens is unfair. We have clarified that in the figure, caption and main text of the revised manuscript. \\n\\n## B. F1 Score of 0 in Figure 4(a) (revised as Fig 3(b))\\n\\nWe acknowledge that an F1 Score of 0 seems contradictory with common sense. The reason for this is the attention masking method we used. There are two mainstream methods for attention masking: before softmax (set the attention mask to -inf) [1, 2, 3] and after softmax (set the attention weight to 0) [4, 5, 6]. Both are widely used in literature. We chose the latter method for convenience. When masking a small number of tokens, this method does not cause significant issues because the sum of attention weights is close to 1. However, when masking a large number of tokens, the sum of attention weights is not preserved, leading to model impairment and outputting [UNK] (Unknown) tokens. Consequently, the model cannot respond, resulting in an F1 Score of 0.\\n\\nSince our intention was to prevent the model from seeing specific tokens, not to impair the model, we re-ran the experiment using the former method, masking before the softmax. The former method can preserve the sum of attention weights, thereby preventing model impairment. We have replaced the results in Fig 4(a) with the new results and described the method in Appendix C.3. With this change, the model maintains about 50% F1 Score.\"}", "{\"title\": \"Further Response to Reviewer P74f (2/3)\", \"comment\": \"# Response to Q2. Detailed Explanation of Figure 3\\n\\n> The author\\u2019s explanation of the initial experimental results is quite surprising. I had speculated whether the author might have adjusted the attention scores *after* the softmax operation. If this were the case, **it would essentially disrupt the input distribution of the model entirely**, particularly given the high attention scores of certain tokens. 
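To make the pre- vs. post-softmax masking distinction above concrete, here is a minimal NumPy sketch (our own illustration, not the authors' implementation): masking before the softmax renormalizes the remaining weights so each row still sums to 1, whereas zeroing weights after the softmax does not preserve the row sums, which is what impaired the model in the original experiment.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def mask_before_softmax(scores, mask):
    # Masked logits become -inf, so the softmax renormalizes over the
    # remaining tokens and every row still sums to 1.
    return softmax(np.where(mask, -np.inf, scores))

def mask_after_softmax(scores, mask):
    # Weights are zeroed after normalization, so the row sum shrinks and
    # the attention output is scaled down when many tokens are masked.
    return np.where(mask, 0.0, softmax(scores))
```

With most of a row masked, the pre-softmax variant still yields a valid attention distribution, consistent with the roughly 50% F1 behavior the authors report after switching methods; the post-softmax variant lets the total attention mass collapse.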
It is difficult to believe such adjustments would have no effect on the results.\\n\\nBased on our understanding, the reviewer is concerned about how the model's performance remains unaffected despite the attention weights of the visual sink tokens being set to zero *after* the softmax operation. If our understanding is correct, we recognize why the reviewer might think this way, and we acknowledge that it could seem somewhat unintuitive. There are two reasons why the model's performance remains stable despite the zeroed attention weights.\\n\\n### 1) Sink tokens have high attention weights, but small vector norms\\n\\nPrevious studies found that sink tokens have high attention weights but small vector norms [1, 2]. The multi-head attention (MHA) mechanism in transformers can be expressed as follows:\\n\\n$$\\n\\\\text{MHA}^{\\\\ell, h} (\\\\boldsymbol{x}^{\\\\ell-1} _ i) = \\\\sum _ {j \\\\leq i} \\\\alpha^{\\\\ell, h} _ {i, j} \\\\boldsymbol{x} _ j^{\\\\ell-1} \\\\boldsymbol{W} _ {OV} ^ {\\\\ell, h}.\\n$$\\n\\nAlthough the attention weights of the sink tokens (i.e., $\\\\alpha ^ {\\\\ell, h} _ {i, j} ~ (i \\\\in \\\\mathcal{I} _ {\\\\textsf{txt}}, j \\\\in \\\\check{\\\\mathcal{I}} ^ \\\\ell _ \\\\textsf{vis})$) are high, the vector norms of the sink tokens (i.e., $\\\\Vert \\\\boldsymbol{x} _ j ^ {\\\\ell-1} \\\\boldsymbol{W} _ {OV} ^ {\\\\ell, h} \\\\Vert$) are small. 
We compare the final contributions of the visual sink tokens $\\\\Vert \\\\alpha ^ {\\\\ell, h} _ {i, j} \\\\boldsymbol{x} _ j ^ {\\\\ell-1} \\\\boldsymbol{W} _ {OV} ^ {\\\\ell, h} \\\\Vert$ ($j \\\\in \\\\check{\\\\mathcal{I}} ^ \\\\ell _ \\\\textsf{vis}$) with those of the visual non-sink tokens ($j \\\\in \\\\mathcal{I} _ \\\\textsf{vis} \\\\setminus \\\\check{\\\\mathcal{I}}^\\\\ell _ \\\\textsf{vis}$) below.\\n\\n| Token Type | Visual Sink Tokens | Visual Non-Sink Tokens |\\n|:----------:|:------------------:|:----------------------:|\\n| Contribution $\\\\Vert \\\\alpha ^ {\\\\ell, h} _ {i, j} \\\\boldsymbol{x} _ j ^ {\\\\ell-1} \\\\boldsymbol{W} _ {OV} ^ {\\\\ell, h} \\\\Vert$ | $7.95 \\\\times 10 ^ {-4}$ | $3.74 \\\\times 10 ^ {-3}$ |\\n\\nThe final contributions are calculated in 100 samples and averaged. As shown in the table, the final contributions of the visual sink tokens are smaller than those of the visual non-sink tokens. Therefore, even though the sink tokens have high attention weights, they have less influence compared to other visual tokens due to their small vector norms.\\n\\n### 2) Transformer architecture is generally robust to ablations.\\n\\nTransformer architectures are known to be robust to various ablations, such as the removal of attention heads or layers [3, 4, 5]. The reasons for this robustness are not yet fully understood, but it is believed that transformers possess a highly redundant structure enabled by residual connections [4] and activate self-repair mechanisms when certain components are removed [6, 7]. 
Therefore, if visual sink tokens do not carry important information, the model can potentially compensate for the changes in hidden states caused by zeroing out the attention weights of the sink tokens.\\n\\nTo summarize, the stability of the model's performance despite zeroing out the attention weights of the visual sink tokens can be attributed to the small vector norms of the sink tokens and the robustness of the transformer architecture.\\n\\nWe have reuploaded the response to Q2 due to an implementation error in calculating the final contributions of visual sink and non-sink tokens. We apologize for the inconvenience.\\n\\n[1] Kobayashi, Goro, et al. \\\"Attention is Not Only a Weight: Analyzing Transformers with Vector Norms.\\\" Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing. 2020.\\n\\n[2] Sun, Mingjie, et al. \\\"Massive Activations in Large Language Models.\\\" Conference on Language Modeling (COLM), 2024.\\n\\n[3] Michel, Paul, Omer Levy, and Graham Neubig. \\\"Are Sixteen Heads Really Better than One?\\\" Advances in Neural Information Processing Systems. 2019.\\n\\n[4] He, Shwai, et al. \\\"What Matters in Transformers? Not All Attention is Needed.\\\" 2024.\\n\\n[5] Lad, Vedang, Wes Gurnee, and Max Tegmark. \\\"The Remarkable Robustness of LLMs: Stages of Inference?\\\" 2024.\\n\\n[6] McGrath, Thomas, et al. \\\"The Hydra Effect: Emergent Self-repair in Language Model Computations.\\\" 2023.\\n\\n[7] Rushing, Cody, and Neel Nanda. \\\"Explorations of Self-Repair in Language Models.\\\" Proceedings of the 41st International Conference on Machine Learning. 2024.\"}", "{\"summary\": \"The paper explores the phenomenon of visual attention sinks in large multimodal models (LMMs). It identifies that these models often allocate high attention weights to irrelevant visual tokens during text-image processing, similar to \\\"attention sinks\\\" observed in language models. 
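As a small numerical sketch of the norm argument above (toy shapes and values of our own choosing, not data from the paper): a token can receive several times the attention weight of another and still contribute less to the attention output if its transformed value vector `x_j @ W_OV` has a small norm.

```python
import numpy as np

def token_contributions(alpha, X, W_OV):
    """Norm of each token's term in the attention output,
    ||alpha_j * (x_j @ W_OV)||, for a single query position."""
    transformed_norms = np.linalg.norm(X @ W_OV, axis=-1)
    return alpha * transformed_norms

# Toy setup: a sink-like token with a tiny hidden state but 4x the
# attention weight of a normal token.
X = np.stack([np.full(4, 0.01),                  # sink-like token
              np.array([1.0, -1.0, 0.5, 0.5])])  # normal token
alpha = np.array([0.8, 0.2])
contrib = token_contributions(alpha, X, np.eye(4))
# contrib[0] (sink) is far smaller than contrib[1] (normal) despite
# the sink token's much higher attention weight.
```

This mirrors the table above, where the averaged contribution of visual sink tokens was roughly five times smaller than that of visual non-sink tokens.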
This issue results in diminished attention to crucial visual information, which can hinder the model\\u2019s multimodal understanding. To address this, the authors propose a method called Visual Attention Redistribution (VAR), which reallocates attention from irrelevant \\\"sink\\\" tokens to relevant visual tokens. The VAR approach first identifies image-centric attention heads that naturally focus on visual content, then redistributes the excess attention from sink tokens to strengthen attention on the actual image content.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"Insightful Identification of Visual Attention Sink: The authors make an important discovery by identifying the phenomenon of visual attention sinks in LMMs, showing that these models often allocate high attention weights to irrelevant visual tokens. This observation highlights a key inefficiency in multimodal attention mechanisms and contributes to a deeper understanding of model behaviors.\", \"Effective Attention Redistribution Technique: The proposed Visual Attention Redistribution (VAR) method is both innovative and practical. By reallocating attention from sink tokens to relevant visual tokens, VAR improves focus on key image content, enhancing multimodal model performance without additional training or inference overhead, making it easily applicable across different LMMs.\", \"Broad Experimental Validation Across Tasks: The paper rigorously validates VAR across a range of multimodal tasks, including visual question answering, visual hallucination reduction, and spatial understanding tasks. This broad evaluation demonstrates the robustness and adaptability of the method, showcasing its effectiveness in various vision-language scenarios.\"], \"weaknesses\": [\"The results in Figure 4(a) are essential for validating the effectiveness of the sink token filtering strategy, yet they are challenging to interpret. 
If the randomly selected tokens are drawn from all input tokens, comparing them with visual sink tokens becomes less fair, making it difficult to substantiate the filtering's effectiveness. Conversely, if the random tokens are sampled exclusively from visual tokens, the experimental results appear unreasonable. Even without visual input, the model\\u2019s accuracy on the POPE benchmark should remain above 50%, yet the figure shows a final result of 0. Furthermore, when $\\\\tau < 10$, the tokens being pruned are predominantly standard visual tokens. Despite this, a significant difference in F1-Score persists between the two pruning strategies, contradicting the results seen when $\\\\tau > 10$. Without a clear explanation of the experimental findings in Figure 4(a), the credibility of the sink token filtering strategy is undermined.\", \"The scope of the attention redistribution strategy remains ambiguous. Specifically, which tokens are targeted by this strategy? In practice, attention redistribution could apply to both input instruction tokens and output tokens, but the implementation should align with the analytical findings. If the strategy is applied only to specific tokens, it is crucial to specify how these tokens are selected; if applied to all output tokens, the analysis should demonstrate stable consistency in the attention assigned to sink tokens. However, this consistency is not currently evident in the analysis.\"], \"questions\": \"1. In Figure 1, does the attention map show the attention scores between visual tokens and a specified text token, or between visual tokens and the output token?\\n\\n2. In the experiment shown in Figure 4(a) on the POPE benchmark, are the \\\"random tokens\\\" randomly selected from only the visual tokens? If not, and they are instead selected from all tokens, wouldn\\u2019t this comparison be unfair? 
If the tokens are indeed chosen only from visual tokens, how is it possible that the model's predictions drop to zero when masking all visual tokens? On the POPE dataset, LLaVA can maintain over 50% accuracy even without image input, so this experimental result is difficult to understand.\\n\\n3. Regarding the selection of image-centric attention heads, is the VAR strategy applied to all text tokens, or only to specific text tokens, or only to output tokens? If the experiment aligns with the analysis that VAR strategy only applied to specific object token, like \\\"clock\\\" in \\\"Is there a clock in this image?\\\", how are these particular tokens identified? If applied to all text tokens, is the visual token sink phenomenon consistent across all text tokens, which seems unlikely?\\n\\n4. During inference, is the attention redistribution performed layer by layer? Specifically, does the forward pass first obtain original results, identify image-centric attention heads, then update attention weights, and proceed with attention weight redistribution in the next layer? \\n\\nIf the author can address my second and third questions, I will raise my rating.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Final Response to Reviewer P74f\", \"comment\": \"First of all, we would like to express our sincere gratitude to the reviewer for investing time and effort in providing detailed feedback on our work. As this is our final response, we have also included a general response, particularly addressing the discussions between the reviewer and us. We kindly ask the reviewer to refer to the general response, as it provides additional context for the discussion. Below, we address the reviewer\\u2019s specific comments.\\n\\n> Unfortunately, my own experimental results do not support this claim. 
My experiments indicate that pruning 25% of visual tokens at fixed positions does not lead to significant performance degradation. In my implementation, I modified the `attention_mask` parameter in the `llama_model` module to mask out a portion of the visual tokens. Furthermore, in the author's experimental results, masking only approximately 15% of tokens resulted in a performance drop close to 50%, which is nearly equivalent to random guessing. Could the authors provide more details about their experimental setup?\\n\\nWe appreciate the reviewer for sharing the details of their experiment. Despite our efforts, we were unable to replicate the reviewer's results because it was unclear which specific queries and keys were masked and at what positions. Appendix C.3 provides a detailed description of our experimental setup, specifically explaining how we masked the visual tokens (i.e., specific queries and keys). In the final version of the paper, we will further refine this description to offer a more intuitive understanding of the masking process.\\n\\n> Regrettably, the author's response did not resolve my confusion on this issue. If my understanding is correct, the authors aim to demonstrate that in the random visual tokens experiment, when $\\\\phi(x) > 40$, pruning visual tokens at fixed positions results in a more significant performance degradation than in the visual sink tokens experiment, where $10 < \\\\phi(x) < 20$. However, in the visual sink tokens experiment, when $10 < \\\\phi(x) < 20$, the pruned tokens should already correspond to **normal visual tokens**, **which should effectively equate to dynamic pruning**. Therefore, the authors need to provide an explanation for the inconsistency in the degradation rates presented in Figure 3 (b). 
This discrepancy cannot be sufficiently accounted for by the results of pruning at fixed positions, especially considering that there appears to be a significant difference between our experimental results for fixed-position pruning.\\n\\nAs mentioned in our previous response (Further Response to Reviewer P74f (3/3)), visual tokens\\u2014whether visual sink tokens or normal visual tokens\\u2014remain nearly consistent across layers because the hidden states do not change significantly from layer to layer (due to the redundancy from residual connections). Therefore, when $10 < \\\\phi(x) < 20$ in the \\u201cvisual sink token\\u201d experiment, the masked normal visual tokens are similar to the **fixed masking setting, not dynamic masking**. Given this, we believe that our explanation in \\\"*Further Response to Reviewer P74f \\u2161 (3/3)*\\\" sufficiently addresses the reviewer\\u2019s concern.\\n\\nIt is unfortunate that we do not have more time for further discussion on this issue. However, we would like to emphasize that **the primary goal of the visual sink token experiment is to demonstrate that the visual sink token has a negligible impact on the model's performance**. This conclusion is supported by the results in Figure 3(b) and our discussion with the reviewers (further discussion can be found in the final general response). We hope that the reviewer finds our explanation satisfactory within the context of the entire paper.\\n\\nFinally, we sincerely thank the reviewer for their active participation in the discussion and for providing valuable insights throughout this process. It has been a meaningful and rewarding experience for us, and we truly appreciate it.\"}", "{\"metareview\": \"### Paper Summary:\\nThis paper introduces and analyzes the \\\"visual attention sink\\\" phenomenon in Large Multimodal Models (LMMs), where irrelevant visual tokens receive disproportionately high attention weights. 
The authors propose Visual Attention Redistribution (VAR) to reallocate attention from sink tokens to more relevant visual information, improving model performance without additional training.\\n\\n### Strengths:\\n1. Novel phenomenon identification [1YMG]: \\n> \\\"The identification of visual attention sinks draws parallels with attention sinks in language models, providing a novel insight into the functioning of LMMs.\\\"\\n\\n2. Simple yet effective solution [P74f]:\\n> \\\"The proposed Visual Attention Redistribution (VAR) method is both innovative and practical. By reallocating attention from sink tokens to relevant visual tokens, VAR improves focus on key image content.\\\"\\n\\n3. Comprehensive validation [b7X3]: \\n> \\\"The experiments in this paper cover a comprehensive range of content and consistently achieve stable improvements across various tasks.\\\"\\n\\n### Weaknesses:\\n1. Hyperparameter sensitivity [1YMG]:\\n> \\\"The reviewer's biggest concern is about the determination of hyperparameter \\u03c1. According to the paper and Fig 7(b), it is benchmark-dependent.\\\"\\n\\n2. Reproducibility concerns [P74f]:\\n> \\\"Pruning 25% of visual tokens at fixed positions does not lead to significant performance degradation... Could the authors provide more details about their experimental setup?\\\"\\n\\n### Justification:\\nWhile P74f raised important concerns about experimental reproducibility, these focus primarily on one validation approach rather than the core contribution. The phenomenon is well-documented through multiple approaches, and the solution shows practical benefits. As noted by P74f:\\n> \\\"Most other aspects of this work appear quite solid\\\"\\n\\nTwo other expert reviewers rated the paper above acceptance threshold, acknowledging its contribution to understanding LMM behavior. However, the authors should add detailed experimental protocols and acknowledge limitations in token masking experiments. 
\\n\\nAs a final remark, the identification of the attention sink phenomenon itself, in my opinion, represents a valuable contribution to the field. A paper can merit acceptance even if readers do not fully agree with the authors' proposed explanation, as the explanation why / how modern deep models work is constantly changing. Since no fundamental flaws were found in the technical work, I recommend acceptance.\", \"additional_comments_on_reviewer_discussion\": \"The key debate centered on token masking experiments (Figure 3b). P74f's experiments showed different results from authors:\\n> \\\"I randomly selected 25% of the visual tokens and pruned these same fixed positions across all layers. The resulting POPE test score was 82.4\\\"\", \"authors_defended_their_methodology_and_added_evidence\": \"> \\\"Pruning involves excluding visual tokens before they enter the layer, while masking is a post-processing step that eliminates their impact after attention calculation\\\"\\n\\nDespite disagreement on this experiment, the core claims are supported by multiple other validations:\\n1. Quantitative validation on widely-adopted datasets (Pascal-VOC, MS-COCO) showing sink tokens appear in background regions\\n2. Consistent improvements across recent models (Qwen2-VL, InternVL2)\\n3. Analysis showing token contributions align with theoretical predictions\"}", "{\"title\": \"Final Response to Authors (1/2)\", \"comment\": \"## **1. About the Fixed Position of Visual Tokens Pruning**\\n\\nIn my expectation, **fixed masking** should have a smaller impact on the model's performance compared to **dynamic masking**. As a result, I hypothesized that the experimental results for the *visual sink tokens* scenario would show a relatively smaller performance degradation, meaning that the blue line in Figure 3(b) should shift upward. 
However, the author's experiments suggest the opposite: fixed masking has a more significant negative impact on the model's performance than dynamic masking, which implies that the blue line in Figure 3(b) would shift further downward.\\n\\nUnfortunately, my own experimental results do not support this claim. My experiments indicate that pruning 25% of visual tokens at fixed positions does not lead to significant performance degradation. In my implementation, I modified the `attention_mask` parameter in the `llama_model` module to mask out a portion of the visual tokens. Furthermore, in the author's experimental results, masking only approximately 15% of tokens resulted in a performance drop close to 50%, which is nearly equivalent to random guessing. Could the authors provide more details about their experimental setup?\"}", "{\"title\": \"Further response to reviewer p74f \\u2161 (3/3)\", \"comment\": \"# Response to #3: About fixed position of visual tokens pruning\\n\\n> Additionally, the author\\u2019s response raises a new question. The author mentioned that **in deeper layers, visual sink tokens are nearly fixed**, which means pruning visual tokens in these layers is also fixed. However, in the random visual tokens experiment, the pruned visual tokens vary dynamically across layers. Does this discrepancy make the comparison unfair, potentially leading to misleading results? 
**Pruning tokens from fixed positions in each layer versus dynamically pruning tokens from different positions inherently causes different levels of disruption to the information flow.**\\n\\n> More specifically, if visual tokens are pruned from fixed positions in each layer, wouldn\u2019t the impact on model performance be more limited?\\n\\n> Does this suggest that the observed performance degradation might not stem from differences between sink tokens and normal tokens, **but rather from the distinction between pruning fixed versus dynamic positions across layers**?\\n\\nWe appreciate the insightful question about the impact of fixed versus dynamic masking on model performance, and we conducted additional experiments to investigate it. Specifically, we compared the performance degradation of fixed masking and dynamic masking. The number of tokens masked in these experiments is the same as the number of tokens masked in the \\u201crandom visual tokens\\u201d experiment ($\\\\phi (x) > 40$ and $\\\\phi (x) > 45$). Thus, the dynamic masking experiment is expected to show performance degradation similar to our \\u201crandom visual tokens\\u201d experiment in Figure 3(b). Since the discussion period is limited, we evaluated the F1 score on about 10% of the POPE benchmark. The results are shown below:\\n\\n| $\\\\phi (x)$ | Fixed Masking | Dynamic Masking |\\n|--------|---------------|-----------------|\\n| $>40$ | 52.0 | 61.3 |\\n| $>45$ | 61.1 | 77.6 |\\n\\nThe results of the dynamic masking experiment are consistent with the \\u201crandom visual tokens\\u201d experiment in Figure 3(b). These results show that fixed masking has a greater impact on performance degradation than dynamic masking. 
Therefore, the comparison between the \\\"visual sink tokens\\\" experiment (similar to Fixed Masking) and the \\\"random visual tokens\\\" experiment (similar to Dynamic Masking) is biased in a way that disadvantages the \\\"visual sink tokens\\\" experiment. Even so, the sink token experiment consistently maintains performance compared to the \\u201crandom visual tokens\\u201d experiment for $\\\\tau > 20$. This result supports the conclusion that the visual sink tokens are uninformative.\\n\\nThese results also align with the discussion in #2. Since the number of masked normal visual tokens for $\\\\phi (x) > 40$ is approximately three times the number of masked normal visual tokens for $10 < \\\\phi (x) < 20$, the performance degradation in fixed masking at $\\\\tau = 40$ is higher than that of $\\\\tau = 10$ in the \\\"visual sink tokens\\\" experiment. This holds true for fixed masking at $\\\\tau = 45$ and $\\\\tau = 15$ in the \\\"visual sink tokens\\\" experiment as well.\\n\\n> I conducted a simple experiment myself: **I randomly selected 25% of the visual tokens and pruned these same fixed positions across all layers**. The resulting POPE test score was **82.4**, which seems comparable to the results shown in the visual sink tokens experiment in the figure-3(b).\\n\\nWe appreciate the reviewer's effort in conducting the experiment. After performing our own \\\"Fixed Masking\\\" experiments, we are puzzled by the discrepancy between the reviewer's results and our own. Since we do not have complete details about the reviewer's experiment, we cannot provide a definitive explanation. However, we believe that the discrepancy may stem from the difference between pruning and masking. Pruning involves excluding visual tokens before they enter the layer, while masking is a post-processing step that eliminates their impact after the attention calculation has been completed within the layer. 
While pruning maintains performance even when a large number of tokens are removed [1], masking leads to performance degradation even when only a small number of tokens are excluded [2, 3]. If we have misunderstood the reviewer's experiment, we sincerely apologize and would appreciate it if the reviewer could provide more details about the experiment.\\n\\n[1] Chen, Liang, et al. \\\"An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Inference Acceleration for Large Vision-Language Models.\\\" European Conference on Computer Vision. 2024.\\n\\n[2] Geva, Mor, et al. \\\"Dissecting Recall of Factual Associations in Auto-Regressive Language Models.\\\" Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. 2023.\\n\\n[3] Neo, Clement, et al. \\\"Towards Interpreting Visual Information Processing in Vision-Language Models.\\\" 2024.\"}", "{\"title\": \"Final General Response (2/2)\", \"comment\": \"We believe that the experiments we conducted should not confuse or hinder the reader's understanding of our work. We will provide more intuitive explanations and organize the experimental settings to make it easier for readers to understand the novel phenomenon of the visual sink token and to reproduce the experiments in the final version of the paper.\\n\\nFinally, we sincerely appreciate the reviewers for their constructive feedback, which has greatly improved the clarity and substance of our work. We are grateful for the opportunity to engage in this discussion and to enhance our research through this process.\\n\\nSincerely,\\n\\nThe Authors\\n\\n[1] Geva, Mor, et al. \\\"Dissecting Recall of Factual Associations in Auto-Regressive Language Models.\\\" Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. 2023.\\n\\n[2] Neo, Clement, et al. 
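For clarity, the two masking protocols compared in the exchange above can be sketched as follows (an illustrative reconstruction; the function names are our own): fixed masking reuses one set of token positions in every layer, while dynamic masking resamples the positions independently per layer.

```python
import numpy as np

def fixed_mask_positions(n_tokens, k, n_layers, rng):
    # Fixed masking: one set of positions, reused in every layer.
    idx = rng.choice(n_tokens, size=k, replace=False)
    return [idx for _ in range(n_layers)]

def dynamic_mask_positions(n_tokens, k, n_layers, rng):
    # Dynamic masking: positions resampled independently per layer.
    return [rng.choice(n_tokens, size=k, replace=False)
            for _ in range(n_layers)]
```

The authors' point is that visual sink tokens barely move across layers, so masking them behaves like the fixed protocol, while the "random visual tokens" baseline behaves like the dynamic one.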
\\\"Towards Interpreting Visual Information Processing in Vision-Language Models.\\\" 2024.\"}", "{\"title\": \"Response to Reviewer KaLi (1/2)\", \"comment\": \"We appreciate the reviewer's thoughtful feedback and valuable suggestions. We provide a detailed response to the reviewer's comments below.\\n\\n# W1 / W2 / Q1. Insufficient Discussion on Sink Tokens\\n\\n> W1. Insufficient references. There is a work [1] on sink tokens in vision models that is not cited or discussed.\\n\\n> W2. Insufficient discussion on sink tokens. Sink tokens have been observed in different kinds of models, including language models, vision models, and multi-modal models. What's the relationship between sink tokens in different kinds of models? Are they just similar phenomenons, or are there special properties of multimodal models? Moreover, previous works claim that sink tokens exist because of excessive attention scores, so why does it work to recycle these attention scores? From the experiment perspective, the paper is also encouraged to apply previous methods to multi-modal models.\\n\\n> Q1. As mentioned in the weakness, the paper should discuss the relationship between sink tokens in language models and vision models, from the perspective of token properties, emerging mechanisms and proposed methods.\\n\\nWe thank the reviewer for pointing out the insufficient discussion on the relationship between sink tokens in different kinds of models. Sink tokens in vision models, language models, and multimodal models are similar phenomena, but those in multimodal models are slightly different. \\n\\n(**Token Properties**) High attention weights and massive activation of sink tokens are common properties in different models. We also find that visual sink tokens are more likely to be located in the background region (unrelated to the main objects). This characteristic is consistent with the findings in vision models [1] and language models [3]. 
Specifically, sink tokens in ViT are located in the background regions, and sink tokens in language models have limited semantic meaning. Therefore, we conclude that tokens with less semantic meaning are more likely to be sink tokens across various models.\\n\\n(**Emerging Mechanisms**) In vision models and language models, sink tokens emerge as a result of massive training data [1, 2]. Since multimodal models are built on pre-trained language models, the visual attention sink phenomenon in multimodal models is likely inherited from language models. The evidence is that sink dimensions in multimodal models are identical to those in base language models, as discussed in Appendix A.1.\\n\\n(**Proposed Methods**) As the reviewer mentioned, sink tokens exist because of excessive attention scores. However, the excessive attention scores are useless for the model's prediction, as we discussed in Sec 4.2 and 4.3. Recycling these attention scores is effective and safe because it compensates for the low attention weights on visual tokens and does not modify the original attention distribution except for the sink tokens. We tried to apply previous methods such as [3] to multimodal models, but there was no improvement in performance. The Visual + Text setting in Table 5 can be considered a soft version of [3], but it is not as effective as the proposed method (VAR).\\n\\nOverall, we agree that the paper should discuss the relationship between sink tokens in different kinds of models more thoroughly. We have added [1] to the related work and discussed the relationship between sink tokens in different models in Sec 4.2 and Appendix A.2. If there are any remaining concerns, please inform us, and we will address them promptly.\\n\\n[1] Darcet, Timoth\u00e9e, et al. \\\"Vision Transformers Need Registers.\\\" The Twelfth International Conference on Learning Representations. 2024.\\n\\n[2] Gu, Xiangming, et al. 
\\\"When Attention Sink Emerges in Language Models: An Empirical View.\\\" 2024.\\n\\n[3] Yu, Zhongzhi, et al. \\u201cUnveiling and Harnessing Hidden Attention Sinks: Enhancing Large Language Models without Training through Attention Calibration.\\\" Forty-first International Conference on Machine Learning. 2024.\"}", "{\"title\": \"Response to Reviewer P74f (2/3)\", \"comment\": \"## C. Interpretation of the Results in Figure 4(a) (revised as Fig 3(b))\\n\\nBased on our understanding, the reviewer's concern is about why the performance difference between the visual sink token pruning and the random token pruning strategies persists when $\\\\tau < 10$. We believe that the concern is due to the contradictory results (i.e., F1 Score of 0) and the reviewer's intuition is indeed correct. As we have replaced the experiment as stated in B, we provide a more detailed explanation for the revised experimental results below.\\n\\nAs shown in Fig 3(a), when $\\\\tau$ decreases to 15, the standard visual tokens start to be pruned. However, most standard visual tokens are still not pruned and visual sink tokens still dominate among the pruned tokens. Therefore, the performance is maintained compared to the random visual token pruning. The difference in performance becomes smaller as $\\\\tau$ decreases, as more standard visual tokens are pruned. We have clarified the explanations of experimental results.\\n\\n[1] Cao, Shuyang, and Lu Wang. \\\"Attention Head Masking for Inference Time Content Selection in Abstractive Summarization.\\\" Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 2021.\\n\\n[2] Geva, Mor, et al. \\\"Dissecting Recall of Factual Associations in Auto-Regressive Language Models.\\\" Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. 2023.\\n\\n[3] Neo, Clement, et al. 
\\\"Towards Interpreting Visual Information Processing in Vision-Language Models.\\\" 2024.\\n\\n[4] Wang, Lean, et al. \\\"Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning.\\\" Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. 2023.\\n\\n[5] Mohebbi, Hosein, et al. \\\"Quantifying Context Mixing in Transformers.\\\" Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics. 2023.\\n\\n[6] Jin, Zhuoran, et al. \\\"Cutting Off the Head Ends the Conflict: A Mechanism for Interpreting and Mitigating Knowledge Conflicts in Language Models.\\\" Findings of the Association for Computational Linguistics. 2024.\\n\\n# W2 / Q3. Ambiguity in Scope of Token Selection for Attention Redistribution Strategy\\n\\n> W2. The scope of the attention redistribution strategy remains ambiguous. Specifically, which tokens are targeted by this strategy? In practice, attention redistribution could apply to both input instruction tokens and output tokens, but the implementation should align with the analytical findings. If the strategy is applied only to specific tokens, it is crucial to specify how these tokens are selected; if applied to all output tokens, the analysis should demonstrate stable consistency in the attention assigned to sink tokens. However, this consistency is not currently evident in the analysis.\\n\\n> Q3. Regarding the selection of image-centric attention heads, is the VAR strategy applied to all text tokens, or only to specific text tokens, or only to output tokens? If the experiment aligns with the analysis that VAR strategy only applied to specific object token, like \\\"clock\\\" in \\\"Is there a clock in this image?\\\", how are these particular tokens identified? 
If applied to all text tokens, is the visual token sink phenomenon consistent across all text tokens, which seems unlikely?\\n\\nWe apologize for the lack of clarity in the token selection for the attention redistribution strategy. The target tokens of the VAR strategy are **all text tokens**, including both input instruction tokens and output tokens. We have added Fig 13, which visualizes the visual attention maps between all text tokens and visual tokens, to demonstrate that the visual attention sink is consistent across all text tokens. Therefore, VAR does not require a post-processing step to identify specific tokens. As the text tokens related to the visual information have more image-centric heads (Appendix A.3), the VAR strategy is automatically applied more effectively to these tokens.\\n\\nMeanwhile, in most of the figures, we only show the text-to-image attention map for specific object tokens to convey the concept more intuitively. However, we acknowledge that this may cause confusion about the scope of VAR. We have added clarifications in the caption of Fig 1 and the main text in Sec 5 to emphasize that the VAR strategy is applied to all text tokens.\"}", "{\"comment\": \"Hi,\\n\\nI very much appreciate your diligence during the author-reviewer discussion period. I have a question regarding your fourth point: \\\"Visual sink tokens exhibit similar characteristics to text sink tokens, which are known to have a negligible impact on the model.\\\"\\n\\nContrary to this statement, previous works [1][2] indicate that text attention sinks can have a significant impact on model performance. Could you please clarify this point further? I agree that the value vector in LLM attention has a small norm, which aligns with the observation on the image side. However, there are some differences to consider\\u2014for example, removing the text attention before the softmax calculation can have a significant impact on the model's performance.\\n\\n[1] Ge, Suyu, et al. 
\\\"A little goes a long way: Efficient long context training and inference with partial contexts.\\\" arXiv preprint arXiv:2410.01485 (2024).\\n\\n[2] Xiao, Guangxuan, et al. \\\"Efficient streaming language models with attention sinks.\\\" arXiv preprint arXiv:2309.17453 (2023).\"}", "{\"comment\": \"Thank you for the detailed response, I have decided to raise my score.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Further response to reviewer p74f \\u2161 (2/3)\", \"comment\": \"# Response to #2: About proportion of performance degradation\\n\\n> If I have correctly understood the author\\u2019s explanation, the claim is that during the pruning of random visual tokens, some visual sink tokens are also included. As a result, when \\n$\\\\phi > 40$, the number of pruned normal visual tokens is not significantly higher than the number pruned in the visual sink tokens experiment when $10 < \\\\phi < 20$.\\n\\n> However, I had already considered the efficiency of pruning when raising this question. Since I lack access to specific data, I could only infer from Figure 3(a) that the ratio of normal visual tokens to visual sink tokens is approximately 4:1, while the ratio of tokens with $10 < \\\\phi < 20$ to those with $\\\\phi > 40$ is at least 5:1. Therefore, in the random visual tokens experiment, when $\\\\phi > 40$, the number of pruned normal visual tokens should be at least **four times** the number pruned in the visual sink tokens experiment when $10 < \\\\phi < 20$.\\n\\n> Yet, in the experiment, both scenarios show the **same proportion of performance degradation**, particularly when $\\\\phi > 45$, where the degradation becomes inexplicably low. This inconsistency is quite puzzling to me.\\n\\nWe are sorry for any confusion caused by our explanation. 
Our primary claim in \\\"Further Response to Reviewer P74f (3/3)\\\" was that masking the same fixed tokens across layers leads to greater performance degradation compared to masking random tokens per layer, even when a larger proportion of normal visual tokens is pruned in the latter case. (We will discuss this in more detail in the next section.) We acknowledge that the potential selection of visual sink tokens in the \\\"random visual tokens\\\" experiment, which we raised as an additional point, is not the primary reason why the performance degradation does not become significantly higher when $\\\\phi (x) > 40$. Indeed, when $\\\\phi (x) > 40$, the number of masked normal visual tokens is approximately **three times** the number of masked visual sink tokens for $10 < \\\\phi (x) < 20$, which aligns closely with the ratio inferred by the reviewer. We would like to explain why the performance degradation does not increase significantly when $\\\\phi (x) > 40$ in the next section.\"}", "{\"comment\": \"Thanks for the detailed analyses. The reviewer appreciates the efforts for the new experiments.\\n\\nThe authors have addressed the reviewer's major concern about the hyperparameter $\\\\rho$ by providing more experimental results. Specifically, they show that the value of $\\\\rho$ is quite stable across benchmarks within each type of task. The tuning method of taking 10% of the data as a validation set also resolves the reviewer's concern of directly tuning on the benchmark.\\n\\nThe authors also provide more experiments showing that the attention sink phenomenon can also be observed in newer LMMs with anyres as well as different families of LLMs.\\n\\nOverall, the authors have addressed the reviewer's major concerns. Based on the soundness and contribution of this paper, the reviewer recommends accepting this paper and has adjusted the score accordingly.\"}", "{\"title\": \"Further Response to Authors (1/3)\", \"comment\": \"I believe the author has fully addressed Q3. 
However, the conclusions drawn from Q3 have further implications for the experimental setup in Q2, and the issues in Q2 still remain.\\n\\n## **About setting attention weights to 0**\\n\\nI must admit that I am not very familiar with the characteristics of sink tokens, and I appreciate the author's response for providing me with substantial background knowledge on this topic. That said, I am still slightly puzzled. In [1], it was noted that setting the attention sink tokens proposed in the paper to zero after the softmax operation causes a significant drop in model performance. Similarly, in MLLMs, system tokens in deeper layers also exhibit characteristics of sink tokens, and pruning them likewise **leads to a dramatic performance decline.** My question is: do these two types of tokens align with the author\\u2019s definition of sink tokens? Do they exhibit the high activation values in specific dimensions as described? Furthermore, why does pruning them cause such a drastic drop in model performance?\\n\\nAdditionally, regarding the operation of setting attention to zero in [2], I revisited the paper and found that the authors did not clarify whether the experiments in Section 2.2 involved setting the attention to zero before or after the softmax operation. Upon reviewing their open-source code, specifically the file `icl.analysis.shallow_layer.py`, it appears that they set the attention to zero **before** the softmax operation.\\n\\nGiven the approaching deadline, **if the author has limited time, please feel free to temporarily disregard this question.** I believe resolving the other issue will help me adjust the rating to a positive score more quickly.\\n\\n[1] Xiao, Guangxuan, et al. \\\"Efficient streaming language models with attention sinks.\\\"\\u00a0*arXiv preprint arXiv:2309.17453*\\u00a0(2023). \\n[2]\\u00a0Wang, Lean, et al. 
\\\"Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning.\\\" Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. 2023.\"}", "{\"title\": \"Further response to reviewer p74f \\u2161 (1/3)\", \"comment\": \"We thank you for taking the time to provide us with additional questions. We have addressed them below.\\n\\n# Response to #1: About setting attention weights to 0\\n\\n> In [1], it was noted that setting the attention sink tokens proposed in the paper to zero *after* the softmax operation causes a significant drop in model performance. Similarly, in MLLMs, system tokens in deeper layers also exhibit characteristics of sink tokens, and pruning them likewise **leads to a dramatic performance decline**. My question is: do these two types of tokens align with the author\\u2019s definition of sink tokens? Do they exhibit the high activation values in specific dimensions as described? Furthermore, why does pruning them cause such a drastic drop in model performance?\\n\\nWe appreciate the insightful question about attention sinks. The initial token of LLMs (the `<bos>` token) and certain system tokens in MLLMs (including the `<bos>` token) align with our definition of sink tokens, exhibiting high activation values in sink dimensions. A significant performance drop, as reported in [1], is observed when the initial sink tokens (particularly the `<bos>` token) are excluded from the attention window. To clarify, this scenario is technically equivalent to masking the attention weights of the sink tokens before the softmax operation, as the sink tokens are not factored into the softmax calculation.\\n\\nNotably, this performance decline is a distinct property of the `<bos>` token. As shown in Figure 12 of [1], the `<bos>` token has exceptionally high attention weights (ranging from 0.4 to 1.0). 
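To make the pre-/post-softmax distinction above concrete, here is a minimal NumPy sketch with toy scores (illustrative only; not the actual attention implementation of any of the models discussed):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy pre-softmax scores for one query; token 0 stands in for a sink
# token (e.g., <bos>) with a dominant score.
scores = np.array([6.0, 2.0, 1.5, 1.0, 0.5])
attn = softmax(scores)  # the sink token takes most of the attention mass

# (a) Masking BEFORE softmax (equivalent to excluding the token from the
# attention window): the remaining weights are renormalized, so the sink
# token's large mass is redistributed over the other tokens.
pre = softmax(np.where(np.arange(len(scores)) == 0, -np.inf, scores))

# (b) Masking AFTER softmax: the sink token's weight is zeroed, the row
# no longer sums to 1, and the other weights stay exactly as they were.
post = attn.copy()
post[0] = 0.0
```

When the masked token carries a large share of the mass, variant (a) drastically changes the distribution over the remaining tokens, while variant (b) leaves them untouched.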
Consequently, masking the `<bos>` token significantly alters the attention distribution, leading to a substantial performance drop. In contrast, other sink tokens, including visual sink tokens in MLLMs, do not exhibit attention weights as high as those of the `<bos>` token (though their attention weights are still higher than those of other tokens). As a result, pruning these tokens causes minimal changes to the attention distribution, and the associated performance drop is negligible compared to that of the `<bos>` token.\\n\\n> Additionally, regarding the operation of setting attention to zero in [2], I revisited the paper and found that the authors did not clarify whether the experiments in Section 2.2 involved setting the attention to zero before or after the softmax operation. Upon reviewing their open-source code, specifically the file `icl.analysis.shallow_layer.py`, it appears that they set the attention to zero **before** the softmax operation.\\n\\nWe apologize for the misunderstanding regarding the attention zeroing operation in [2]. Thank you for pointing out and clarifying the implementation details.\\n\\n[1] Xiao, Guangxuan, et al. \\\"Efficient streaming language models with attention sinks.\\\" arXiv preprint arXiv:2309.17453 (2023).\\n\\n[2] Wang, Lean, et al. \\\"Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning.\\\" Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. 2023.\"}", "{\"title\": \"Response to Authors (1/2)\", \"comment\": \"I believe the author has successfully addressed Q1 and Q4; however, the critical issue of Q2 remains unresolved, and there is still some confusion regarding Q3.\\n\\n## **Q2. Detailed Explanation of Figure 3 (Previously Figure 4)**\\n\\nThe author\\u2019s explanation of the initial experimental results is quite surprising. I had speculated whether the author might have adjusted the attention scores *after* the softmax operation. 
If this were the case, **it would essentially disrupt the input distribution of the model entirely**, particularly given the high attention scores of certain tokens. It is difficult to believe such adjustments would have no effect on the results.\\n\\nIt\\u2019s worth mentioning that I am not familiar with the methods described in [1][2], but I am really familiar with [3]. In that study, the authors modified the attention weights *before* the softmax operation rather than afterward.\\n\\nAdditionally, regardless of whether we refer to the updated or original version of the paper, the results in Figure 3 (previously Figure 4) remain puzzling. In Figure 3(a), we clearly observe **many tokens with $\\\\tau > 40$**, while the range $10 < \\\\tau < 20$ contains relatively **few tokens**. If, as the author claims, the performance drop in Figure 3(b) occurs because normal visual tokens are pruned when $\\\\tau < 15$, leading to significant degradation, then pruning random visual tokens when $\\\\tau > 40$ should presumably **result in even more normal visual tokens being pruned**. However, at this stage, the performance degradation is **smaller** than when **pruning visual sink tokens** in the range $10 < \\\\tau < 20$. How is this possible?\\n\\nTo be frank, most other aspects of this work appear quite solid. However, the results presented in Figure 3 significantly undermine my confidence in the experimental findings. Unfortunately, even with the updated results, my concerns regarding this issue remain unresolved.\\n\\nGiven the extension of the discussion period, I will temporarily maintain my current rating while awaiting the author\\u2019s further response to Q2. I have noticed that the other reviewers have no objections to the results in Figure 3, and I welcome any discussions on this point with the other reviewers. If I have indeed misunderstood these results, I sincerely apologize in advance.\\n\\n[1] Mohebbi, Hosein, et al. 
\\\"Quantifying Context Mixing in Transformers.\\\" Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics. 2023. \\n[2] Jin, Zhuoran, et al. \\\"Cutting Off the Head Ends the Conflict: A Mechanism for Interpreting and Mitigating Knowledge Conflicts in Language Models.\\\" Findings of the Association for Computational Linguistics. 2024. \\n[3] Wang, Lean, et al. \\\"Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning.\\\" Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. 2023.\"}", "{\"title\": \"Response to Reviewer 1YMG (1/2)\", \"comment\": \"We thank the reviewer for the constructive review and insightful comments. We provide a detailed response to the reviewer's comments below. If there are any remaining concerns, please inform us, and we will address them promptly.\\n\\n# W1. Determination of Hyperparameter $\\\\rho$\\n\\n> W1. The reviewer's biggest concern is about the determination of hyperparameter $\\\\rho$. According to the paper and Fig 7(b), it is benchmark-dependent. Furthermore, a poorly chosen $\\\\rho$ can lead to performance worse than the baseline performance. This parameter greatly limited the applicability and generalizability of the proposed method.\\n\\n(**Benchmark-dependent $\\\\rho$**) We appreciate the reviewer for raising this question. To clarify, we use the same $\\\\rho$ value for all the benchmarks in the same task type (0.8 for general vision-language task, 0.5 for visual hallucination task, and 0.9 for vision-centric task). In the paper, we use the terms 'task' and 'benchmark' differently. A 'task' refers to the type of problem (e.g., general vision-language task, visual hallucination task, and vision-centric task) and a 'benchmark' refers to a specific dataset (e.g., VQAv2, GQA, LLaVA-W, MM-Vet). 
A single $\\\\rho$ value can improve the performance of various LMMs on all benchmarks for the same task type. The purpose of presenting Fig 7(b) (revised as 6(b)) is to obtain the plausible $\\\\rho$ value for all the benchmarks in the same task type using only a single benchmark per task. We will further explain the rationale and justification for the choices of $\\\\rho$ in W2.\\n\\n(**Worse performance with poorly chosen $\\\\rho$**) We acknowledge that a poorly chosen $\\\\rho$ can lead to performance worse than the baseline performance. However, in the reasonable range of $\\\\rho$ values, the proposed method robustly improves the performance of various LMMs on various benchmarks. For example, if $\\\\rho \\\\geq 0.5$, VAR can consistently improve the performance on MME (general vision-language task), POPE (visual hallucination task), and CV-Bench (vision-centric task). Therefore, we argue that, to some extent, the proposed method remains applicable across various benchmarks within plausible ranges of $\\\\rho$ values. We have discussed the hyperparameter $\\\\rho$ more thoroughly in Sec 6.3. and Appendix B.2.\\n\\n# W2. Tuning of Hyperparameter $\\\\rho$\\n\\n> W2. The tuning of $\\\\rho$ on the benchmarks to find the best value is problematic and is against the principle of machine learning.\\n\\nSince there is no train/validation set for most of the benchmarks, we alternatively determine $\\\\rho$ based on the single benchmark per task and use the determined $\\\\rho$ for all the benchmarks in the same task type. To validate that the tuning process of $\\\\rho$ is not benchmark-sensitive, we conducted two additional experiments. First, we used a \\u201cpseudo-validation set\\u201d by randomly sampling 10% of the samples from the benchmark and determined $\\\\rho$ using the partial samples. Second, we applied another benchmark in the same task type to determine $\\\\rho$. 
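The first procedure (selecting rho on a 10% pseudo-validation split) can be sketched as follows; `evaluate` is a hypothetical stand-in for running the model with VAR at a given rho and returning the benchmark metric:

```python
import random

def select_rho(samples, evaluate, candidates, val_frac=0.1, seed=0):
    """Pick rho on a random pseudo-validation split of a benchmark."""
    rng = random.Random(seed)
    val = rng.sample(samples, max(1, int(len(samples) * val_frac)))
    # Evaluate each candidate rho on the held-out split and keep the best.
    return max(candidates, key=lambda rho: evaluate(val, rho))

# Toy stand-in metric peaking at rho = 0.8 (the value selected for the
# general vision-language task); real scores come from model evaluation.
def toy_evaluate(val_samples, rho):
    return -abs(rho - 0.8)

best_rho = select_rho(list(range(100)), toy_evaluate,
                      candidates=[i / 10 for i in range(11)])
```

The selected value is then reused for every benchmark of the same task type, so no per-benchmark tuning is needed.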
The results indicate that the same $\\\\rho$ value can be obtained from partial samples of the benchmark or another benchmark in the same task type. The results indicate that we can find the applicable $\\\\rho$ value with minimal tuning. We added the clarification about the tuning of $\\\\rho$ in the manuscript (Sec 6.3. & Appendix B.2.) and include the results of the experiments with 10% of the samples in the benchmark (Appendix B.2.). The results are also provided here for ease of reference.\\n\\n| $\\\\rho$ | 0 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | 1 (baseline) |\\n|---|---|---|---|---|---|---|---|---|---|---|---|\\n| **General Vision-Language Task** | | | | | | | | | | | |\\n| MME (10%) | 144.00 | 144.27 | 145.82 | 147.28 | 149.33 | 151.33 | 151.58 | 151.85 | 152.59 | 152.20 | 148.50 |\\n| TextVQA (10%) | 57.9987 | 58.1770 | 58.2403 | 58.2501 | 58.2530 | 58.3042 | 58.3898 | 58.5275 | 58.6328 | 58.5320 | 58.1200 |\\n| **Visual Hallucination Task** | | | | | | | | | | | |\\n| POPE (10%) | 85.9012 | 85.9012 | 86.0124 | 86.1002 | 86.4020 | 86.5283 | 86.3606 | 86.3247 | 86.2029 | 85.9930 | 85.8700 |\\n| 1-CHAIRs (10%) | 54.9705 | 55.0330 | 55.3408 | 56.1380 | 56.3189 | 56.5543 | 56.2280 | 55.5402 | 55.0523 | 54.7548 | 54.7021 |\\n| **Vision-centric Task** | | | | | | | | | | | |\\n| CV-Bench-2D (10%) | 56.0034 | 56.0146 | 56.8823 | 56.8842 | 56.9140 | 56.9430 | 57.0327 | 57.1328 | 57.5475 | 57.6000 | 56.1300 |\\n| CV-Bench-3D (10%) | 58.2923 | 58.5538 | 58.5703 | 58.6111 | 58.6550 | 58.7530 | 58.7962 | 58.9041 | 58.9542 | 59.0000 | 58.2800 |\"}", "{\"title\": \"Final General Response (1/2)\", \"comment\": \"Dear AC and Reviewers,\\n\\nWe appreciate the time and effort the reviewers have dedicated to providing detailed feedback on our work. 
During the discussion period, we have received valuable suggestions, such as `(1)` additional experiments on the latest models and Anyres models, `(2)` a more thorough discussion of visual sink tokens with analysis on large-scale datasets, and `(3)` experiments on the robustness of hyperparameters. This feedback has significantly improved our paper, and we are pleased to note that two reviewers have increased their scores from 5 to 6 and 6 to 8, respectively. We are grateful to all the reviewers for giving us the opportunity to strengthen our work and for finding our research interesting and valuable. \\n\\nIn particular, we had a detailed discussion with Reviewer P74f regarding the interpretation of Figure 3(b). **The primary goal of this experiment is to demonstrate that the visual sink token has a negligible impact on the model's performance**. The key point is that the model remains stable even when the sink token is removed during the inference process, indicating that it does not play a significant role in generating image-related responses. Specifically, this is explained by **the stable performance of the model in the range of $\\\\tau = 20$ to $\\\\tau = 50$, as shown by the red line in Figure 3(b)**. Additionally, through our discussions with the reviewers, we have provided further evidence to support the claim that the visual sink token has a minimal impact on model performance:\\n\\n1. Removing visual sink tokens does not significantly affect performance, whether *before* or *after* the softmax calculation. (Response to Reviewer P74f (2/3))\\n2. The final contribution value of the visual sink token $\\\\Vert \\\\alpha ^ {\\\\ell, h} _ {i, j} \\\\boldsymbol{x} _ j ^ {\\\\ell-1} \\\\boldsymbol{W} _ {OV} ^ {\\\\ell, h} \\\\Vert$ is significantly lower than that of other visual tokens. (Further Response to Reviewer P74f (2/3))\\n3. Visual sink tokens are mainly located in semantically meaningless areas. (Response to Reviewer b7X3 (2/3), Appendix A.2)\\n4. 
Visual sink tokens exhibit similar characteristics to text sink tokens, which are known to have a negligible impact on the model (Response to Reviewer KaLi (1/2), Section 4.2, Appendix A.2).\\n\\nBased on these points, we conclude that the visual sink token has a negligible impact on model performance, which leads us to present Visual Attention Redistribution (VAR) based on this conclusion.\\n\\nHowever, in the discussion with Reviewer P74f, a disagreement arose regarding the experiment in Figure 3(b), which **diverged somewhat from the core focus of our experiment**. From our understanding, the reviewer's main concern is why the performance degradation in the \\\"visual sink token\\\" experiment (red line) between $10 < \\\\tau < 20$ is more significant than that in the \\\"random visual token\\\" experiment (blue line) for $\\\\tau > 40$, even though the number of masked tokens is larger in the latter case. Comparing the performance degradation between the two experiments in different ranges of $\\\\tau$ leads to a fuzzy interpretation, as the two experiments are not controlled for the same number of masked tokens, token positions, dynamic vs. fixed masking, and other factors (Further Response to Reviewer P74f (3/3) & Further Response to Reviewer P74f \\u2161 (3/3)). Nevertheless, based on our insights into the visual sink token and related works, we have made our best effort to clarify the interpretation of the experiment.\\n\\nIf we understand the reviewer's response correctly, the main reason for the disagreement is that the reviewer's own experiment results (fixed-position pruning) are not consistent with our experimental results. In the reviewer's experiment, pruning visual tokens did not lead to significant performance degradation, whereas in our experiment, masking resulted in a performance drop. Despite our best efforts, we were unable to replicate the reviewer's results. 
Furthermore, other studies that conducted masking experiments in similar settings reported significant performance/probability drops with only a few masked tokens [1, 2], which supports our findings.\\n\\nIn this regard, we kindly request that the AC consider the discussion with Reviewer P74f and our responses in the context of the entire paper. The reviewer also mentioned that \\\"*most other aspects of this work appear quite solid*\\\" (Response to Authors (1/2) of Reviewer P74f). We hope that the reviewer's concerns have been addressed in our final response, and we ask the AC to carefully consider the overall aspects of our work.\"}", "{\"title\": \"Response to Reviewer b7X3 (3/3)\", \"comment\": \"# W3 / Q2. More Experiments and Analyses with various LMMs\\n\\n> 2.a. The author only chose LLaVA and VILA for evaluation, but the training data for LLaVA is much smaller compared to high-performance open-source models. Therefore, it is necessary to add one or two such models for evaluation, such as Qwen2-VL and InternVL2.\\n\\nWe agree that more evaluation on various models, especially high-performance open-source models, is essential. We have added Qwen2-VL and InternVL2 to the main experiments in Table 1, 2, and 3. The results demonstrate that the proposed method consistently improves the performance on various models.\\n\\n> 2.b. LLaVA-HD has several times more image tokens compared to LLaVA, so I speculate that the author's model should perform better on this backbone. However, from Table 1, it appears that the improvement of the proposed method on LLaVA-HD is not as significant as on LLaVA.\\n\\nThe number of visual tokens in LLaVA-HD is larger than that in LLaVA, and there are therefore more visual tokens over which to redistribute the attention budget $\\\\boldsymbol{\\\\Omega}$. As evidence, we compared the ratio of the number of visual tokens to the number of sink tokens in LLaVA and LLaVA-HD. 
The ratio represents the average number of visual tokens that need to have attention weights allocated per sink token.\\n\\n| Model | Ratio |\\n| :--------------: | :------------: |\\n| LLaVA-1.5-13B | 122.6 (\\u00b1 27.3) |\\n| LLaVA-1.5-HD-13B | 258.3 (\\u00b1 60.0) |\\n\\nThe results show that the ratio in LLaVA-HD is larger than that in LLaVA, which implies that allocated attention budget per visual token is relatively low in LLaVA-HD. Therefore, the improvement of the proposed method on LLaVA-HD is not as significant as on LLaVA.\\n\\nHowever, we can allocate more attention weights from sink tokens to visual tokens by adjusting the hyperparameter $p$. Hyperparameter $p$ controls the portion of the attention weights from sink tokens into the attention budget. To show that the performance on LLaVA-HD can be further improved by allocating more attention weights from sink tokens, we have conducted an additional experiment by adjusting the hyperparameter $p$. If there is a margin to improve the performance on LLaVA-HD by allocating more attention weights from sink tokens, the optimal $p$ value would be larger than the default value $p=0.6$. The results are shown below.\\n\\n| p | 0 (Baseline) | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | *0.6* | 0.7 | **0.8** | 0.9 | 1 |\\n| :-----------------------: | :----------: | :-----: | :-----: | :-----: | :-----: | :-----: | :-------: | :-----: | :---------: | :-----: | :-----: |\\n| MME (General VQA) | 1500.10 | 1501.28 | 1501.28 | 1501.28 | 1505.20 | 1505.20 | *1505.20* | 1513.37 | **1513.37** | 1510.84 | 1505.20 |\\n| POPE 10% (Hallucination) | 87.10 | 87.10 | 87.11 | 87.35 | 87.50 | 87.55 | *87.70* | 87.71 | **87.72** | 87.50 | 87.43 |\\n| CV-Bench (Vision-Centric) | 64.20 | 64.58 | 64.60 | 65.05 | 65.14 | 65.25 | *65.30* | 66.33 | **66.50** | 66.28 | 65.35 |\\n\\nThe results indicate that the optimal $p$ value is $0.8$, which is larger than the default value $p=0.6$. 
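For reference, the role of `p` can be sketched on a single attention row (a simplified illustration under our understanding of the method; the actual implementation operates per head and only on image-centric heads):

```python
import numpy as np

def redistribute(attn_row, sink_idx, visual_idx, p=0.6):
    """Move a fraction p of the sink tokens' attention mass (the budget)
    onto the remaining visual tokens, proportionally to their weights."""
    out = attn_row.copy()
    budget = p * out[sink_idx].sum()
    out[sink_idx] *= (1.0 - p)
    others = [i for i in visual_idx if i not in sink_idx]
    out[others] += budget * out[others] / out[others].sum()
    return out

row = np.array([0.50, 0.05, 0.10, 0.05, 0.30])  # toy row; token 0 is a sink
new_row = redistribute(row, sink_idx=[0], visual_idx=[0, 1, 2, 3], p=0.6)
```

The total mass of the row is preserved; a larger `p` moves more weight from the sink tokens onto the remaining visual tokens, which is why increasing `p` helps when the per-token budget is diluted, as in LLaVA-HD.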
The shift of the optimal value to $p=0.8$ supports our claim that the relatively low improvement on LLaVA-HD is due to its larger number of visual tokens, and that performance can be further improved by adjusting the hyperparameter $p$. Note that we still report the results with the default $p$ value $p=0.6$ in the main paper for consistency. \\n\\n# + Visual Attention Sink on Training\\n\\n> Additionally, I believe that further analysis on how to utilize this phenomenon during model training could enhance the impact of the article.\\n\\nWe agree that further analysis on how to utilize this phenomenon during model training is an interesting direction for research. If we can remove the visual sink tokens during training, the model can focus more on the important tokens. Introducing register tokens [1] or attention bias terms [2] during training can be a potential solution. However, since the base LLM already has the attention sink phenomenon, simply retraining the LVLM on top of an off-the-shelf LLM is not sufficient. Rather, we may need to train the LLM from scratch with register tokens, which is computationally prohibitive given our current resources. We have added this direction as future work in Sec 7.\\n\\n[1] Darcet, Timoth\\u00e9e, et al. \\\"Vision Transformers Need Registers.\\\" The Twelfth International Conference on Learning Representations. 2024.\\n\\n[2] Sun, Mingjie, et al. \\\"Massive Activations in Large Language Models.\\\" Conference on Language Modeling (COLM), 2024.\"}", "{\"title\": \"Response to Reviewer b7X3 (1/3)\", \"comment\": \"We thank the reviewer for their constructive feedback and helpful observations. Our detailed responses to the comments are outlined below. If there are any remaining concerns, please inform us, and we will address them promptly.\\n\\n# W1. Comparison of Visual Attention Sinks and Summary Tokens in OPERA\\n\\n> W1. The phenomenon of visual attention sink appears to be similar to the summary tokens proposed in OPERA[1]. 
Summary tokens imply that higher attention weights are assigned to tokens without semantic information, such as ',' and '\\n'. The visual sink token seems to suggest that within visual tokens, there are also some tokens that play a similar role to summary tokens.\n\nThank you for the insightful and interesting comment. The visual sink tokens and the summary tokens may seem similar in terms of having high attention weights, but they are inherently different in nature. The summary tokens, or the anchor tokens in OPERA [1], were originally suggested by [2]. According to [2], the anchor tokens (1) in shallow layers, gather the information of demonstrations to form semantic representations for deeper layers, and (2) in deep layers, provide the information from label words to form the final prediction. Therefore, high attention weights to the summary tokens emerge in the middle to later layers. In contrast, the visual sink tokens are observed from the early layers to the late layers. We also tried to investigate the summary tokens (e.g., \u2018,\u2019 and \u2018.\u2019) on some samples of the CHAIR dataset in LLaVA-1.5-7B. However, there is no massive activation on sink dimensions $\\mathcal{D}_{\\textsf{sink}} = \\lbrace 1415, 2533 \\rbrace$, indicating that the summary tokens and visual sink tokens are different. Furthermore, visual sink tokens are almost useless, as discussed in Sec 4.2 and Sec 4.3, while the summary tokens play a crucial role in transferring the information of the whole sentence.\n\n[1] Huang, Qidong, et al. \"OPERA: Alleviating Hallucination in Multi-Modal Large Language Models via Over-Trust Penalty and Retrospection-Allocation.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\n\n[2] Wang, Lean, et al. 
\\\"Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning.\\\" Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. 2023.\"}", "{\"summary\": \"This paper investigates \\\"visual attention sinks\\\" in LMMs. The authors pointed out that LMMs often allocate disproportionate attention to certain visual tokens, termed \\\"visual sink tokens,\\\" regardless of their relevance to the corresponding text. This paper proposes Visual Attention Redistribution (VAR), a method that identifies and reallocates attention from these sink tokens to more pertinent visual information, enhancing model performance across various vision-language tasks. The authors demonstrate VAR's effectiveness without requiring additional model training.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The identification of visual attention sinks draws parallels with attention sinks in language models, providing a novel insight into the functioning of LMMs.\\n1. Overall well written and easy to follow. Fig 1 and Fig 2 provide a clear and strong motivation. Fig 3-5 also provide a step-by-step motivation and the design choices of the proposed method.\\n1. The method improves performance across multiple benchmarks, including general vision-language tasks, visual hallucination tasks, and vision-centric tasks.\\n1. The paper evaluates VAR on a wide range of benchmarks and conducts a series of quantitative and qualitative analyses, validating the effectiveness of the proposed VAR method.\", \"weaknesses\": \"1. The reviewer's biggest concern is about the determination of hyperparameter $\\\\rho$. According to the paper and Fig 7(b), it is benchmark-dependent. Furthermore, a poorly chosen $\\\\rho$ can lead to performance worse than the baseline performance. This parameter greatly limited the applicability and generalizability of the proposed method.\\n\\n1. 
The tuning of $\\\\rho$ on the benchmarks to find the best value is problematic and is against the principle of machine learning.\", \"questions\": \"1. Do newer LMMs with anyres and with visual encoder tuned jointly also have the characteristic of visual attention sinks? Do the observation and analyses generalize across LMMs with different language models and sizes?\\n\\n1. In Fig 8, it looks like LLaVA + VAR has more visual sink tokens qualitatively.\\n\\n1. Please include the reference of LLaVA-1.5-HD-13B.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
7tpMhoPXrL
Forget Vectors at Play: Universal Input Perturbations Driving Machine Unlearning in Image Classification
[ "Changchang Sun", "Ren Wang", "Yihua Zhang", "Jinghan Jia", "Jiancheng Liu", "Gaowen Liu", "Sijia Liu", "Yan Yan" ]
Machine unlearning (MU), which seeks to erase the influence of specific unwanted data from already-trained models, is becoming increasingly vital in model editing, particularly to comply with evolving data regulations like the "right to be forgotten". Conventional approaches are predominantly model-based, typically requiring retraining or fine-tuning the model's weights to meet unlearning requirements. In this work, we approach the MU problem from a novel input perturbation-based perspective, where the model weights remain intact throughout the unlearning process. We demonstrate the existence of a proactive input-based unlearning strategy, referred to as the forget vector, which can be generated as an input-agnostic data perturbation and remains as effective as model-based approximate unlearning approaches. We also show that multiple given forget vectors (e.g., each targeting the unlearning of a specific data class) can be combined through simple arithmetic operations (e.g., linear combinations) to generate new forget vectors for unseen unlearning tasks (e.g., targeting the unlearning of an arbitrary subset across all classes). An additional advantage of our proposed forget vector approach is its parameter efficiency, as it eliminates the need for updating model weights. We conduct extensive experiments to validate the effectiveness of the forget vector and its arithmetic for MU in image classification against a series of model-based unlearning baselines.
[ "Machine Unlearning", "Image Classification", "Universal Input Perturbations" ]
https://openreview.net/pdf?id=7tpMhoPXrL
https://openreview.net/forum?id=7tpMhoPXrL
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zO5i8lCaCt", "XFF2L0pDPg", "RnJNT4cAX4", "RAZIyDffXG", "Pn1UV2qM38", "L7sgMYSevQ" ], "note_type": [ "official_review", "comment", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1730364820344, 1731651974184, 1730693506908, 1730665110476, 1730668610578, 1730611304420 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5198/Reviewer_bvme" ], [ "ICLR.cc/2025/Conference/Submission5198/Authors" ], [ "ICLR.cc/2025/Conference/Submission5198/Reviewer_u2rZ" ], [ "ICLR.cc/2025/Conference/Submission5198/Reviewer_TiLm" ], [ "ICLR.cc/2025/Conference/Submission5198/Reviewer_iqGW" ], [ "ICLR.cc/2025/Conference/Submission5198/Reviewer_HvW2" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes a novel input perturbation-based method for machine unlearning. It generates an input-agnostic data perturbation named the forget vector via an optimization problem, requiring no model parameter tuning. The experimental results demonstrate its effectiveness compared to model-based MU methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The method is the first attempt at an input-based MU method, offering new insight.\nIt is easy to understand.\nThe computational overhead of input-based MU is much less than that of model-based MU.\", \"weaknesses\": \"For method: Although the input-based method is efficient compared to model-based methods, I think the core of MU is to let the model forget specific information instead of deceiving the model into predicting wrong information on forget data. Therefore, I think the paper does not achieve real forgetting.\n\nFor experiment: The results in Table 2 and Table 3 show that the test accuracy of input-based MU is much lower than that of other methods, which is a serious drawback of the input-noise-based method. 
Moreover, the prominent advantage of input-based MU is reflected in the MIA metric, while other metrics are comparable to or even worse than those of model-based methods. Maybe the reason is that MIA is sensitive to input noise. This is easy to understand because a sample with noise is more likely to be identified as out-of-distribution data when compared to a clean sample. Therefore, the performance of input-based MU is open to discussion.\", \"questions\": \"see weakness\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"The paper introduces an innovative approach to machine unlearning (MU) by focusing on input perturbations, using \\\"forget vectors\\\". They can facilitate unlearning without modifying the model\u2019s parameters, which contrasts with traditional model-based unlearning methods that often require retraining or fine-tuning. The forget vectors are designed as input-agnostic perturbations, applicable across different data instances.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The paper presents a unique shift from model-based MU techniques to an input-perturbations methodology. This approach does not involve updating model weights, making it computationally efficient.\", \"The ability to create new forget vectors through arithmetic combinations of pre-learned vectors showcases the method\u2019s flexibility and adaptability.\"], \"weaknesses\": [\"While unlearning is highlighted as important, the paper would benefit from demonstrating the method in a real-world application, such as mitigating bias or removing harmful data to better showcase its utility.\", \"The experiments are limited to small datasets and simpler models. 
It would be insightful to understand how this method performs on more complex datasets and larger-scale models.\", \"Although the focus is on image classification, the paper could discuss potential extensions or insights into how this approach might generalize to other tasks.\", \"There are some minor typo issues in lines 243, 405, and 310.\"], \"questions\": \"See weaknesses above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The proposed testing approach for Machine Unlearning (MU), referred to as \\\"Forget Vectors,\\\" introduces an input-based perturbation strategy to achieve effective data removal without altering the model's parameters. Instead of conventional model-based retraining, Forget Vectors provide a parameter-efficient, scalable approach that applies universal perturbations to inputs, achieving data unlearning while maintaining model performance on non-forgotten data.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"\\u2022\\tParameter Efficiency: Forget Vectors circumvent the need for parameter updates, significantly reducing computational requirements compared to model retraining or fine-tuning.\\n\\u2022\\tNovel Unlearning Objective: The modified objective function introduces a perspective on adversarial examples, extending their utility to enable unlearning. \\n\\u2022\\tPreserves Model Integrity: By focusing on input perturbations, the approach retains the model\\u2019s original weights, thus ensuring model integrity and reducing risks associated with full retraining.\", \"weaknesses\": \"\\u2022\\tGDPR Compliance Concerns: The paper\\u2019s reliance on approximate unlearning without theoretical guarantees presents a significant shortfall. While approximate unlearning may be practical, it falls short in scenarios where data privacy and regulatory compliance are non-negotiable. 
Without provable guarantees, it is questionable whether this method can satisfy GDPR requirements for data erasure. This gap undermines the core purpose of Machine Unlearning in privacy-centered contexts, where the \\\"right to be forgotten\\\" demands more than a probabilistic assurance.\n\u2022\tScalability to Other Domains: The Forget Vector approach is developed and validated primarily for image classification tasks, potentially limiting its application in NLP or other non-visual domains where input perturbations may be less effective.\n\u2022\tDependence on MIA (Membership Inference Attack) Testing via U-LiRA: While the paper uses MIA testing as a metric for unlearning effectiveness, the effectiveness of MIA testing itself is not sufficiently robust for privacy guarantees. Additionally, the use of U-LiRA [1] is recommended.\n\u2022\tSensitivity to Data Shifts: From the paper, the effectiveness of unlearning decreases under certain data shifts, which may hinder the reliability of Forget Vectors in dynamic data environments or adversarial settings.\", \"questions\": \"1.\tHow might the Forget Vector approach be adapted or expanded to suit domains beyond image classification, such as NLP or time-series data?\n2.\tAre there any potential risks of unintended consequences (e.g., degradation of model utility) when applying compositional unlearning for arbitrary data subsets? Can an adversary compromise the utility by choosing the \\\"right\\\" subset?\n\n[1] Kurmanji, Meghdad, Peter Triantafillou, Jamie Hayes, and Eleni Triantafillou. 
\\\"Towards unbounded machine unlearning.\\\" Advances in neural information processing systems 36 (2024).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"In this paper, the authors introduce an input perturbation technique called the \\\"forget vector,\\\" which enables machine unlearning (MU) without altering model weights. The authors claim that the Forget vectors are versatile and can be combined through arithmetic operations to unlearn new, unseen tasks, such as removing specific subsets of data across classes. With experiments in image classification, they have shown that the proposed approach is parameter-efficient, significantly reducing the need for model reconfiguration while maintaining MU effectiveness.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"Strengths:\", \"The idea of forgetting vectors for machine unlearning is new and interesting.\", \"The main idea of the paper is clearly presented in most parts.\"], \"weaknesses\": [\"Weakness:\", \"The paper claims to address class forgetting and random sample forgetting in machine unlearning. It is not clear if it can address the 'sample unlearning' presented in [1]. Experiments and comparisons with the baseline are recommended to show such unlearning.\", \"The paper did not cite and compare with many established methods in MU area. For example, Deep Unlearning [2] performs MU without iterative fine-tuning or retraining. This and similar methods are not cited as related works and are not compared in the paper.\", \"The scale of the experiments is very limited. In this paper authors only showed unlearning when the model is trained on up to 10 classes. However, [2] already showed that class unlearning is possible with 1k classes from the Imagenet dataset. 
It is not clear if the proposed method is scalable to such a large number of classes.\", \"No calculation, discussion, or comparison of computational cost is presented in the paper.\", \"[1] Meghdad Kurmanji, Peter Triantafillou, and Eleni Triantafillou. Towards unbounded machine unlearning. arXiv preprint arXiv:2302.09880, 2023.\n[2] Kodge, S., Saha, G., Roy, K.: Deep unlearning: Fast and efficient training-free approach to controlled forgetting. arXiv preprint arXiv:2312.00761 (2023). https://openreview.net/pdf?id=BmI5p6wBi0\"], \"questions\": [\"It is not clear how Compositional Unlearning works. Please revise this part to improve the clarity of the paper.\", \"Can the method unlearn some 'specific samples' from the pre-trained model?\", \"Provide a comparison with SoTA works (see above). I suggest experiments and comparisons when random samples, specific samples, and a specific class are unlearned from the 1k ImageNet dataset.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"Instead of a model-based MU approach, this paper proposes a perturbation-based method called forget vectors. Forget vectors are shown to be as effective as previous model unlearning, with better parameter efficiency and more flexibility by leveraging vector properties (for random data unlearning). The motivation seems to be preventing the performance degradation of the model that has been unlearned.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Sec 4 is insightful, establishing preliminary experiments and evaluation settings, as well as gaining fundamental observations.\n\n2. The idea is novel to me, which I interpret as applying learnable mask vectors to the data. The intuition also aligns with recent studies on low-rank learning.\n\n3. The paper writing is good, easy to follow.\", \"weaknesses\": \"1. 
The CIFAR-10+ResNet18 and IN-10+VGG16 have roughly the same parameters/data pixels ratio (e.g., for the first setting: 11M/(32x32x3x50k)$\\approx$0.07). But Table 1 shows that they can have different behaviors; why? Is it more data-dependent? It also makes me curious about more over-parameterized settings as well as larger-scale experiments, as I am not sure if a generalizable conclusion across or related to scales (in terms of model and data size) can be drawn.\n\n2. I find compositional unlearning insightful, which may benefit future work, but the experiments are lacking. It is really crucial to demonstrate the compositional unlearning's ability under distribution shift, e.g., applying learned vectors from one data domain to another, so that the authors can prove the claims. From the given equations, I am unsure about whether combining $K$ trained class-specific perturbation vectors in one domain can be applied to another data domain effectively. Is the dimension of $w_i$ 1? If so, that is even worse because we only tune $K$ parameters for shifting domains, which lacks theoretical guarantees on its learnability. Maybe comparing three settings: 1) fine-tuning to another domain, 2) training forget vectors in another domain from scratch, 3) zero-shot, can help us analyze these aspects, and it might also depend on how far apart the two domains are.\", \"questions\": \"Please refer to the weakness section, where I expect two sets of experiments, and weakness 2) is more important than 1) to me. I am concerned about whether 1) this approach scales up, 2) this approach is not domain-specific. The idea is good, I find it interesting, thanks for the hard work.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
7tOc6h8bea
Adaptive Inference-Time Compute: LLMs Can Predict if They Can Do Better, Even Mid-Generation
[ "Rohin Manvi", "Anikait Singh", "Stefano Ermon" ]
Inference-time computation is a powerful paradigm to enhance the performance of large language models (LLMs), with Best-of-N sampling being a widely used technique. However, this method is computationally expensive, requiring both (1) an external reward model and (2) the generation of multiple samples. In this work, we introduce a new generative self-evaluation scheme designed to adaptively reduce the number of generated samples while maintaining or even improving performance. We use a generative reward model formulation, allowing the LLM to predict mid-generation the probability that restarting the generation will yield a better response. These predictions are obtained without an external reward model and can be used to decide whether or not to generate more samples, prune unpromising samples early on, or to pick the best sample. This capability is very inexpensive as it involves generating a single predefined token. Trained using a dataset constructed with real unfiltered LMSYS user prompts, Llama 3.1 8B's win rate against GPT-4 on AlpacaEval increases from 21\% to 34\% with 16 samples and math performance on GSM8K improves from 84\% to 91\%. By sampling only when the LLM determines that it is beneficial to do so and adaptively adjusting temperature annealing, we demonstrate that 74\% of the improvement from using 16 samples can be achieved with only 1.2 samples on average. We further demonstrate that 50–75\% of samples can be pruned early in generation with minimal degradation in performance. Overall, our methods enable more efficient and scalable compute utilization during inference for LLMs.
[ "LLMs", "inference-time", "inference-time efficiency", "Best-of-N", "self-evaluation" ]
Reject
https://openreview.net/pdf?id=7tOc6h8bea
https://openreview.net/forum?id=7tOc6h8bea
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vaMWgeqn5G", "ng4IPLHNtV", "lSL8lOHP3d", "kvW4TOgDZ4", "jSZbCbxnG4", "iW4W9a3Xuu", "iTR8GznsfM", "gteHpWIsMk", "cJLGI7BjT7", "anSS47zjnL", "afjCLhKH1X", "aWxRh9dxic", "XjltuYLZLl", "Nbbd8uG91A", "JvnNJxGpeI", "JdXpUxPMce", "I6HlCYb1o9", "GHxowNwGFs", "FR7IrYouUd", "EU19LM665b", "9pTaNOEeW1", "9NSXOyvsEK", "8vMLdxr41R", "6v2mMZojH8", "5GpwHrkQQP", "4TiSZFRbho", "3Koc9sJ7vT" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732909914408, 1732909982403, 1732699705688, 1732564176503, 1733216677532, 1732690325096, 1735466705284, 1737524266349, 1732392104349, 1732393084886, 1732393558889, 1732394639257, 1730396765624, 1732668823835, 1732724890708, 1732660384720, 1732757820693, 1732644245774, 1732394736976, 1730694712832, 1732389878798, 1733155653353, 1730569633335, 1730452919897, 1732776323553, 1732567577263, 1732724916657 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13535/Authors" ], [ "ICLR.cc/2025/Conference/Submission13535/Authors" ], [ "ICLR.cc/2025/Conference/Submission13535/Reviewer_Pqum" ], [ "ICLR.cc/2025/Conference/Submission13535/Reviewer_pws6" ], [ "ICLR.cc/2025/Conference/Submission13535/Authors" ], [ "ICLR.cc/2025/Conference/Submission13535/Authors" ], [ "ICLR.cc/2025/Conference/Submission13535/Area_Chair_eAVL" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission13535/Authors" ], [ "ICLR.cc/2025/Conference/Submission13535/Authors" ], [ "ICLR.cc/2025/Conference/Submission13535/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission13535/Authors" ], [ "ICLR.cc/2025/Conference/Submission13535/Reviewer_Pqum" ], [ "ICLR.cc/2025/Conference/Submission13535/Authors" ], [ "ICLR.cc/2025/Conference/Submission13535/Authors" ], [ "ICLR.cc/2025/Conference/Submission13535/Reviewer_Pqum" ], [ "ICLR.cc/2025/Conference/Submission13535/Reviewer_PBQS" ], [ "ICLR.cc/2025/Conference/Submission13535/Reviewer_6jZH" ], [ "ICLR.cc/2025/Conference/Submission13535/Authors" ], [ "ICLR.cc/2025/Conference/Submission13535/Reviewer_pws6" ], [ "ICLR.cc/2025/Conference/Submission13535/Authors" ], [ "ICLR.cc/2025/Conference/Submission13535/Reviewer_Pqum" ], [ "ICLR.cc/2025/Conference/Submission13535/Reviewer_PBQS" ], [ "ICLR.cc/2025/Conference/Submission13535/Reviewer_6jZH" ], [ "ICLR.cc/2025/Conference/Submission13535/Authors" ], [ "ICLR.cc/2025/Conference/Submission13535/Authors" ], [ "ICLR.cc/2025/Conference/Submission13535/Authors" ] ], "structured_content_str": [ "{\"title\": \"Adding PRM Experiment\", \"comment\": \"This is a very helpful suggestion! This allows us to compare our method to a strong value model instead of a domain-specific PRM. Following what you described, we finetuned ArmoRM (the same reward model we used to create our preference data) to predict the reward given truncated generations. This is a very strong baseline, as ArmoRM was already trained on 1 million preferences and we further finetuned it to predict the reward directly instead of having to predict $P(Win \\cup Tie)$ as our model is trained to do. Compared to Best-of-N, we found that this baseline does allow for some savings in FLOPs, with the pruning of less desirable responses. However, this gain is completely overshadowed by the additional cost of having to process all tokens a second time due to the lack of KV cache. This results in our method still saving almost **2x** more FLOPs. You can find an updated performance vs FLOPs graph **[here](https://imgur.com/a/pdQ878w)**. 
We will add these results to our paper in the final revision. We would be most grateful if you would consider upgrading your score, given that your concerns have now been addressed.\"}", "{\"comment\": \"> There are competitive methods out there, as highlighted by Pqum, that have not been considered nor compared against in this paper. Such comparison is important for the type of paper presented here.\n\nThank you for this comment. We now include the baseline the reviewer suggested in the paper. We copy the response below for your convenience:\n\nThis is a very helpful suggestion! This allows us to compare our method to a strong value model instead of a domain-specific PRM. Following what you described, we finetuned ArmoRM (the same reward model we used to create our preference data) to predict the reward given truncated generations. This is a very strong baseline, as ArmoRM was already trained on 1 million preferences and we further finetuned it to predict the reward directly instead of having to predict $P(Win \\cup Tie)$ as our model is trained to do. Compared to Best-of-N, we found that this baseline does allow for some savings in FLOPs, with the pruning of less desirable responses. However, this gain is completely overshadowed by the additional cost of having to process all tokens a second time due to the lack of KV cache. This results in our method still saving almost **2x** more FLOPs. You can find an updated performance vs FLOPs graph **[here](https://imgur.com/a/pdQ878w)**. We will add these results to our paper in the final revision. \n\nWe would be most grateful if you would consider upgrading your score, given that your concerns have now been addressed.\"}", "{\"comment\": \"Thank you for the additional comment;\n\n\\\"learning an accurate PRM is challenging\\\" -> actually I believe that taking a given RM trained on an offline dataset, and then finetuning it on the online mid-generation would be a competitive and simple baseline. 
Without this ablation, I am afraid we lack real information about the practicality of using the LLM to self evaluate.\"}", "{\"title\": \"Reviewer Response\", \"comment\": \"Thank you for the detailed responses and the overall clarification of the contributions of this work.\\n\\nHaving read the responses from the authors, it is now my understanding that the main contribution of this work is to propose a self-evaluation method for early pruning and adaptive sampling. The trade-off of adaptive sampling is that it introduces additional latency due to serial processing. Although very helpful, it seems that FLOPS, although a common metric for efficiency, may only highlight the strength in terms of the amount of computation necessary, but does not accurately capture the latency aspect. \\n\\nThe authors also mention that they test on MATH 500, but would you please point me to where I can find the new experimental results?\\n\\nBased on this discussion, I will raise my score to a 6.\"}", "{\"title\": \"Response to Reviewer Pqum\", \"comment\": \"Thank you for your continued engagement and feedback. To address your concerns about the nature of our contributions, we encourage you to refer to the newly added section \\\"How are we improving it?\\\" in the revised \\\"Overall Response.\\\" This section clearly outlines our approach, distinguishing our key contributions.\\n\\nWe sincerely thank you for your time during this discussion period.\"}", "{\"comment\": \"We would like to express our sincere thanks and appreciation for your feedback. We would be most grateful if you would consider upgrading your score, given that your concerns have now been addressed.\"}", "{\"metareview\": \"In this paper, the authors propose a new strategy for efficient inference-time improvement of LLMs. 
Specifically, the authors exploit adaptive sampling (selecting the N in best-of-N as a function of the prompt) + early pruning (pruning unpromising mid-generations) upon the learned generative reward model. The authors conducted empirical comparisons on GSM8K and MATH 500 to demonstrate the benefits in terms of FLOPs vs. performance.\n\nThe paper is well-motivated and easy-to-follow. The major issues raised by the reviewers mainly lie in the novelty and significance. Specifically, using an additional reward model to guide the inference-time generation and pruning is not novel, e.g., [1, 2, 3]. Moreover, the empirical study is limited. Without discussion and comparison to these methods on more tasks, it is difficult to justify the claimed benefits. \n\nI suggest the authors take the comments from the reviewers into account to emphasize the differences and benefits relative to the existing methods, especially the generative reward effect in accelerating inference in practice. \n\n\n[1] Mudgal, Sidharth, Jong Lee, Harish Ganapathy, YaGuang Li, Tao Wang, Yanping Huang, Zhifeng Chen et al. \\\"Controlled decoding from language models.\\\" arXiv preprint arXiv:2310.17022 (2023).\n\n[2] Sun, Haotian, Yuchen Zhuang, Wei Wei, Chao Zhang, and Bo Dai. \\\"BBox-Adapter: Lightweight Adapting for Black-Box Large Language Models.\\\" arXiv preprint arXiv:2402.08219 (2024).\n\n[3] Chakraborty, Souradip, Soumya Suvra Ghosal, Ming Yin, Dinesh Manocha, Mengdi Wang, Amrit Singh Bedi, and Furong Huang. \\\"Transfer Q Star: Principled Decoding for LLM Alignment.\\\" arXiv preprint arXiv:2405.20495 (2024).\", \"additional_comments_on_reviewer_discussion\": \"The authors provided detailed answers and additional experiments, which partially addressed the questions from the reviewers. 
However, some concerns from the reviewers remain.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to Reviewer pws6\", \"comment\": \"We thank the reviewer for their comments and for engaging with the paper. To address the concerns, we add a more complex reasoning domain, MATH 500, and provide additional clarity on the computational efficiency of our algorithms by measuring the number of FLOPs. We are happy to clarify any questions you may have. Please let us know if your concerns are addressed and, if so, we would be grateful if you would be willing to raise your score. We would be happy to discuss if you have any concerns.\n\n**Q1:** Are the performance gains marginal for AlpacaEval?\n\n**A1:** Thank you for your question. Please refer to the sections \u201cWhat are we improving?\u201d and \u201cHow much are we improving it by?\u201d in the overall response for a detailed explanation.\n\nSpecifically, for AlpacaEval, we employed self-evaluations and early pruning to significantly reduce the cost (in terms of computation, memory, and energy consumption) of Best-of-16 by approximately a factor of 4. Additional details on how these improvements are measured can be found in the overall response.\n\n**Q2:** Can you quantify the actual computational efficiency and latency of your methods compared to Best-of-N, considering that your method requires serial processing while Best-of-N can be executed in parallel? How does your method perform on datasets that require more complex reasoning?\n\n**A2:** Thank you for this suggestion. The section \u201cHow much are we improving it by?\u201d in the overall response contains the necessary details that we will add to our final revision.\n\nIn summary, self-evaluations and early pruning reduce costs by roughly a factor of 4 without incurring additional latency. 
Meanwhile, adaptive sampling achieves cost reductions of approximately 6 to 8 times, though it introduces some additional latency as a tradeoff. Further details about these measurements can be found in the overall response.\\n\\nRegarding complex reasoning tasks, we evaluated the method on MATH 500, a much more challenging dataset, and observed results consistent with GSM8K. This evaluation is also discussed in the \\u201cHow much are we improving it by?\\u201d section.\\n\\n**Q3:** Can you study the effect of self-evaluation training in isolation by conducting Best-of-N after self-evaluation training to separate the effect of this training from the adaptive inference method?\\n\\n**A3:** To observe the effect of self-evaluation training, please refer to Table 1. The table highlights the difference in performance between zero-shot (LLM-as-a-Judge), reward modeling (Bradley-Terry), and our capability-aware self-evaluations. To clarify, here we utilize the same samples from Llama 3.1 8B-Instruct per question and evaluate solely the reward modeling/self-evaluation capabilities of the model. Here, we show that Best-of-16 performance improves from 24.4% to 33.8% on AlpacaEval and improves from 86.7% to 91% on GSM8K when comparing LLM-as-a-Judge to capability-aware self-evaluation. Additionally, we see 33.2% to 33.8% and 87.7% to 91% in the same domains when comparing a Bradley Terry reward model to capability-aware self-evaluation. In short, the self-evaluation training allows us to provide a significant performance benefit over traditional Best-of-N with a reward model or using the zero-shot ability of the model in the LLM as a Judge formulation.\\n\\n**Q4:** What is the performance of Llama 3.1 8B Instruct when using Best-of-16 on AlpacaEval and GSM8K?\\n\\n**A4:** As shown in Table 1, with Llama 3.1 8b Instruct, capability-aware self-evaluation gets a Best-of-16 performance of 33.8% on Best-of-16 on AlpacaEval and 91% on Best-of-16 with GSM8K. 
For the learned reward model, the Best-of-16 on AlpacaEval is 33.2% and the Best-of-16 with GSM8K is 87.7%.\\n\\n**Q5:** How does the proportion of ties (the epsilon for deciding ties) in the preference dataset construction for self-evaluation training affect the performance of the model?\\n\\n**A5:** This is an excellent question. Many responses in an on-policy preference dataset will be semantically similar as they come from the same model as seen in prior work (Tajwar et al, 2024). Therefore, relaxing the Bradley-Terry Model to account for ties can make the prediction problem easier. Intuitively, with a large proportion of ties, we increase the semantic distance between the winning and losing response, making self-evaluation easier. This is the reason our model can routinely make confident predictions such as P(Win or Tie) = 0.99, indicating that the probability of a significant improvement is very low.\"}", "{\"title\": \"Response to Reviewer PBQS\", \"comment\": \"We sincerely thank the reviewer for their comments and for engaging with the paper. To address the concerns, we add a more complex reasoning domain, Math 500 and add additional clarity to the computational efficiency of our algorithms with the measure of the number of FLOPs. We are happy to clarify any questions you may have. Please let us know if your concerns are addressed and if so we would be grateful if you would be willing to raise your score. We would be happy to discuss if you have any concerns.\\n\\n**Q1:** How does the proposed method compare with the baselines in terms of compute time and FLOPs, considering the cost of probing the model mid-generation?\\n\\n**A1:** Thank you for this question. 
Please see the sections \\u201cWhat are we improving?\\u201d and \\u201cHow much are we improving it by?\\u201d in the overall response for detailed insights.\\n\\nIn brief, self-evaluations and early pruning reduce costs (computation, memory, and energy consumption) by a factor of 4, without introducing any additional latency. Adaptive sampling further reduces costs by approximately a factor of 6 to 8, though it comes with the tradeoff of additional latency. Additional details on how these improvements are measured are included in the overall response.\\n\\n**Q2:** Since pruning longer sentences improves performance, wouldn't it be better to fully generate sentences before selecting them? Why is pruning mid-generation necessary?\\n\\n**A2:** This is a good question. Please refer to the sections \\u201cWhat are we improving?\\u201d and \\u201cHow much are we improving it by?\\u201d in the overall response.\\n\\nTo clarify, our approach does not aim to improve the maximum theoretical performance of Best-of-N but rather to make Best-of-N far more cost-effective. Specifically, early pruning with self-evaluations reduces costs by a factor of 4. Generating a larger number of responses and pruning some early\\u2014rather than generating fewer full responses\\u2014is shown to be computationally more efficient. This method can be viewed as improving downstream task performance under fixed compute, memory, or energy budgets. The details of these measurements can be found in the overall response.\\n\\n**Q3:** Why does your method hit a plateau while the Best-of-N method still shows improvement over more samples? Would Best-of-N surpass your method when evaluated at 32 and 64 fully generated samples?\\n\\n**A3:** Thank you for raising this point. Please refer to the section \\u201cWhat are we improving?\\u201d in the overall response for more details.\\n\\nAs shown in Figure 1, we plot accuracy against the number of samples, with a fixed maximum sample budget. 
Best-of-16 serves as the oracle or upper-bound performance for a given reward model. Our methods, such as Adaptive Sampling and Pruning, do not increase the theoretical maximum performance of Best-of-N or other search algorithms but instead significantly reduce the costs associated with achieving that performance (in terms of computation, memory, and energy consumption). We will evaluate the performance at 32 and 64 samples and will include these results either by the end of the rebuttal period or in the final version of the paper.\\n\\n**Q4:** Can you move important ablation studies from the Appendix to the main paper for better understanding of your method's parameters? Can you restructure your figures to present one observation per figure instead of combining many into Figure 1 to improve readability? Could you revise the writing to make it more succinct and avoid long, complex sentences?\\n\\n**A4:** Thank you for these suggestions. We will take the following steps to improve clarity and readability in the final revision of the paper:\\n\\n1. We will add clarity as you have suggested by reducing text and complex sentences in the methods section of the paper.\\n2. Additionally, we will move Figure 1 to the experiments section of the paper and break it into two separate figures to improve readability. \\n3. Finally, we will move key Tables and Figures from the appendix to give better understanding of the method's parameters. \\n\\nWe would be happy to make further changes that lead to a better understanding of this paper.\"}", "{\"title\": \"Response to Reviewer 6jZH\", \"comment\": \"We thank the reviewer for their comments and for engaging with the paper. To address the concerns, we add a more complex reasoning domain, Math 500 and add additional clarity to the computational efficiency of our algorithms with the measure of the number of FLOPs. We are happy to clarify any questions you may have. 
Please let us know if your concerns are addressed and if so we would be grateful if you would be willing to raise your score. We would be happy to discuss if you have any concerns.\\n\\n**Q1:** Why did you choose the LMSYS dataset for fine-tuning, and how does the selection of the fine-tuning dataset affect the generalization of the model? Would using a different or more complex dataset improve performance?\\n\\n**A1:** Thank you for your question. We selected the LMSYS dataset for fine-tuning because it consists of real-world prompts. Our goal was to enable the model to adaptively allocate samples across a wide range of domains, and the broad distribution of prompts in LMSYS supports this objective.\\n\\nWe evaluated the fine-tuned model on diverse domains, including AlpacaEval, GSM8K, and now Math 500, which demonstrates its generalization capabilities. Since our performance already closely approaches the theoretical upper bound set by the underlying reward model used in constructing the preference dataset, we believe that training on a different dataset would likely yield minimal improvement. However, fine-tuning on more specific, complex datasets could enhance performance on similar in-distribution tasks, and this is something we may explore in a future revision.\\n\\n**Q2:** Can you evaluate your method on more diverse and complex datasets, especially those requiring longer reasoning chains, to demonstrate its generalizability on reasoning tasks?\\n\\n**A2:** To address this concern, we added the Math 500 (Hendrycks et al, 2020) as an additional domain as seen in our new figure which can be found [here](https://imgur.com/a/rhisaLK). This domain is more complex, requiring longer reasoning chains to perform harder math contest problems. We find that the results are relatively consistent with what we see on GSM8K.\\n\\n**Q3:** Can you show the distribution of generated token lengths?\\n\\n**A3:** Certainly. 
Below are the token length statistics for the evaluated datasets:\\n\\nFor AlpacaEval 2.0, the mean response length is 453 tokens with a standard deviation of 305 tokens. For GSM8K, the mean response length is 251 tokens with a standard deviation of 170 tokens. For MATH 500, the mean response length is 537 tokens with a standard deviation of 337 tokens. We will add these metrics and a histogram of the generated tokens to our appendix by the end of the rebuttal period or for the final version of the paper.\\n\\n**Q4:** The paper maps the number of samples or tokens directly to computational cost, but in practice, batch inference can make multiple samples cost similar to one sample. Can you address how your method impacts computational cost and latency in real-world LLM serving scenarios?\\n\\n**A4:** Thank you for this important question. Please refer to the sections \\u201cWhat are we improving?\\u201d and \\u201cHow much are we improving it by?\\u201d in the overall response. In those sections, we provide concrete metrics and explanations on the cost-effectiveness of our approach, including its impact on computational cost in practical settings.\\n\\n**Q5:** Considering that inserting a prefill (self-evaluation prompt) is not free and may introduce more latency than naive batch decoding, can you clarify how your method affects end-to-end latency?\\n\\n**A5:** We appreciate your question. Please see the section \\u201cHow much are we improving it by?\\u201d in the overall response. There, we provide concrete metrics and a detailed explanation of the tradeoff between the cost of a reward model and the self-evaluation process, as well as their respective impacts on end-to-end latency.\"}", "{\"title\": \"Response to Reviewer Pqum\", \"comment\": \"We thank the reviewer for their comments and for engaging with the paper. 
To address the concerns, we add a more complex reasoning domain, Math 500 and add additional clarity to the computational efficiency of our algorithms with the measure of the number of FLOPs. We are happy to clarify any questions you may have. Please let us know if your concerns are addressed and if so we would be grateful if you would be willing to raise your score. We would be happy to discuss if you have any concerns.\\n\\n**Q1:** Are the improvements over baselines small?\\n\\n**A1:** Thank you for your question. Please see the sections \\u201cWhat are we improving?\\u201d and \\u201cHow much are we improving it by?\\u201d in the overall response.\\n\\nIn brief, self-evaluations and early pruning reduce costs (computation, memory, and energy consumption) by a factor of 4 without introducing additional latency. Adaptive sampling further reduces costs by approximately a factor of 6 to 8, though it comes with the tradeoff of increased latency. Additional details on these measurements are included in the overall response.\\n\\n**Q2:** Why doesn't the paper mention process-based reward models?\\n\\n**A2:** Thank you for bringing this up and for the helpful reference! Note that we do not intend to improve the performance of the given inference-time method, which we fix in this work to be Best-of-N. We simply aim to make it more cost-effective by removing the need for an external reward model as well as adaptively reducing the amount of computation needed depending on the query.\\n\\nIn this work, we learn reward models with an on-policy preference dataset, utilizing prompts from a large open-source general dataset, LMSYS. In contrast, PRMs in the context of reasoning require ground-truth outcome supervision. For domains such as Math and Code, supervision can be provided by symbolically checking the final answer in the context of math or executing test cases in code. 
A standard paradigm explored in works such as Snell et al, 2024 and OmegaPRM (Luo et al 2024) is to learn a PRM through MC rollouts. These are two complementary approaches for training reward models, with different assumptions and different dataset compositions. However, the inference-time optimizations we propose, such as adaptive sampling + pruning, could be readily applied to any reward model, allowing for savings in FLOPs during inference. We will try to incorporate a PRM for the final version of the paper to showcase the efficacy of this approach. We will additionally add these works to the related works section of the paper and will map out how our method can be extended to other inference-time methods very naturally.\\n\\n**Q3:** How does the need to generate sequentially affect real-time deployment, and can you address this limitation more thoroughly? Is the idea of exponentially increased batch size sufficient to mitigate this issue?\\n\\n**A3:** Thank you for raising this point. Please refer to the sections \\u201cWhat are we improving?\\u201d and \\u201cHow much are we improving it by?\\u201d in the overall response.\\n\\nIn summary, self-evaluations and early pruning reduce costs (computation, memory, and energy consumption) by a factor of 4 without additional latency. Adaptive sampling further reduces costs by a factor of 6 to 8 but introduces approximately 2x additional latency, which can be mitigated using exponentially increasing batch sizes. More details on these measurements are available in the overall response.\\n\\n**Q4:** Can you discuss how your training method effectively trains the model to estimate the quantile of the generation according to the external reward model? \\n\\n**A4:** If $\\\\epsilon = 0$ in the construction of the preference dataset (i.e., ties only if the response is identical), capability-aware self-evaluation reduces to estimating the quantile of the given response. 
However, practically this is difficult to model (the Best-of-N performance suffers), which is one of the reasons we introduced a non-zero epsilon as a relaxation of this interpretation. Intuitively, it is easier to predict with high confidence if the model can\\u2019t do significantly better rather than if the model can\\u2019t do better by any amount. This relaxation is fine as we do not strictly care about the quantile of a response, but instead whether a new response could be significantly better than the current generated response. This is the reason our model can routinely make confident predictions such as P(Win or Tie) = 0.99, indicating that the probability of a significant improvement is very low.\"}", "{\"summary\": \"The paper proposes a new strategy to improve the inference of LLMs, involving two key components: adaptive sampling (selecting the N in best-of-N as a function of the prompt) and early pruning (pruning unpromising mid-generations). This relies on a new ability of LLMs; given a prompt and a potentially unfinished generation, they can learn to detect whether they will be better with an additional sampling round. This ability is trained explicitly in a supervised way, with feedback obtained from an external RM; the key point being that this RM is then not required during deployment. 
This strategy improves performance on the MATH and GSM8K benchmarks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is clearly written, properly highlighting the different ideas and results.\", \"Improving inference strategies for LLMs is an important topic for the ICLR community, with potential significant impact.\", \"The two contributions, the discovered \\\"trainable self-evaluations\\\" and their applications (adaptive sampling and early pruning), make sense.\", \"The experiments, although succinct (see weaknesses), show convincing performance on standard benchmarks.\"], \"weaknesses\": [\"The main limitation of the work is that the proposed ideas are arguably small improvements over existing baselines. For example, the paper does not mention \\\"process-based reward models\\\" or the \\\"reward-guided decoding\\\" literature (such as MCTS), which seeks the same objective. The only benefit of the paper would be to include those abilities inside the LLM itself, thus removing the need for an external value network.\", \"In terms of practicality, the need to generate sequentially is a key limitation in real-time deployment, thus hindering some of the efficiency benefits. Though, I acknowledge that the idea of \\\"exponentially increased batch size\\\" is an interesting but limited answer.\", \"With your training, given a prompt and generation, you actually train the model to estimate the quantile of this generation according to the external reward model; a discussion on this topic would be needed. Similarly, it would be interesting to consider an extension where, given N>2 generations, your model learns to detect whether the generation is actually the best-of-N.\", \"Some ablations are missing. For example, could you provide the performance of applying temperature annealing to basic Best-of-N? What if you train an external RM to evaluate the quantile on mid-generations? 
Or a simple strategy that, given the prompt, detects its complexity and then automatically allocates a prompt-aware amount of compute.\", \"I fully agree that \\\"We need to keep the underlying policy or model unchanged when learning to self-evaluate.\\\" (l.232) Though I disagree with the next sentence: \\\"To do so, we need the model\\u2019s prior distribution over tokens to be similar to what we expect after training\\\" (l.233). Could you clarify this?\", \"From my understanding, fine-tuning for self-evaluation should decrease other abilities because of catastrophic forgetting.\", \"## Nit\", \"Fig. 1 actually shows the opposite of what the legend \\\"Rewards are not interpretable\\\" states; indeed, the reward assigns the lowest score to the most unconfident generation.\", \"Related work: some papers that may be worth discussing\", \"\\\"Don't throw away your value model! Generating more preferable text with Value-Guided Monte-Carlo Tree Search decoding\\\"\", \"\\\"ARGS: Alignment as Reward-Guided Search\\\"\", \"\\\"Efficient Controlled Language Generation with Low-Rank Autoregressive Reward Models\\\"\", \"\\\"Critic-Guided Decoding for Controlled Text Generation\\\"\", \"The discussion on ties ends up being unnecessary, as it does not impact performance. Thus, you might not want to state \\\"Accounting for ties in reward modeling is especially important\\\" (l.243).\"], \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer Pqum\", \"comment\": \"We thank the reviewer for their continued engagement!\\n\\n> You require sequential generations while the standard BoN can be distributed\\n\\nTo clarify, we propose two approaches: **early pruning** and **adaptive sampling**. Early pruning does not require any sequential processing of samples, while adaptive sampling does. Specifically:\\n\\n1. 
**Early Pruning**: This approach reduces FLOPs by **4x** to match the accuracy of Best-of-16 with **no additional latency**. It reduces computational overhead by halting the generation of less promising samples **during parallel generation**. For example, after generating 128 tokens for 16 samples in parallel, the 12 least promising samples can be pruned, effectively halving the total number of tokens generated and achieving significant computational savings.\\n\\n2. **Adaptive Sampling**: This approach achieves a larger reduction in FLOPs by **6-8x** to match the accuracy of Best-of-16, albeit with additional latency due to sequential generations. However, the average latency can be as low as 1.1 sequential generations, thanks to further optimizations such as the exponentially annealed temperature schedule we propose, resulting in limited additional overhead over a batch of questions.\\n\\n> PRM as a baseline\\n\\nWe note that the Process Reward Model (PRM) can readily be replaced with the self-evaluation function we propose. The inference speedups we present (adaptive sampling + pruning) remain valid and transferable with this function.\\n\\nThe key difference between a PRM and our mid-generation self-evaluation approach lies in the dataset composition used to train these functions. PRMs (e.g., Snell et al., 2024; Wang et al., 2024, Math Shepherd) are trained on policy rollouts evaluated for correctness via outcome verification. In contrast, our self-evaluation framework collects the same on-policy rollouts but utilizes relative preference feedback instead of absolute feedback. This decision to train on general queries from preference data (e.g., LMSYS) enables our single model to generalize across a wide range of domains.\\n\\nIn many domains, learning an accurate PRM is challenging due to the need for highly accurate outcome supervision models. 
Additionally, for some domains such as creative writing or general question answering, such supervision can be ambiguous and difficult to collect effectively. Moreover, inference with an external PRM function is computationally expensive, requiring additional FLOPs and memory, particularly for mid-generation execution.\\n\\nNevertheless, for the rebuttal/final version of the paper, we will attempt to train a PRM for math contest problems using domain-specific datasets, such as PRM800K or Math Shepherd. Additionally, we are open to rephrasing the title to temper any overstated claims.\"}", "{\"title\": \"Following up!\", \"comment\": \"Thank you for your review! Please let us know if further detail is needed or if the new experiments address your concerns.\"}", "{\"comment\": \"I would like to thank the authors for their rebuttal. However\\n- I don\\u2019t think you can say that your method \\u00ab\\u00a0reduces by a factor of 4 without introducing additional latency\\u00a0\\u00bb as you actually require sequential generations while the standard BoN can be distributed.\\n- \\u00ab\\u00a0We will try to incorporate a PRM for the final version of the paper to showcase the efficacy of this approach\\u00a0\\u00bb: I believe this baseline is important to justify the paper title \\u00ab\\u00a0LLMs Can Predict if They Can Do Better, Even Mid-Generation\\u00a0\\u00bb.\\n\\nTherefore, I will keep my score (weak reject) but will be open to debate if there is a strong opinion from other reviewers. Best.\"}", "{\"title\": \"Response\", \"comment\": [\"I appreciate that you found the feedback from the reviewers helpful. I read through all of the reviews and responses, which clarified some of my questions and concerns. 
However, severe issues still remain:\", \"While the authors provide a comparison of the FLOPs in their rebuttal, this comparison is not discussed nor presented in the paper.\", \"The authors still do not provide a comparison of the compute time (at least not in a format that I can understand) between the baselines. Considering the angle of this paper and given the remark of pws6, this is an important metric to highlight.\", \"The authors could modify their paper during the rebuttal period to improve its clarity, yet they have not modified it, and thus my concerns about clarity still remain in the current version.\", \"After reading the responses, I am now more confused about what the contribution of this work is. The contribution now seems to be a set of tricks to improve efficiency at inference time. In that case, a careful ablation has to be made so that the reader understands the impact of each trick.\", \"There are competitive methods out there, as highlighted by Pqum, that have not been considered nor compared against in this paper. Such a comparison is important for the type of paper presented here.\", \"While I appreciate the effort put in by the authors in the rebuttal, I believe that more work is needed to help the reader understand the impact of each trick proposed in the paper; a comparison against strong baselines, or using the strongest baseline in the literature as a starting point, is important.\", \"Finally, I will update my confidence score from a 4 to a 3.\"]}", "{\"title\": \"Thank you\", \"comment\": \"Thank you for your detailed comment. I think the reply addresses my comments.\"}", "{\"title\": \"Response to Reviewer Pqum (Continued)\", \"comment\": \"**Q5:** Could you consider extending the method to learn to detect whether a given generation is the best among N>2 generations?\\n\\n**A5:** Absolutely! 
Preference datasets usually only have two responses per query, but this is a great suggestion: we could extend this method to consider the probability that N>2 generations will result in at least one better generation. This is an interesting way of augmenting the dataset, as well as producing potentially more useful predictions to more efficiently allocate samples to a given query. We can try to incorporate this for the final version of the paper and will add this to the discussion section of our paper for completeness.\\n\\n**Q6:** Can you provide ablation studies, such as applying temperature annealing to basic Best-of-N? \\n\\n**A6:** Thank you for the suggestion. We find that our annealing schedule method (annealing based on samples generated so far) seems to provide gains even without adaptive sampling at small Best-of-Ns (+2% on GSM8K for Best-of-2). However, benefits taper off at larger Best-of-Ns. This makes sense as our scheduling strategy only applies significant annealing to the first few samples. We will introduce this and additional ablations for the final revision of the paper.\\n\\n**Q7:** Could you explore a simple strategy that detects prompt complexity and allocates compute accordingly?\\n\\n**A7:** This is a great suggestion. However, practically, we found limited success with this approach, as it is quite difficult to predict the required amount of compute from the query alone. This is what motivated the experiments with mid-generation self-evaluation (early pruning), allowing the model to perform a limited amount of inference (e.g., 64 or 128 tokens) and then prune according to the complexity of the problem. We will include this as a formal baseline for the final revision of the paper. \\n\\n**Q8:** Could you clarify why you need the model's prior distribution over tokens to be similar after training to keep the underlying policy unchanged? 
Isn't there a risk of catastrophic forgetting when fine-tuning for self-evaluation?\\n\\n**A8:** Thank you for bringing this up! This is a small but important detail in practice. We need the model\\u2019s distribution over tokens to be similar before training, as this will minimize the amount the model needs to change to minimize the loss during training. This could be thought of as reducing the risk of catastrophic forgetting, but more concretely, we do not want the model to overgeneralize and learn to only output the target tokens (\\u201cYes\\u201d and \\u201cNo\\u201d). We found that if the self-evaluation prompt elicits these tokens by default (this should be verified before training), the model does not overgeneralize and we effectively avoid this problem. One way to check if this was done correctly is to make sure that the loss is already reasonable before fine-tuning.\"}", "{\"summary\": [\"The paper proposes a simple way to train an LLM to do self-evaluation (with additional tokens) and uses the logits for the trained additional tokens to decide whether to continue sampling or not. They show that they can approximately match the performance of Best-of-16 at, on average, 1/4 of the number of samples on GSM8K.\", \"The authors make claims about the efficiency and latency of the proposed method, but since it requires serial, instead of parallel, generations when doing Best-of-N, the latency may actually increase by a non-negligible amount. This needs to be properly addressed in the paper as this is a paper about making Best-of-N efficient.\"], \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The authors propose a very simple and concrete way to adaptively scale generations for Best-of-N and show a noticeable improvement on GSM8K where they can approximately match the performance of Best-of-16 at 1/4 of the number of samples.\"], \"weaknesses\": [\"The performance gain is marginal for Alpaca Eval. 
Thus, more comprehensive evaluations on a broader range of benchmarks that perhaps require more complex reasoning will be beneficial. However, I fully acknowledge the limits of what can be done during the rebuttal.\", \"The paper/authors argue that their proposed adaptive inference-time compute is scalable and efficient based on the performance vs. number of samples comparison. They make an argument on latency. However, an important aspect that is underplayed is the actual computational efficiency when implemented. Although mentioned, because of the need to self-evaluate, the adaptive inference-time compute is allocated in serial. On the other hand, Best-of-N can be executed in parallel, which is its advantage over this method. Although inference speed is very dependent on the inference stack engine, it would be important to quantify this cost of serial processing.\", \"Further analysis is warranted to isolate the effect of learning to self-evaluate alone. Interestingly, the authors note that a pretrained (and instruction tuned) model has a nontrivial ability to conduct self-evaluations and that a Bradley-Terry reward model further improves it. It would be constructive to study the effect of self-evaluation training in isolation and essentially conduct Best-of-N after this self-evaluation training to separate the effect of this training on newly annotated data and the adaptive inference method.\"], \"questions\": [\"For Section 4.2 experiments on AlpacaEval and GSM8K, what is the performance of Llama 3.1 8B Instruct when using Best-of-16?\", \"The authors focus on showing the efficiency in terms of the reduction of the number of samples vs performance. However, an interesting experiment would be to fix the amount of compute budget (e.g., FLOPs) and to see whether one can adaptively allocate a higher # of samples for more difficult questions and fewer for easier questions, while using the entire budget. 
That would essentially show the effectiveness of this adaptive compute allocation method compared to a naive Best-of-N that allocates the same amount of compute to any prompt.\", \"How does the proportion of ties (the epsilon for deciding ties) in the preference dataset construction for self-evaluation training change the performance of the model?\", \"I am not sure how realistic this is given the time constraint of the rebuttal, but a natural question that arises is how this changes for datasets that require more complex reasoning such as ArenaHard, Math Hard, etc.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Overall Response\", \"comment\": \"We thank all the reviewers for their helpful feedback and suggestions. We appreciate that the reviewers recognize that our solution is simple and elegant (pws6, 6jZH), provides substantial cost reduction (pws6, 6jZH, Pqum), and that our paper is well-written (6jZH, Pqum).\\n\\nIn this rebuttal we make the improvements our methods offer more precise, concrete, and easy to understand. This was not clear to many reviewers as we made improvements to multiple aspects of Best-of-N which we measured in different ways. \\n\\nReviewers asked for evaluation with a more difficult benchmark that requires more reasoning, so we added evaluations with the MATH 500 benchmark. We also implemented a process reward model (PRM) baseline for requested comparison to our mid-generation self-evaluations. Finally, we added and restructured our figures with concrete metrics (FLOPs, wall-time) and additional ablations for improved clarity.\\n\\n1. **What are we improving?**\\n\\nInference-time methods such as Best-of-N are expensive as they require multiple samples and an external reward model. 
The primary objective of our paper is to make this far more **cost-effective through the substantial reduction of total computation, memory usage, and energy consumption.** These metrics translate directly to cost in most modern LLM serving scenarios as queries are processed in continuous batches (Daniel et al., 2023) and minimal compute or memory is sitting idle. Our methods improve performance with fixed computation, memory, or energy.\\n\\nWe proposed methods that reduce cost without sacrificing latency (self-evaluations and early pruning) and methods that reduce more cost with some additional latency (adaptive sampling).\\n\\n2. **How are we improving it?**\\n\\nWe\\u00a0introduce a generative reward model formulation, allowing LLMs to predict mid-generation the probability that restarting the generation will yield a better response. These capability-aware and mid-generation self-evaluations enable adaptive inference-time compute strategies. Specifically, these predictions are obtained without an external reward model and can be used to decide whether or not to generate more samples, prune unpromising samples early on, or to pick the best sample. \\n\\nWe propose two strategies to take advantage of this capability, namely adaptive sampling and early pruning. Adaptive sampling takes advantage of the capability-aware aspect of the self-evaluations to determine if generating more samples is beneficial. Early pruning takes advantage of mid-generation self-evaluations to stop unpromising samples early in generation. \\n\\nNote that these new strategies we propose are primitives that could be combined or enhanced in various ways for potentially greater efficiency gains. To improve adaptive sampling specifically, we also proposed a novel annealing schedule as well as exponentially increasing batch sizes.\\n\\n3. **How much are we improving it by?**\\n\\nWe measure efficiency with respect to the number of FLOPs during inference (following Hoffmann et al., 2022). 
We also provide exact wall-clock times in a controlled environment using SGLang (Zheng et al.) on a consistent hardware setup (8\\u00d7A100 GPUs) in Figure 3B. Below, we will discuss the efficiency gains we get from each component we propose.\\n\\nFirstly, capability-aware self-evaluations alone cut FLOPs by 2x compared to using external reward models. This is because the KV cache can be shared during the evaluation process with infilling of our self-evaluation prompt (16 additional tokens).\\n\\nAdditionally, we see roughly a **4x reduction in total FLOPs** to match the performance of Best-of-16 by additionally leveraging early pruning. This is **without any additional latency**. Adaptive sampling achieves **roughly a 6-8x reduction in total FLOPs** with the tradeoff of roughly 2x latency (as batches are processed sequentially) as shown in Table 3 in our appendix. Using early pruning or adaptive sampling is a choice that can be made depending on constraints an application has on latency and computational efficiency. We provide the total FLOPs usage and downstream performance in the updated Figure 2 and Figure 3. **[Results](https://imgur.com/a/pdQ878w)** with the PRM baseline show that mid-generation self-evaluations are still roughly 2x more efficient simply due to KV cache.\\n\\nAlso, our proposed methods allow for savings in memory usage. Memory usage is determined by the number of active model parameters, as well as the size of the KV cache which is largely a function of the total number of tokens generated. 
Since we both remove the reward model (2x fewer total parameters to store) and reduce the total number of tokens generated (2x with early pruning and 4x with adaptive sampling), **the memory savings are roughly proportional to the savings in the total number of FLOPs**.\\n\\nSince we save total FLOPs and memory usage, **we also proportionally save energy consumption**, which accounts for about 5-10% of the total cost of operation for modern GPUs.\"}", "{\"comment\": \"I would like to thank the authors for this interesting experiment. But as Reviewer PBQS mentioned, \\\"I am now more confused about what is the contribution of this work. The contribution now seems a set of trick to improve the efficiency at inference-time.\\\". Overall, I believe the paper's contributions lack clarity. Thus, I will not change my overall grade 5, but as mentioned earlier, I will not block the paper.\"}", "{\"summary\": \"A method for predicting whether the answer to a query being generated by an LLM is worth continuing or not.\\nThe method fine-tunes a pre-trained LLM to predict whether the current (partially) generated answer is worth continuing or not. The fine-tuning involves pre-pending the prompt 'Would you do better if you started over? (\\u201cYes.\\u201d or \\u201cNo.\\u201d)' to the answer of a query and using SFT to train the model to predict \\\"Yes\\\" if the answer is preferred over the alternative answer.\\nBy deciding to abort the answer generation, the authors claim that the LLM may save compute at inference time compared to a method that does not abort the answer generation.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper tackles an important challenge and chose to do so using an open-weight model. Advances on open-weight models may benefit everyone, and reducing the compute and inference time may increase the adoption of foundation models. 
The method itself was understandable (although I believe that the writing of the paper could generally be made clearer and more succinct).\", \"weaknesses\": \"The paper proposes a method that improves inference-time compute, yet it never compares the compute time and the FLOPs with baseline methods. Having to probe the model mid-generation definitely incurs a cost, which is not discussed in the paper. Moreover, the model finds that pruning longer sentences improves performance, in which case perhaps it would be better to fully generate the sentences before selecting them, and thus the idea of pruning mid-generation is unclear.\\n\\nIn Figure 1, the x-axis stops at 16 Fully Generated Samples, where the method of the authors has hit a plateau and the Best-of-N method is still showing improvement. The authors should increase the number of fully generated samples to observe and compare the effect of more generated samples on their method and the Best-of-N baseline.\\n\\nIn Table 1 the authors compare baseline methods with the proposed method. However, it is not at all clear what the parameters of the capability-aware method are and whether it is comparable to the baselines. The authors have to clarify how they compare with the baseline methods. E.g. of questions I have (and this is a subset of the questions I have -- the authors should make sure that the revised version clearly explains the experimental methodology): How many fully generated samples are used? (Line 404 the authors mention Best-of-16 sampling, is this 16 fully generated samples?) 
If this is indeed 16 fully generated samples, then why should a practitioner use that method instead of the Best-of-N baseline, which performs comparably well according to Figure 1.\\n\\nThe ablations that are important for understanding the parameters of the method are in the Appendix while they should be in the main paper.\\n\\nThe authors continually refer to Figure 1 throughout the paper, forcing the reader to continually scroll up and down. Instead of packing many important observations within one figure, the authors are encouraged to break down the observations into individual figures (i.e. one observation per figure).\\n\\nFinally, I encourage the authors to revise the writing, making it more succinct and avoiding long-winded sentences.\", \"questions\": [\"How does the proposed method compare with the baselines in terms of compute time?\", \"How does the proposed method compare with the baselines in terms of FLOPs?\", \"What does Figure 1 look like when evaluating 32 and 64 fully generated samples?\", \"What are the parameters of the methods and the baselines used to produce Table 1?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper shows that one of the challenging problems in inference-time computation scaling (Best-of-N) is to identify whether a specific generation is \"good\", or whether it is worth allocating more compute to continue its generation. For example, Best-of-N is considered computationally expensive because it requires an external reward model to evaluate the quality of the generation, and it often requires multiple such generation samples to obtain a good-quality result.\n\nTo address this problem, the paper introduces capability-aware self-evaluation, a technique that allows the LLM to self-predict whether the current generation is promising, or in other words, whether restarting the generation will yield a better result or not. 
In the Best-of-N setting, the paper also introduces adaptive sampling and annealing to further improve the quality of the generation, and early-prunes unpromising samples using the self-evaluation methodology.\\n\\nIn the evaluation section, the paper presents a finetuned Llama3.1-8B-Instruct model trained on 30k preference pairs constructed from unfiltered LMSYS using a reward model, and shows an increasing win rate against GPT-4.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"**Elegant solution**. The evaluation section shows that the proposed fine-tuning method is able to generalize from LMSys to AlpacaEval and GSM8K. The solution is elegant and intuitive, and it works well.\\n\\nThe method shows a promising way to improve the quality of generation in the Best-of-N setting by early pruning unpromising samples, and adaptively sampling and annealing the remaining samples. The proposed method is simple, intuitive, and shown to be effective.\\n\\n**Writing**. The writing is clear and the paper is well-organized. The paper is well-written and easy to follow.\", \"weaknesses\": \"**Finetuning dataset construction - what is a good dataset to finetune on?**. Why is the LMSYS dataset chosen? Would another dataset other than LMSYS do better or worse? Would using a more complex dataset do better or worse? It is not as convincing if there is no ablation study on the dataset choice.\\n\\n**Limited evaluation**. Only 2 datasets are used to perform the evaluation. GSM8K is not a hard dataset to see performance gain (e.g. using hyperparam tuning). To show the generalizability on reasoning tasks, it would be better to evaluate on more diverse and complex datasets, especially on slightly longer reasoning chains to show the increasing size of sampling and pruning is effective. 
Showing the distribution of generated token lengths would be very useful to understand \"how far\" we are talking about.\\n\\n**Limited setting of inference time algorithm**. As mentioned in the conclusion / limitation section, the paper only discusses the Best-of-N setting. But many reasoning techniques such as self-refinement / chain-of-thought are not generating parallel requests. It would be better to evaluate on more diverse inference settings.\\n\\n**Naively mapping number of samples / tokens to computational cost**. The paper did not include any statement about computational cost in terms of the number of tokens generated. In actual LLM serving with batch inference (e.g. vLLM, DeepSpeed), the computational cost of 16 samples performing decoding is roughly the same as 1 sample performing decoding. Thus, if the metric is \"end-to-end latency\", then the bottleneck is actually the straggler - meaning what is the longest request in the batch that needs to be finished before the generation is complete. The proposed method does not seem to address this problem. In addition, as mentioned in the discussion section, introducing a prefill (inserting a prompt into the middle of generation) is not free, and may actually introduce more latency than the naive batch decoding setting. Although this is not the focus of the paper, it is important to properly address this point in writing as it is not very clear on reading.\", \"questions\": \"1. Does the selection of finetuning datasets matter, and to what extent does it affect the generalization of the model?\\n2. What is the distribution of the generation length? \\n3. Would the proposed method still be effective when the problem is more difficult or unseen?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to PBQS\", \"comment\": \"We appreciate the reviewer\\u2019s continued engagement and constructive feedback on our work. 
We are pleased to share an updated version of the paper, addressing the questions and concerns raised.\\n\\n> While the authors provide a comparison of the FLOPs in their rebuttal, this comparison is not discussed nor presented in the paper. [...]The authors could modify their paper during the rebuttal period to improve its clarify, yet they have not modified it and thus my concerns about clarity still remain in the current version.\\n\\nWe have now revised the paper to include the comparison of FLOPs in the main text. Specifically, we have restructured the writing and updated the figures to emphasize this as the primary form of comparison, aligning with prior works in inference optimization (e.g., Hoffman et al., *Training Compute-Optimal Large Language Models*). Additionally, we have improved the clarity of the experimental section, incorporating the reviewer\\u2019s suggestion to relocate Figure 1 (or equivalent) to this section. To further enhance presentation, we have fragmented Figure 1 which previously encompassed both pruning and adaptive sampling into two separate figures. We have additionally modified the writing of the experimental section to reflect this.\\n\\nWe welcome any additional formatting or content suggestions and are prepared to share an anonymous PDF reflecting any changes requested by the reviewer for the final version of the paper.\\n\\n> Regarding PRMs\\n\\nThe Process Reward Model (PRM) can be seamlessly replaced with our proposed self-evaluation function. The inference speedups presented in the paper\\u2014adaptive sampling and pruning\\u2014remain valid and transferable with this approach.\\n\\nThe distinction between PRMs and our mid-generation self-evaluation lies primarily in the dataset composition used for training. PRMs (e.g., Snell et al., 2024; Wang et al., 2024, Math Shepherd) rely on policy rollouts evaluated for correctness using outcome verification. 
By contrast, our self-evaluation framework employs relative preference feedback from the same on-policy rollouts. This choice, informed by preference data (e.g., LMSYS), enables our model to generalize across diverse domains with a single framework.\\n\\nWe are actively working on adding a PRM baseline for comparison before the end of the rebuttal period and will include these results in the revised version.\\n\\n\\n> The authors still do not provide a comparison of the compute time (at least not in a format that I can understand) between the baselines. Considering the angle of this paper and given the remark of pws6, this is an important metric to highlight.\\n\\nIn the paper, we study latency as the average number of batches or sequential calls to the language model that are made. This is an **overestimate** of the latency in many applications due to optimizations such as prefix-caching and batched generation. \\n\\nWe now include wall-time as an additional metric for measuring latency. We evaluate wall-time in a controlled environment using SGLang (Zheng et al.) on a consistent hardware setup (8\\u00d7A100 GPUs), reporting the average wall-time over questions. Figure 3(B) in the updated paper presents wall-clock time (in seconds) versus the percentage of maximum improvement. These results demonstrate that pruning incurs no additional latency even with a 4\\u00d7 increase in FLOPs, while adaptive sampling introduces additional latency that can be balanced with performance gains and FLOPs.\\n\\n> After reading the responses, I am now more confused about what is the contribution of this work. The contribution now seems a set of trick to improve the efficiency at inference-time. In which case, a careful ablation has to be made so that the reader understand the impact of each tricks.\\n\\nWe believe the contributions of our work are well-supported through extensive ablation studies, both in the Appendix and the main paper. 
First, Table 2 examines the effect of modeling probability for self-evaluation on an on-policy pairwise preference dataset. Next, in Table 1, we study different instantiations of reward modeling that can be used for guiding inference-time search. For adaptive sampling, Table 4 assesses the effect of omitting the annealing schedule, and Table 5 compares using a parametric reward model. For pruning, we study random pruning, pruning at a fixed token budget, and no pruning in Tables 7-9. \\n\\nWe are open to conducting additional ablation studies if the reviewer deems further analysis necessary. However, we believe the current ablations provide a comprehensive evaluation of the impact of each component in our proposed framework.\"}
Additionally, we now propose warm-starting with an average batch size (rounded) for a desirable target accuracy, further reducing latency for subsequent queries.\\n\\n| Win-or-Tie Probability Threshold | 0.92 | 0.96 | 0.98 | 0.99 | 1.00 |\\n|----------------------------------|------|------|------|------|------|\\n| Average Samples Used | 1.2 | 1.9 | 3.7 | 6.8 | 16.0 |\\n| Average Batches Used (Latency) | 1.1 | 1.4 | 2.0 | 2.9 | 5.0 |\\n| GSM8K Pass@1 (%) | 89.2 | 89.9 | 90.8 | 90.8 | 91.0 |\\n| Percent of Maximum Improvement | 73.5 | 83.8 | 97.1 | 97.1 | 100.0 |\\n\\n> The authors also mention that they test on MATH 500 [...] please point me to where I can find the new experimental results\\n\\nThe updated figures, including total FLOPs usage and downstream performance, are available [here](https://imgur.com/a/rhisaLK). Please refer to the third row in this set of images for the relevant results.\"}", "{\"title\": \"Following up!\", \"comment\": \"Thank you for your review! Please let us know if further detail is needed or if the new experiments address your concerns.\"}" ] }
7t8aKBeATc
Variance-Reduced Normalized Zeroth Order Method for Generalized-Smooth Non-Convex Optimization
[ "Yuxing Peng", "Yuanyuan Liu", "Fanhua Shang", "Hongying Liu", "Zhouchen Lin" ]
The generalized smooth condition, $(L_{0}, L_{1})$-smoothness, has triggered people’s interest since it is more realistic in many optimization problems, as shown by both empirical and theoretical evidence. To solve generalized smooth optimization, gradient clipping methods are often employed, and have theoretically been shown to be as effective as the traditional gradient-based methods \citep{Chen_2023, xie2024}. However, whether these methods can be safely extended to the zeroth-order case is still unstudied. To answer this important question, we propose a zeroth-order normalized gradient method (ZONSPIDER) for both the finite sum and general expectation cases, and we prove that we can find an $\epsilon$-stationary point of $f(x)$ with optimal dependency on $d$ and $\epsilon$; specifically, the complexities are $\mathcal{O}(d\epsilon^{-2}\sqrt{n}\max\{L_{0}, L_{1}\})$ in the finite sum case and $\mathcal{O}(d\epsilon^{-3}\max\{\sigma_{1}^{2}, \sigma_{0}^{2}\}\max\{L_{0}, L_{1}\})$ in the general expectation case. To the best of our knowledge, this is the first time that sample complexity bounds are established for a zeroth-order method under generalized smoothness.
[ "Non-convex Optimization", "generalized smooth", "zero-order", "gradient-free", "$(L_{0}, L_{1})$-smooth" ]
Reject
https://openreview.net/pdf?id=7t8aKBeATc
https://openreview.net/forum?id=7t8aKBeATc
ICLR.cc/2025/Conference
2025
{ "note_id": [ "qViPaVF7aU", "nEugmyBpKl", "g9zXkE6V7v", "d7xTTorfXQ", "ZD6pXhYulU", "S9OI9rplS1", "PF0sYeICf0", "FnULs0CA1m", "AcjCKsdqh7", "9iZET917ve" ], "note_type": [ "official_review", "official_review", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "decision", "meta_review" ], "note_created": [ 1730482292734, 1730587288809, 1733224894914, 1733224917527, 1730706537999, 1730558574747, 1733224909801, 1733224900520, 1737524041988, 1733764623567 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10327/Reviewer_XunU" ], [ "ICLR.cc/2025/Conference/Submission10327/Reviewer_1T35" ], [ "ICLR.cc/2025/Conference/Submission10327/Authors" ], [ "ICLR.cc/2025/Conference/Submission10327/Authors" ], [ "ICLR.cc/2025/Conference/Submission10327/Reviewer_8hDH" ], [ "ICLR.cc/2025/Conference/Submission10327/Reviewer_sY4G" ], [ "ICLR.cc/2025/Conference/Submission10327/Authors" ], [ "ICLR.cc/2025/Conference/Submission10327/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10327/Area_Chair_cuLF" ] ], "structured_content_str": [ "{\"summary\": \"The paper studies zeroth-order optimization under the generalized $(L_0,L_1)$-smoothness condition. Previous works under this assumption focus on first-order methods, where clipped SGD achieves $\\\\epsilon^{-4}$ rate, the same rate as SGD under $L$-smoothness condition, and clipped SPIDER achieves $\\\\epsilon^{-3}$ rate, the same rate as SPIDER under average or individual smoothness condition. 
Here, the authors replace gradients in normalized SPIDER with their zeroth-order estimators and achieve $d\\epsilon^{-3}$ rate, which is the first result for zeroth-order optimization under the generalized smoothness condition.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The topic of zeroth-order optimization is important in the optimization and machine learning community, due to many interesting applications such as black-box attacks and reinforcement learning. While most previous works on zeroth-order methods assume the smoothness condition, there are many settings where this assumption fails to hold. The generalized smoothness condition provides one relaxation and fits several real-world applications. Research on the generalized smoothness condition sets a good example of efforts to close the gap between theoretical setups and practical applications.\", \"weaknesses\": \"1. There are so many typos and grammar issues in the paper. This was really annoying when I was reading the paper. I am not sure whether this is because the paper was written in a rush or whether there are other reasons. If the paper was written in a rush, then I would reasonably question the correctness of the theoretical proofs. Here are some issues (I could not list all of them as there are so many). line21 in the abstract: complexes; line86-93: what do you mean by \\\"we both analyze, we both use\\\"; line98: can as effective as; line123 in table 2: estimator denotes represent the number; line194, 239: can't access to; line265: should be introduced; line308 in Algo. 1: defied in (5) and (4); line364: constans; line376: itration; line382, 408, 420: funtion; line408, 420: number of e function query; line 524: coordand rand. Some weird spacing: line40, distribution.In; line 42; line 162. Grammar: You cannot just list a sequence of sentences in English, e.g., \\\"sentence A, sentence B, sentence C\\\". 
They should be connected using conjunctions (and, but), relative clauses (which, where, that), or transitional words or phrases (however, moreover). This happens several times in the paper, e.g., line 76-79, 187-199, 202-203, 284-286, 347-354, 364-376. I completely understand the strict timeline for submission, where typos and grammar problems are unavoidable. However, so many issues are unacceptable. It disrupts readability and affects clarity of the paper. This is also unfair for other submissions where the authors spend lots of time on polishing the paper. As a reviewer, it is also frustrating to evaluate papers that are far from publication standard.\\n\\n2. Some classical results on zeroth-order methods are missing. $\\mathbb{E}\\bar \\nabla f=\\nabla_\\mu f$ was first proved in [1] as far as I know. Classical analyses of zeroth-order methods were provided in [2] and [3]. [4] also studied a variance-reduced zeroth-order method, and [5] discussed the lower-bound.\\n\\n[1] Online convex optimization in the bandit setting: Gradient descent without a gradient. SODA 2005.\\n\\n[2] Random gradient-free minimization of convex functions. Nesterov and Spokoiny.\\n\\n[3] An optimal algorithm for bandit and zero-order convex optimization with two-point feedback. JMLR, 2017.\\n\\n[4] Zeroth-order stochastic variance reduction for nonconvex optimization. NeurIPS, 2018.\\n\\n[5] Optimal rates for zero-order convex optimization: The power of two function evaluations. IEEE, 2015.\\n\\n3. The authors claim their complexity on $\\epsilon$ and $d$ is optimal. However, there is no discussion on the lower-bound in the paper under Assumptions 1-3 for zeroth-order methods. Usually, the complexity of zeroth-order methods is $d$-times worse compared to first-order methods (see lower-bounds in [5] above), but that is for the $L$-smoothness setting. The authors should discuss why the same lower-bound applies to the generalized smoothness setting. 
Before that, it is only valid to claim the complexity as the best-known upper bound instead of the optimal one.\\n\\n4. Another option for the Coord estimator is to only sample a subset of all $d$ coordinates instead of computing every coordinate, as the current one is computationally challenging for problems with large dimensions.\\n\\n5. Although the paper is indeed the first one on zeroth-order optimization with the generalized smoothness condition, I am not surprised by any of its results. All the proof techniques existed before, and there is no fundamental difference in combining all related works. First, existing work already shows an $\\epsilon^{-3}$ rate using SPIDER under the generalized smoothness condition. Second, extensive literature provides examples of how to extend from first-order methods to zeroth-order methods. The computation involved in Lemmas 1-3 is also standard, and there is nothing special. I can only rate the novelty as marginal.\\n\\n6. It is unclear why the current experiments verify the effectiveness of the proposed method. I am not sure why one has to apply zeroth-order methods on applications (10) and (9), where gradients are accessible and not hard to compute. These might be good examples for $(L_0, L_1)$-smoothness, but not for why zeroth-order methods should be applied. Why not just use first-order methods on the two applications? Zeroth-order methods are useful when the gradient is unavailable or hard to obtain, and the current experiments do not belong to this class.\\n\\n7. In the theoretical results, the complexity of the proposed zeroth-order method is $d\\epsilon^{-3}$, while that of first-order methods is $\\epsilon^{-3}$. This means the zeroth-order method is $d$-times worse compared to first-order methods. Then why is it the case that in Figures 1(c) and 1(d), there is not much difference between zeroth- and first-order methods? What is the sample complexity here in the figure? 
Is it queries on $f$, or queries on $|S|$ number of $f$?\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors propose gradient-free methods for non-convex generalized-smooth stochastic optimization problems. As a particular case, they consider the sum-type problem structure.\\nBTW, it seems that the literature part could be completed by https://arxiv.org/pdf/2410.10800\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"I consider that the paper contains quite enough mathematics and gives a positive answer for this question:\\n\\\"Can zeroth-order methods solve generalized (L0, L1)-smooth non-convex problems as efficiently as solving traditional smooth non-convex problems? In particular, what convergence rates can be achieved?\\\" Also authors demonstrate their results by numerical experiments.\", \"weaknesses\": \"1) The rand estimator seems not to be the optimal one; e.g., see Shamir's (O. Shamir. An optimal algorithm for bandit and zero-order convex optimization with two-point feedback. Journal of Machine Learning Research, 18(1):1703\\u20131713, 2017.) or the Polyak-Tsybakov ones (if you have high-order smoothness): https://arxiv.org/pdf/2306.02159\\n2) I don't understand how $L_0$ and $L_1$ could be compared in $\\\\max\\\\{\\\\cdot\\\\}$ in formulas. 
They have principally different physical dimensions.\\n3) There are no high-probability deviation bounds in the paper.\\n4) The only improvement from my point of view is the gradient-free generalization of full (stochastic) gradient procedures, but nowadays this is quite a routine procedure.\", \"questions\": \"What is the main contribution from the mathematical point of view (indicate something that was not done by analogy)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper provides variance reduction methods for zeroth-order non-convex optimization under generalized smoothness, including two types of gradient estimators and two cases of problem definitions. The methods achieve an iteration complexity of $\\\\mathcal{O}(\\\\Delta d\\\\epsilon^{-3}\\\\max\\\\\\\\{L_0,L_1\\\\\\\\})$ for finding an $\\\\epsilon$-stationary point.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"This paper analyzes zeroth-order variance reduction methods under the setting of $(L_0,L_1)$-smoothness, which, to the best of my knowledge, has not been previously investigated.\", \"weaknesses\": \"My primary concern is the significance of the technical contribution. The analysis of gradient estimators (Lemma 1, 2 and 4) is almost identical to Lemma 1 and 3 from Liu et al. (2018), with the smoothness assumption replaced by the generalized smoothness assumption. Consequently, I do not see substantial technical novelty as claimed after Lemma 3, though it is understandable that leading lemmas might not be novel. 
I also urge the authors to add relevant citations, such as counterparts of lemmas that hold for $L$-smooth or first-order strategies, for better reference.\\n\\nThe writing of the manuscript should be carefully checked, as the current version contains much ambiguity and is error-prone. See the Questions section for details.\", \"reference\": \"Sijia Liu, Bhavya Kailkhura, Pin-Yu Chen, Paishun Ting, Shiyu Chang, and Lisa Amini. Zeroth-order stochastic variance reduction for nonconvex optimization. In Advances in Neural Information Processing Systems 31, 2018.\", \"questions\": \"**Typos and minor suggestions:**\", \"line_21\": \"\\\"decency\\\" -> \\\"dependency\\\", \\\"complexes\\\" -> \\\"complexities\\\".\", \"line_101\": \"\\\"converge analysis\\\" -> \\\"convergence analysis\\\"\", \"line_141\": \"missing relevant citations for $(L_0,L_1)$-smoothness.\", \"line_168\": \"it is not typical to use $f(x)-f^\\\\star \\\\leq \\\\epsilon$ as criterion for $\\\\epsilon$-stationarity, as the objective is non-convex.\\n\\nLine 176 (Section 3.1): missing relevant citations for gradient estimators.\", \"line_181\": \"missing explanation for parameter $\\\\mu$.\", \"line_204\": \"please capitalize \\\"Assumption\\\", \\\"Lemma\\\" and \\\"Theorem\\\".\", \"line_229\": \"see the Weaknesses section, I assume it not difficult to derive these lemmas.\", \"line_238\": \"\\\"$\\\\mathbf{e}\\\\_{l}$\\\" -> \\\"$\\\\mathbf{e}\\\\_{\\\\ell}$\\\".\", \"line_265\": \"\\\"introduce\\\" -> \\\"introduces\\\".\", \"line_266\": \"\\\"proposed\\\" -> \\\"proposes\\\".\\n\\nLine 278 (Table 3): you use $c$ to refer to constant in $\\\\eta_k$ of ZONSPIDER, while in other places you use $c_2$.\", \"line_296\": \"\\\"initialize point $x_0$\\\" -> \\\"initial point $x_0$\\\".\", \"line_302\": \"should the calculation of $v_0$ be different in Option I and II?\\n\\nLine 307 & 308: \\\"defied\\\" -> \\\"defined\\\".\", \"line_307_312\": \"the current Algorithm 1 only considers general expectation 
case.\", \"line_334\": \"\\\"$\\\\Vert v_k-\\\\nabla f(x_k)\\\\Vert$\\\" -> \\\"$\\\\Vert v_k-\\\\hat{\\\\nabla} f(x_k)\\\\Vert$\\\".\", \"line_335\": \"\\\"Sipder\\\" -> \\\"SPIDER\\\".\", \"line_338\": \"is there a missing $\\\\sum_{i=1}^b$?\", \"line_341\": \"\\\"$k=q-1$\\\" -> \\\"$k=\\\\hat{k}+q-1$\\\".\", \"line_343\": \"\\\"$l=\\\\hat{k}$\\\" -> \\\"$k=\\\\hat{k}$\\\", and should the indices in LHS be $\\\\hat{k}+q-1$ rather than $k$?\\n\\nLine 356 (Lemma 7): not consistent with Lemma E.2 w.r.t. power of $d, b$ and constants.\", \"line_375\": \"\\\"itration\\\" -> \\\"iterations\\\".\", \"line_378\": \"\\\"deceacse\\\" -> \\\"decrease\\\".\", \"line_392\": \"\\\"in, since $f(x)$\\\" should be deleted.\", \"line_598\": \"duplicate reference of Ji et al. (2019).\\n\\nLine 1102 (last step of derivation): should be \\\"=\\\" due to arrangement.\", \"line_1111\": \"if this lemma holds for any $b>0$, you should explicitly write it out.\", \"line_1117\": \"you are analyzing the case of finite sum, however, \\\"$\\\\mathbb{E}[\\\\hat{\\\\nabla}f(x_k;\\\\xi)]=\\\\hat{\\\\nabla}f(x_k)$\\\" is for the case of general expectation.\\n\\nLine 1122 (first step of derivation): should be \\\"=\\\" due to expansion of $v_{k+1}$.\\n\\n\\n\\n**Questions:**\\n\\n* I notice that SPIDER has provided ZO variants in the arXiv version (Fang et al., 2018). Should you consider adding it to Table 1 for comparison?\\n* I recommend that the authors restate the main challenges and technical contributions. My general understanding is that the additional sum of gradient norms in Lemma 7 makes it hard to bound $\\\\mathbb{E}[\\\\Vert v_k-\\\\hat{\\\\nabla}f(x_k)\\\\Vert]$ by merely setting the parameters $c_2$ and $\\\\mu$.\\n* It is somewhat surprising to see ZO methods outperform FO methods in the experiment section. In previous work, it is usually observed that ZO methods require more gradient queries than FO methods (Gautam et al., 2024). 
Can you provide a possible explanation for this situation or clarify the differences?\", \"references\": \"Cong Fang, Chris Junchi Li, Zhouchen Lin, Tong Zhang. SPIDER: Near-Optimal Non-Convex Optimization via Stochastic Path Integrated Differential Estimator. ArXiv e-prints, arXiv:1807.01695, 2018.\\n\\nTanmay Gautam, Youngsuk Park, Hao Zhou, Parameswaran Raman, Wooseok Ha. Variance-reduced Zeroth-Order Methods for Fine-Tuning Language Models. In International Conference on Learning Representations, 2024.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": [\"The paper studies zeroth-order methods for stochastic non-convex $(L_0, L_1)$-smooth optimization. The zeroth-order version of SPIDER, namely ZO-normalized-SPIDER, is presented using rand and coord estimators of gradients. New convergence rates are derived for finite-sum and expectation cases for non-convex $(L_0, L_1)$-smooth optimization. In my opinion, the novelty of the paper is limited. There are two ingredients to the proof:\", \"in bounding the gradient estimation (rand, coord) error. This part is simple and done by applying previous results and Assumption 1 directly.\", \"in bounding $\\\\mathbb{E}[\\\\|v_k - \\\\hat{\\\\nabla} f(x, \\\\xi)\\\\|^2]$ in the $(L_0, L_1)$-smooth case, which is already done in Chen et al. 2023.\"], \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"Based on the literature provided in the paper, this is the first result on zeroth-order methods for stochastic $(L_0, L_1)$-smooth nonconvex optimization.\", \"I believe the results are correct with minor errors.\"], \"weaknesses\": [\"The paper is poorly written; there are many typos, and some sentences are confusing and unfinished. 
The manuscript should be polished and revisited.\", \"Assumption A2(3) is a strong assumption that requires the $(L_0, L_1)$-smooth property to be satisfied for every sample.\", \"The paper does not present or compare with the results on convergence of the SPIDER method for $\\\\alpha$-symmetric functions provided in Chen et al. 2023; they also used normalization, and the analysis in their work has the same challenges and tricks used in this paper. ZO-normalized-SPIDER is exactly Algorithm 2 (Spider). In the Technical Novelty paragraph in Chen et al. 2023, the same problem of bounding $\\\\mathbb{E}[\\\\|v_k - \\\\nabla f(x_k, \\\\xi)\\\\|^2]$ is solved.\"], \"please_address_the_following_concerns\": [\"line 048: A key observation where? This sentence is not complete.\", \"lines 194-195 the sentence there is incomplete.\", \"the error bound in Lemmas 1, 4 holds only when $\\\\mu$ is bounded by some constant, perhaps $\\\\frac{1}{L_1}$.\", \"sentence on lines 392-393 is not clear.\"], \"typos\": [\"line 041, these problems\", \"line 042, space between notable.\", \"lines 047-048, critical applications such as LSTM. Perhaps you want to say critical application such as training of LSTM models.\", \"line 098, it should be zeroth-order methods\", \"line 101, it should be convergence instead of converge\", \"lines 123-124,\", \">denotes represent\", \"perhaps one of these is a typo.\", \"lines 1628-1629 dut should be due\", \"line 230, approximate should be approximation\"], \"questions\": [\"I am also confused by the use of rand and coord in the finite-sum case. In Algorithm 1, is it required to use a full batch at each iteration to compute a coord estimator? What is $\\\\xi$ in the finite-sum case in Algorithm 1 (ZO-Normalized-SPIDER)? You didn't define it.\", \"lines 043-044: faster than what?\", \">SGD, variance reduced methods ... 
which have demonstrated faster convergence\", \"line 184: should it be $\\\\frac{d}{\\\\mu}$ instead of $\\\\frac{n}{\\\\mu}$?\", \"line 203: ...the error... the error of what?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"I don't think the paper has any ethics concerns\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your valuable feedback. We will incorporate your suggestions to further improve our paper.\"}", "{\"comment\": \"Thank you for your valuable feedback. We will incorporate your suggestions to further improve our paper.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"metareview\": \"This paper studies zeroth-order nonconvex optimization problems under the generalized $(L_0,L_1)$-smoothness condition. All reviewers identified significant weaknesses in the current paper, leading to a unanimous rejection. First, the novelty is limited\\u2014while the $(L_0,L_1)$-smoothness condition has not been previously studied in the context of zeroth-order optimization, the techniques used are directly taken from existing work on first-order optimization for $(L_0,L_1)$-smooth functions, with key lemmas being identical to prior studies. Additionally, the paper is poorly written, with numerous typos and unfinished sentences, which not only hinder understanding and readability but also raise concerns about technical correctness. Finally, the paper fails to discuss many classical results on zeroth-order methods in the existing literature. I encourage the authors to carefully consider these reviewer comments and rigorously improve the paper in a future version.\", \"additional_comments_on_reviewer_discussion\": \"There is no detailed response submitted to address the reviewers' questions. No review has been changed.\"}" ] }
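For background on the estimators debated in the reviews above, here is a minimal sketch of the randomized ("rand") two-point zeroth-order gradient estimator; the function name, smoothing radius `mu`, and number of sampled directions are illustrative assumptions, not the submission's actual implementation:

```python
import numpy as np

def rand_grad_estimate(f, x, mu=1e-6, num_dirs=1, rng=None):
    """Randomized two-point zeroth-order gradient estimator.

    Averages (d / mu) * (f(x + mu * u) - f(x)) * u over random unit
    directions u. Since E[u u^T] = I / d for u uniform on the sphere,
    the average approximates grad f(x) up to an O(mu) smoothing bias,
    using only function queries and no gradient information.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[0]
    g = np.zeros(d)
    fx = f(x)  # reuse the base query across all directions
    for _ in range(num_dirs):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)  # uniform direction on the unit sphere
        g += (d / mu) * (f(x + mu * u) - fx) * u
    return g / num_dirs
```

For a smooth objective the estimate concentrates around the true gradient as the number of directions grows, while its variance scales roughly with the dimension $d$ — the source of the dimension factors in the complexity bounds discussed in these reviews.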
7rzA6aEASo
No Free Lunch from Random Feature Ensembles
[ "Benjamin Samuel Ruben", "William Lingxiao Tong", "Hamza Tahir Chaudhry", "Cengiz Pehlevan" ]
Given a budget on total model size, one must decide whether to train a single, large neural network or to combine the predictions of many smaller networks. We study this trade-off for ensembles of random-feature ridge regression models. We prove that when a fixed number of trainable parameters are partitioned among $K$ independently trained models, $K=1$ achieves optimal performance, provided the ridge parameter is optimally tuned. We then derive scaling laws which describe how the test risk of an ensemble of regression models decays with its total size. We identify conditions on the kernel and task eigenstructure under which ensembles can achieve near-optimal scaling laws. Training ensembles of deep convolutional neural networks on CIFAR-10 and a transformer architecture on C4, we find that a single large network outperforms any ensemble of networks with the same total number of parameters, provided the weight decay and feature-learning strength are tuned to their optimal values.
[ "Ensemble Learning", "Deep Ensembles", "Kernel Random Features Regression", "Representation Learning" ]
Reject
https://openreview.net/pdf?id=7rzA6aEASo
https://openreview.net/forum?id=7rzA6aEASo
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yR16PDhCIH", "sfMdjVST93", "sbGWKzeNyt", "pF3AfbPNX4", "o6TnO2Zv1j", "n75tFZYvsv", "cMQjZFzj6D", "c5q0LRaeRx", "bGaZTRfrr5", "Wkp6v5qYA0", "TFNcSIux0K", "Sbi6R2GOOa", "Qg5YV9OmEi", "NMGkxRzeND", "Lv8O45Agag", "KI4z0UPw6J", "F84hqVRe2O", "EGQIyetOuz", "E82jNYPPD1", "ChlXmCnlLW", "CLtZycos2S", "6hEAM5mj9F", "1SwQcNu6TH" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_comment", "meta_review", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732165282888, 1732161591725, 1732165106732, 1733177723892, 1730244588326, 1733225357724, 1737523946368, 1732459794500, 1733183997144, 1734737821641, 1730944634077, 1732729434473, 1731400785833, 1732554123482, 1732162530399, 1732161734702, 1733178141687, 1730725701137, 1731427192151, 1732162087154, 1732892512446, 1732657355437, 1732161851410 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8931/Authors" ], [ "ICLR.cc/2025/Conference/Submission8931/Authors" ], [ "ICLR.cc/2025/Conference/Submission8931/Authors" ], [ "ICLR.cc/2025/Conference/Submission8931/Authors" ], [ "ICLR.cc/2025/Conference/Submission8931/Reviewer_6nYe" ], [ "ICLR.cc/2025/Conference/Submission8931/Reviewer_SSHY" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8931/Reviewer_SSHY" ], [ "ICLR.cc/2025/Conference/Submission8931/Reviewer_U2xD" ], [ "ICLR.cc/2025/Conference/Submission8931/Area_Chair_x1Lb" ], [ "ICLR.cc/2025/Conference/Submission8931/Reviewer_U2xD" ], [ "ICLR.cc/2025/Conference/Submission8931/Authors" ], [ "ICLR.cc/2025/Conference/Submission8931/Reviewer_knrL" ], [ "ICLR.cc/2025/Conference/Submission8931/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission8931/Authors" ], [ "ICLR.cc/2025/Conference/Submission8931/Authors" ], [ "ICLR.cc/2025/Conference/Submission8931/Authors" ], [ "ICLR.cc/2025/Conference/Submission8931/Reviewer_SSHY" ], [ "ICLR.cc/2025/Conference/Submission8931/Reviewer_mDQV" ], [ "ICLR.cc/2025/Conference/Submission8931/Authors" ], [ "ICLR.cc/2025/Conference/Submission8931/Reviewer_SSHY" ], [ "ICLR.cc/2025/Conference/Submission8931/Reviewer_6nYe" ], [ "ICLR.cc/2025/Conference/Submission8931/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer Comments (Part 2)\", \"comment\": \"The cifar experiments in this work does not link to the theory, i.e. without neither regression nor random features.\\n> **Response**: Thank you for this comment, which prompts a clarification. First, we note that because RFRR is still a commonly used algorithm, our theoretical contributions to understanding RFRR may be of independent interest. Our theory of RFRR ensembles does link directly to the Binarized CIFAR10 and MNIST RFRR experiments, which are overlaid with theoretical loss curves in figures 1, 2, and 4. \\n\\n>We do not claim that our theoretical results directly explain our empirical findings in deep neural networks (fig. 5). Rather, we view the RFRR model as a toy model from which we might gain intuition that guides our numerical experiments. This perspective has guided many papers (for one example, see [5]). In this case, we find that RFRR ensembles obey the \\\"no free lunch from ensembles\\\" principle when the hyperparameter $\\\\lambda$ is tuned to its optimal value. In section 6, we find that deep ensembles trained with $\\\\mu P$ parameterization also obey the ``no free lunch from ensembles'' principle, when network hyperparameters are tuned to their optimal values. The difference is that the hyperparameters which must be tuned in deep networks are both weight decay and richness $\\\\gamma$. 
Please see our updated section 6, which makes the relationship between theory and deep feature-learning ensembles clearer than in our original submission. We agree that a theory of feature-learning ensembles is an important objective for future work, and have added a note on this to our discussion section.\", \"questions\": \"The CNN experiment on CIFAR10 contains two CNN layers and one linear layer. That is already too small for CIFAR10 tasks. Reducing the size of this small network would further hurt the performance. Does the same phenomena in this paper hold on larger networks? [2] shows the ensemble multiple small networks outperforms a large network.\\n\\n> **Response**: Thank you for this comment and for bringing the results of [2] to our attention. We are currently working on repeating the experiment in fig. 4A using a larger ResNet architecture that performs better on the CIFAR10 classification task. We will hopefully be able to share these before the end of the discussion period. As for the results of [2], they do not appear to be inconsistent with our claims. This is because [2] does not use the $\\\\mu P$ parameterization. Many of their results might be explained by the fact that in standard parameterization, a single wider network will be closer to the lazy learning regime than an ensemble of smaller networks. It is possible (and we suspect) that some of the benefits of ensembling that they report would be reversed if they used $\\\\mu P$ parameterization and optimized network performance over the richness parameter $\\\\gamma$ as well as weight decay in each experiment. We have updated section 6 to elaborate on the importance of disentangling network size and learning dynamics to make a proper comparison between networks of varying width.\\n\\n[2] Zhang, J., & Bottou, L. (2023, July). Learning useful representations for shifting tasks and distributions. In International Conference on Machine Learning (pp. 40830-40850). 
PMLR.\\n\\n[5] Preetum Nakkiran, Prayaag Venkat, Sham Kakade, and Tengyu Ma. Optimal regularization can mitigate double descent. arXiv preprint arXiv:2003.01897, 2020.\"}", "{\"title\": \"Updates to Rebuttal Version\", \"comment\": [\"We thank all the reviewers for their time and valuable feedback. Since our initial submission, we have made a number of updates to our manuscript which we will summarize here. We will also respond individually to each reviewer's comments.\", \"We have updated our discussion of the RFRR risk formula to include a reference to [1], which provides a rigorous backing for the risk estimate we use. We have also updated our terminology to refer to \\\"Random Feature Ridge Regression (RFRR)\\\" uniformly throughout the paper.\", \"We have added supplemental figures S3 and S6 confirming that the monotonic decreases in error described in theorems 1 and 2 for the MSE loss also hold for the binarized MNIST and CIFAR10 RFRR classification tasks at the level of the 0-1 test loss under majority vote and score-averaging.\", \"We have updated the methods we use to determine the \\\"source\\\" and \\\"capacity\\\" exponents for the binarized MNIST and CIFAR10 NNGP kernel regression tasks to the more robust methods described in [2], leading to a better fit between theory and experiment in fig. 4B.\", \"We have updated section 3.3 to elaborate on potentially complicating effects of ensembling which make Theorem 1 an interesting result, distinct from the $K=1$ case proven in [2].\", \"We have added a corollary to section 4 which combines theorems 1 and 2 to guarantee that an ensemble of $K$ RFRR models with $N$ features each can only outperform a single RFRR model with $M$ features at optimal ridge if $NK>M$. We confirm this for tasks with power-law structure and the binarized CIFAR10 RFRR task in a new supplemental figure S2. 
This bound is tight in the overparameterized regime where $N \\\\gg P$.\", \"We have updated the text of section 6 (\\\"No Free Lunch from Feature-Learning Ensembles\\\") to emphasize the importance of using $\\\\mu P$ parameterization, to clarify the relationship between our theory and deep ensembles, and to simplify the discussion of our results. In particular, we propose three conditions under which the \\\"no free lunch from ensembles\\\" principle holds in networks trained with $\\\\mu P$ parameterization:\", \"In the lazy training regime ($\\\\gamma \\\\to 0$) when the weight decay is tuned to its optimal value.\", \"When weight decay and richness $\\\\gamma$ are jointly tuned to their optimal values.\", \"When richness $\\\\gamma$ is fixed and training is performed *online* (i.e. without repeating data).\", \"We have updated Figure 5.A to sweep over a larger range of richness values $\\\\gamma$, confirming that accuracy is monotonically decreasing with $K$ when weight decay and richness are jointly tuned to their optimal values.\", \"We have added experiments analogous to figure 5.A for CIFAR-10 classification but using the ResNet18 architecture. We find that for this larger architecture, the \\\"no free lunch from ensembles\\\" principle still holds.\", \"[1] Leonardo Defilippis, Bruno Loureiro, and Theodor Misiakiewicz. Dimension-free deterministic equivalents and scaling laws for random feature regression, 2024. URL https://arxiv. org/abs/2405.15699.\", \"[2] James B. Simon, Dhruva Karkada, Nikhil Ghosh, and Mikhail Belkin. More is better in modern machine learning: when infinite overparameterization is optimal and overfitting is obligatory. CoRR, abs/2311.14646, 2023. doi: 10.48550/ARXIV.2311.14646. URL https://doi.org/ 10.48550/arXiv.2311.14646.\"]}", "{\"title\": \"Response to Reviewer Comments (Part 1)\", \"comment\": \"Thank you for your review. We will respond in two parts due to character limits.\\n\\nIn this paper, Text, Theory and Experiments ... 
number of features.\"\\n> **Response**: Thank you for this comment, which we agree with. We have now clarified in the main text that we consider constraints on total parameter count. The performance of a single large model and ensembles of smaller models is usually compared with reference to the total parameter count, including in [2]. Total parameter count determines the memory required to store a learned model, which is a significant constraint in practice.\\n\\nThe theory discusses ... not worse than \\\"M/K regressors\\\"\\n\\n> **Response**: We thank the reviewer for calling this result to our attention. However, we believe our result is stronger.\\n\\n> Before giving our reasons for why our result is stronger, we note that we are also aware of results showing equivalence between a single ridge-regularized regressor with access to the full inputs and an infinite ensemble of unregularized, subsampling or \\\"sketched\\\" regressors [3, 4]. An analogous (and true!) statement in the setting we describe would read that an infinite ensemble of random-feature ridge regression models with zero ridge is equivalent to the limiting (i.e. infinite-feature) kernel ridge regression model. It would follow that even infinitely large ensembles of random feature models could never outperform the limiting kernel regression model with optimal ridge. This corresponds to the limit $M, K \\\\to \\\\infty$ in Theorem 2. \\n\\n> However, the result we present is stronger because it applies at finite $M$. Rather than comparing a single regressor with access to the full (infinite) set of kernel features to an ensemble of subsampling regressors, we compare a regressor with access to a single large random projection of the kernel features to an ensemble of regressors with access to smaller random projections of the kernel features. Because of this distinction, we find that ensembles with $K > M/N$ can actually *outperform* a single regressor of size $M$. 
To demonstrate this, we plot error as a function of $K$ for ensembles of RFRR models with $N$ features each in a new supplemental fig. S2. We see that an ensemble of RFRR models with $N$ features each can outperform a single RFRR model of size $M>N$ only if $K>K^*$. Theorems 1 and 2 together guarantee that the value $K^*$ required to achieve better performance with the ensemble is at least as large as $M/N$. Similarly, if we fix $K$ and allow $N$ to grow, the value $N^*$ above which the ensemble outperforms the single large model of size $M$ is at least $M/K$. This bound appears to be tight in the overparameterized regime where $P \\\\ll N$ (fig. S2.C, F). We state this result explicitly as the new \\\"Corollary 1\\\" in the updated version of the manuscript.\\n\\n> One might hope to apply the result from section 9.1 of [1] or from [4] to prove our Theorem 2 by formally regarding the projection of $M$ random features as the full data, with correspondingly shifted ground-truth weights \\n$\\\\tilde{\\\\bar{\\\\mathbf{w}}} = \\\\text{argmin}\\\\_\\\\{\\\\mathbf{w}\\\\} \\\\left[ \\\\mathbb{E}\\\\_\\\\{\\\\mathbf{x} \\\\sim \\\\mu\\\\_\\\\{\\\\mathbf{x}\\\\}\\\\} (f_*(\\\\mathbf{x}) - \\\\mathbf{w}^\\\\top \\\\mathbf{Z} \\\\mathbf{\\\\theta}(\\\\mathbf{x}))^2 \\\\right]$ \\nand shifted label noise \\n$\\\\tilde{\\\\sigma}\\\\_\\\\epsilon^2 = \\\\sigma\\\\_\\\\epsilon^2 + \\\\mathbb{E}\\\\_\\\\{\\\\mathbf{x} \\\\sim \\\\mu\\\\_\\\\{\\\\mathbf{x}\\\\}\\\\} \\\\left[ \\\\left( f\\\\_\\\\ast(\\\\mathbf{x}) - \\\\tilde{\\\\bar{\\\\mathbf{w}}}^\\\\top \\\\mathbf{Z} \\\\mathbf{\\\\theta}(\\\\mathbf{x}) \\\\right)^2 \\\\right]$\\nWith $NK = M$, the ensemble of $K$ RFRR models of size $N$ could formally be regarded as sampling mutually exclusive subsets of $N$ features from the pool of $M$. 
However, this requires the regressors to sample non-overlapping subsets of the features, which is not accommodated by the random Bernoulli masks in [1, 3] or the uncorrelated sketching matrices in [4]. For these reasons, we believe that our Theorem 2 is not a trivial result.\\n\\n[1] Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., & Salakhutdinov, R. (2014). Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research, 15(1), 1929-1958.\\n\\n[2] Zhang, J., & Bottou, L. (2023, July). Learning useful representations for shifting tasks and distributions. In International Conference on Machine Learning (pp. 40830-40850). PMLR.\\n\\n[3] Daniel LeJeune, Hamid Javadi, and Richard Baraniuk. The implicit regularization of ordinary least squares ensembles. Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:3525-3535, 2020.\\n\\n[4] Patil, Pratik, and Daniel LeJeune. \\\"Asymptotically free sketched ridge ensembles: Risks, cross-validation, and tuning.\\\" arXiv preprint arXiv:2310.04357 (2024). Available at: https://arxiv.org/abs/2310.04357.\"}", "{\"comment\": \"Thank you for clarifying your objections. We would be happy to shift the narrative focus of our paper to center random-feature models as well as older ensemble-based classification algorithms (e.g., random forests) as the primary motivations for our theoretical results. Importantly, feature-bagging (where each ensemble member is trained on a subset of available features) is a well-established practice for ensemble learning. We believe it will surprise a great many readers that this is provably detrimental to RFRR, especially in the context of Theorem 2 where subsampling (reducing $N$) has the added benefit of increasing ensemble size $K$. 
We cannot, however, provide a sound argument for why these results should be \"surprising\" *for RFRR models at optimal ridge* because, at the end of the day, they are true in this context.\\n\\nEven though our theoretical results do not explain the behavior of feature-learning ensembles, we believe that they are an important step toward understanding deep ensembles because of the correspondence between RFRR and deep networks in the lazy regime. In particular, our results allow us to pinpoint any violations of the \"more is better\" or \"no free lunch from ensembles\" principles in deep networks at optimal weight decay as a consequence of feature learning specifically. We will update the main text to clarify this motivation. Furthermore, there is a long history of research on random-feature models in analogy to deep networks. Another example of this fruitful correspondence is work on fine-grained bias-variance decompositions [1]. Could your criticism that we are asking questions motivated by observations made in deep networks in the context of RFRR not just as easily be applied to Simon et al. (2023)?\\n\\nWe have also shown that if any of the assumptions of our Theorem 2 are broken, the result no longer holds:\\n- If the L2 regularization is not tuned to its optimal value, then the ensemble of $K$ models of size $N$ might outperform a single model of size $M=NK$ (see non-monotonic curves in figure 2A).\\n- If $K>M/N$, even by a tiny amount, it is possible for an ensemble of $K$ models of size $N$ to outperform a single model of size $M$ even at optimal ridge (see the newly added figure S2, panels C, F). We are not aware of any previous studies which have pointed this out. 
To drive this point home, we will add to the final version the asymptotic expansion of the risk $E_g^K$ at large $N$ and small ridge $\\lambda$: $ E_g^K = -\\frac{P {\\kappa_2^*}^2 \\operatorname{tf}_1'(\\kappa_2^*)}{P - \\operatorname{Df}_2(\\kappa_2^*)} + \\lambda F(\\kappa_2^*, P) + \\frac{P \\kappa_2^* \\operatorname{tf}_1(\\kappa_2^*)}{KN} + O (\\lambda^2, \\lambda/N, 1/N^2)$, where $\\operatorname{Df}_1(\\kappa_2^*) = P$. Here, you can see that at leading order, the error of an overparameterized ensemble depends on ensemble size $K$ and model size $N$ only through the total number of features $KN$.\\n\\nWe believe that this makes our result non-trivial, in that it could not possibly be made any stronger -- any relaxation of the assumptions would result in the theorem breaking.\\n\\n[1] Ben Adlam and Jeffrey Pennington. Understanding double descent requires a fine-grained bias-variance decomposition, 2020. URL https://arxiv.org/abs/2011.03321.\"}", "{\"summary\": \"This paper analyzes an ensemble of K ridge regressions on random features of size M/K versus one ridge regression on random features of size M, and concludes that the latter is always optimal under the optimal L2 regularization. This work also uses some toy experiments (e.g. two- to three-layer CNNs) to compare an ensemble of K small models and one large model.\", \"soundness\": \"1\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"The writing is clear. The paper is easy to read.\"], \"weaknesses\": \"- In this paper, Text, Theory and Experiments are not consistent.\\nIn text, this paper assumes a fixed \\\"computational budget\\\" (line 256), which is reasonable, then links this assumption to \\\"a fixed total number of features\\\" (line 257). However, a fixed computational budget does not necessarily lead to a fixed total number of features. \\n\\nThe theory discusses the ensemble case of regression on random features. 
It concludes that one ridge regression on random features of size M is always optimal under the **optimal L2 regularization**, compared with the ensemble of K ridge regressions on random features of size M/K. However, that is trivial. Section 9.1 in [1] shows that dropout on regression equals an ensemble of an exponential number of small regressors, which equals L2 regularization. Since **an ensemble of an exponential number of small regressors** is not worse than **M/K regressors** (same as Eq. 16 in this work), an \"optimal L2 regularization\" is of course not worse than \"M/K regressors\".\\n\\nThe CIFAR experiments in this work do not link to the theory, i.e. they involve neither regression nor random features. \\n\\n\\n\\n[1] Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., & Salakhutdinov, R. (2014). Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research, 15(1), 1929-1958.\", \"typos\": \"line 141 f(x)\", \"questions\": \"- The CNN experiment on CIFAR10 contains two CNN layers and one linear layer. That is already too small for CIFAR10 tasks. Reducing the size of this small network would further hurt the performance. Do the same phenomena in this paper hold on larger networks? [2] shows that an ensemble of multiple small networks outperforms a large network.\\n\\n\\n[2] Zhang, J., & Bottou, L. (2023, July). Learning useful representations for shifting tasks and distributions. In International Conference on Machine Learning (pp. 40830-40850). PMLR.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the responses. Regarding the authors\\u2019 question whether my criticism could not as easily be applied to Simon et al. 
(2023): It is not contested that studying random feature models can provide valuable insights into feature-learning models (and vice versa), nor that results in one area might motivate research questions in the other (and vice versa). In that regard and in the context of overparameterised models, Simon et al. (2023) provide relevant and rigorous contributions for random feature models. However, as these results are already established, my main objection in the initial review and follow-up comments is how much the ensemble perspective adds to the existing literature, specifically Simon et al. (2023), and whether a meaningful gap was addressed. Thus, my justification request in previous comments was targeted at the explicit context of the theorems, specifically RF-KRR (and not to provide \"counterexamples\" for theorem 2). It is not immediately apparent to me how the suggested (future) adjustments in motivation from random forests fit into the general context of the paper (deep learning and RF-KRR).\\n\\nFor these reasons, I stay with my initial assessment. In my opinion, a major revision of the submission addressing the highlighted aspects would sharpen the contributions and provide more conclusive insights. However, if the other reviewers feel strongly about the contributions of this paper, I will not object.\"}
However, my initial reservations about some of the main aspects at the beginning of the paper persist, partly because I feel that my previous objections were not directly addressed:\\n\\n__Regarding the response to W1 (Novelty of Theorem 1) and question Q1__: In my opinion, some considerations and intuition from the feature-learning setting are mixed with the random feature setting. Similar points were raised by reviewers __U2xD__ and __6nYe__ to some extent. In particular, the objections and questions of weakness W1 and Q1 were explicitly stated in the context of random feature kernel ridge regression. However, the provided answer relies on insights from feature-learning models, like the decreased performance of \\u201cwider\\u201d deep feature-learning ensembles. While I agree that the response is sensible when talking about models that learn features, I disagree that this naturally extends to random feature models. In this regard, I struggle to see the justification of the hypothesis that \\u201c_[o]ne might therefore expect __reducing__ the size of each ensemble member to improve ensemble performance by increasing the diversity of the ensemble's predictors_\\\" when features are extracted at random for each ensemble member and the KRR is __regularised with the optimal ridge__. In this explicit context of RFRR, could the authors provide some examples/evidence that justifies this hypothesis and shows a gap to the results by Simon _et al._ (2023)?\\n\\n__Regarding the response to W2 (Significance of Theorem 2) and question Q2__: My (broad) objection here aligns with the more detailed comment provided by reviewer __6nYe__. Similar to what I wrote above, I believe the implicit hypothesis leading to theorem 2 requires more evident justification. 
The authors state that they \\u201c_[\\u2026] disagree, however, that we should necessarily expect error to increase as a result of this decrease in $N$ because $K$ is also increasing, which is beneficial to the error._\\u201d In the explicit context of RFRR regularised with optimal ridges and the premise of the submission ($N\\\\times K = M$ with fixed $M$), I am curious to understand the motivating results/evidence that leads to the disagreement and justifies the underlying hypothesis addressed by theorem 2? In essence, this is what I was trying to ask with question Q2. The provided answer where one deviates from the \\u201cfixed-parameter-count\\u201d premise was not contested.\\n\\nAlthough I agree that there is some extension of the established results and theorems, I still consider this part of the paper (presented as a main contribution) of limited novelty and significance for the above reasons.\"}", "{\"comment\": \"I have followed the discussion that the author and reviewers have been having. I still maintain my position that this work is a valuable contribution, despite its limitations wrt feature learning models. I believe that explaining feature learning models would be outside the scope of this work. I will be maintaining my score at 6.\"}", "{\"metareview\": \"The submission investigates whether ensembles can outperform single models when constrained to have the same total parameter count. This is undertaken from a theoretical point of view in the context of random feature ridge regression, and in the empirical context via neural networks. The results in both setting suggest that, when regularisation hyperparameters are optimal, the ensemble cannot outperform the single model. The reviewers generally agreed that the paper is easy to understand, even though the subject matter is quite complex. There were no concerns about the correctness of the theoretical contributions or the experimental setup. 
The main downsides that impacted my decision are the lack of theoretical novelty over several existing works, as pointed out by reviewer SSHY, and the mismatch between the experiments and theoretical analysis, as pointed out by reviewer 6nYe.\", \"additional_comments_on_reviewer_discussion\": \"There was quite substantial discussion between the authors and the two reviewers who gave negative scores. This primarily centred around the originality of the results and the connection of the theoretical analysis to the empirical investigation. Reviewer SSHY argued fairly convincingly that the theoretical results do not provide much additional insight over the results given in the Simon et al. (2023) and Bordelon et al. (2024) papers cited in the submission. This, coupled with the observation of 6nYe that the theory does not align so well with the experiments, are the main factors in my decision.\"}", "{\"summary\": [\"This paper studies the tradeoff between ensemble size and total feature/parameter count for random feature ensembles\", \"In particular, they derive the following theorems (paraphrased informally):\", \"Theorem 1 - larger ensembles -> better performance (this contribution is minor)\", \"Theorem 2 - Given a fixed feature count, it is better to have a single large model than multiple smaller ones (this is the main contribution of the paper)\", \"Both theorems are verified experimentally on binarized CIFAR-10 datasets\", \"The paper then uses Thm 2 to derive scaling laws, assuming a standard model of the decay of the kernel's eigenspectrum. They identify differing behaviour for \\\"simple\\\" and \\\"difficult\\\" datasets, and verify this experimentally\", \"Finally, the paper studies the performance of non-fixed feature classifiers using ensembles of CNNs trained on CIFAR-10. 
They control the feature learning by controlling the richness parameter\"], \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"Presentation is clear and easy to follow\", \"Claims are verified experimentally and there is good agreement with the author's claims\", \"Section 5 (the section on scaling laws) I find particularly interesting as it shows how the fairly uninspiring theorems in the first two sections can be used to derive more non-trivial behaviours, such as the distinction between \\\"easy\\\" and \\\"difficult\\\" dataset scaling laws\", \"The use of the richness parameter to control the adherence to the assumptions in the theorems in section 6 is quite interesting, but I am unsure how applicable this work is necessarily to modern models (see weaknesses)\"], \"weaknesses\": [\"I think the authors should be very clear that the theorems do not directly \\\"explain\\\" the empirical results in section 6, except in the case of very small richness parameters, as their theorems do not cover feature learning. I think it would also be useful for the authors to discuss this limitation in the concluding discussion\"], \"questions\": [\"Is it possible to plot the validation loss in figure 5b as opposed to the test loss?\", \"Could the authors discuss how, for certain values of $\\\\gamma$ in figure 5a, it seems that increasing the ensemble count improves performance, namely for larger values of $\\\\gamma$? Is it possible that the behaviour of models outside of the lazy regime does not follow the same \\\"no free lunch\\\" theorem?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for considering our rebuttal. 
First, to your concern about our CNN experiments, we remind you that we have repeated these experiments in ResNet18 ensembles with about 30 million parameters, finding similar results to the smaller CNN ensembles. Please see our revised Figure 5, panel b. We will also try one more time to explain why we think our results are nontrivial and interesting.\\n\\nWe have provided a sound argument for why our main result is not trivial in our last rebuttal. In particular, we have shown that if any of the assumptions of our Theorem 2 are broken, the result no longer holds:\\n- If the L2 regularization is not tuned to its optimal value, then the ensemble of $K$ models of size $N$ might outperform a single model of size $M=NK$ (see non-monotonic curves in figure 2A).\\n- If $K>M/N$, even by a tiny amount, it is possible for an ensemble of $K$ models of size $N$ to outperform a single model of size $M$ even at optimal ridge (see the newly added figure S2, panels C, F). The theorem would certainly not hold for an exponential number of small regressors.\\n\\nWe believe that this makes our result non-trivial, in that it could not possibly be made any stronger -- any relaxation of the assumptions would result in the theorem breaking. Whether or not this result is surprising is a subjective question. We see scientific merit in proving this statement, as intuitions often turn out to be wrong.\\n\\nFinally, our theory **does** directly explain our experiments in ReLU random-feature ensembles (figures 1, 2, 3, 4), as well as our experiments in deep networks in the lazy regime (small $\\\\gamma_0$ in figure 5). The only part of our experiments not covered by the theory is the rich-regime learning curves for deep network ensembles. Including these, we believe, strengthens our paper by showing how feature-learning effects breaking the assumptions of our theorems may complicate the behavior of ensembles. 
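As a concrete reference for the setting these claims concern, here is a minimal sketch of a random-feature ridge-regression ensemble (illustrative only, not the authors' code: the Gaussian weight draw, the ReLU nonlinearity, and all names are assumptions standing in for the paper's setup). It builds K members on N = M // K independently drawn random features each and averages their ridge predictions.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu_features(X, V):
    # Random ReLU features psi(x) = max(0, V x) for projection matrix V.
    return np.maximum(X @ V.T, 0.0)

def fit_ridge(Phi, y, lam):
    # Closed-form ridge regression weights on feature matrix Phi.
    n_feat = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(n_feat), Phi.T @ y)

def rf_ensemble_predict(X, y, X_test, M, K, lam):
    # Split a fixed budget of M random features into K members of size
    # N = M // K; each member draws its own features independently, and
    # the ensemble prediction is the mean of the members' predictions.
    d, N = X.shape[1], M // K
    preds = []
    for _ in range(K):
        V = rng.normal(size=(N, d)) / np.sqrt(d)
        w = fit_ridge(relu_features(X, V), y, lam)
        preds.append(relu_features(X_test, V) @ w)
    return np.mean(preds, axis=0)

# Toy regression task standing in for the binarized-CIFAR10 targets.
d, P = 20, 200
X, X_test = rng.normal(size=(P, d)), rng.normal(size=(50, d))
beta = rng.normal(size=d)
y = X @ beta

one_big = rf_ensemble_predict(X, y, X_test, M=256, K=1, lam=1e-2)
many_small = rf_ensemble_predict(X, y, X_test, M=256, K=8, lam=1e-2)
```

With a single shared λ, nothing in this construction prevents the K=8 ensemble from beating the K=1 model on a given draw; the monotonicity claimed above concerns the expected risk after λ is tuned separately for each configuration.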
Our empirical observations in feature-learning ensembles are much more compelling against the backdrop of our analytical results for lazy networks than they would be on their own. The theory allows us to pinpoint any violations of the \\\"more is better\\\" or \\\"no free lunch from ensembles\\\" behaviors in deep ensembles as a pathology of feature learning.\"}", "{\"summary\": \"The present study aims to investigate the utility of training multiple shallower networks vs that of a larger network. Theory shows that a single large network is optimal, and experiments validate this approach.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper gives extensive theory and experimental proof that the theory works.\\n\\nWide variety of experiments. \\n\\nScaling laws are given for this class of models, which is always useful.\", \"weaknesses\": \"I'd like to see experiments run on ImageNet validating these results there.\\n\\nNo results on conventional neural network architectures.\", \"questions\": \"Would it be possible to train random feature ResNet18s or VGGs?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Regarding the response to W1 (Novelty of Theorem 1) and question Q1: In my opinion, some considerations... shows a gap to the results by Simon et al. (2023)?\\n\\n> **Response** Thank you for considering our responses; we appreciate the opportunity to clarify the motivation for and significance of our results. As you have correctly understood, our Theorem 1 reduces to the result of Simon et al. in the special case $K=1$. The \\\"bigger is better\\\" result for an ensemble does not, however, follow directly from the $K=1$ case. Proving \\\"bigger is better\\\" for RFRR *ensembles* at optimal ridge is a novel contribution. 
Whether or not this result is surprising is a subjective question -- either way, there is scientific merit in proving it, as intuitions often do not hold true when put to the test.\\n\\n> Your review asked us to motivate Theorem 1 as a surprising result. We have cited feature-learning ensembles as a class of models in which \\\"wider is better\\\" for $K=1$ does not extend to \\\"wider is better\\\" for ensembles (see figure 8 from [1]). Of course, this does not naturally extend to RFRR ensembles at optimal ridge -- if it did, theorem 1 would not be true! Nevertheless, the example of feature-learning ensembles demonstrates that even if \\\"wider is better\\\" holds for a single model, it doesn't necessarily hold for an ensemble of many models of the same type. This furthers our argument that Theorem 1 is a novel result, which does not follow directly from the results of Simon et al. (2023).\\n\\n> We also want to emphasize that theorem 1 is not our main result. We agree that this result may not surprise those familiar with the results of Simon et al. (2023), even if it was not stated or proven there. Theorem 2 and Corollary 1, however, are much more compelling results, and the focus of all figures following figure 1. We have included theorem 1 and figure 1 because it is necessary for completeness, because it is necessary to corollary 1, and because Simon et al. (2023) did not address the distinct case of ensembles.\\n\\nRegarding the response to W2 (Significance of Theorem 2) and question Q2: My (broad) objection here aligns with ... answer where one deviates from the \\u201cfixed-parameter-count\\u201d premise was not contested.\\n\\n> **Response** Plenty of papers have claimed that performance gains can be obtained by dividing the number of parameters in one neural network into an ensemble of smaller networks. 
Our hypothesis is that ensembling is not as useful as simply scaling up the size of a single predictor when network hyperparameters are tuned to their optimal values. In RFRR, there is only one hyperparameter to be tuned -- the ridge. In line with our hypothesis, we find that when this hyperparameter is tuned to its optimal value, the optimal strategy is to combine all parameters into a single large model. In feature-learning networks, there are more hyperparameters to be tuned, but again in line with our hypothesis we find that at optimal weight decay and richness the \\\"no free lunch from ensembles\\\" principle holds. We understand that our theory does not directly explain the behavior of feature-learning ensembles, and have updated our manuscript to make this clear. However, we believe that these results should be presented side by side because of their similar flavor, and because of the correspondence between random-feature models and deep ensembles in the lazy training regime.\\n\\n> Furthermore, we believe Theorem 2 to be a compelling result because it represents a fundamentally new type of monotonicity result that applies to the allocation of a fixed set of resources rather than the amount of available resources for a learning problem. The \\\"more is better\\\" results from Simon et al. and our Theorem 1 compare networks or ensembles with a different total number of random features (or samples). It's not a surprise that when optimal regularization is used to mitigate over-fitting effects, having more resources is beneficial to generalization. Our theorem 2 instead informs the way that a fixed budget on resources is *used*. Saying that a fixed number of random features is better when used together in a single model than divided into an ensemble is a novel type of statement relative to saying that more random features *total* is better than fewer random features *total*. 
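The fixed-budget, ridge-optimized comparison described here can be sketched as a hypothetical oracle protocol (not the authors' code; the grid, data, and names are illustrative): for each ensemble size K at a fixed feature budget M, sweep the ridge over a grid and record the best achievable test error.

```python
import numpy as np

rng = np.random.default_rng(1)

def ensemble_test_mse(K, M, lam, X, y, X_test, y_test):
    # Test MSE of the average of K ridge predictors, each built on
    # N = M // K freshly drawn random ReLU features.
    d, N = X.shape[1], M // K
    pred = np.zeros(len(X_test))
    for _ in range(K):
        V = rng.normal(size=(N, d)) / np.sqrt(d)
        Phi = np.maximum(X @ V.T, 0.0)
        Phi_test = np.maximum(X_test @ V.T, 0.0)
        w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(N), Phi.T @ y)
        pred += Phi_test @ w / K
    return float(np.mean((pred - y_test) ** 2))

# Toy data; the point is the protocol, not the particular task.
d, P = 15, 300
X, X_test = rng.normal(size=(P, d)), rng.normal(size=(100, d))
beta = rng.normal(size=d)
y = X @ beta + 0.1 * rng.normal(size=P)
y_test = X_test @ beta

# The comparison the theorem concerns: tune the ridge separately for
# each ensemble size K, holding the total feature budget M fixed.
lams = np.logspace(-4, 2, 13)
best = {K: min(ensemble_test_mse(K, 240, lam, X, y, X_test, y_test)
               for lam in lams)
        for K in (1, 4, 16)}
```

Theorem 2 predicts that the K=1 entry of `best` is lowest in expectation over the random feature draws; any individual draw may deviate, so no ordering is hard-coded here.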
In fact, this type of comparison is only possible in the context of ensembles of predictors. We hope this clarifies the novelty of our contribution, and justifies our focus on Theorem 2 as the main result of our paper.\"}", "{\"title\": \"Response to reviewer comments.\", \"comment\": \"Weaknesses:\\n\\nW1. Novelty of Theorem 1 \\u201cMore is better for RF Ensembles\\u201d ... see also point W2.1.\\n\\n> **Response**: Thank you for this comment. While we agree that this result is in line with results from Simon et al. (2023) and Bordelon et al. (2024a) for single models, we believe that it might be surprising in the context of ensemble learning, where predictive variance has historically been viewed as beneficial. In random forests, for example, subsampling of data dimensions leads to improved performance even though it reduces the sizes of individual decision trees. In RFRR ensembles, each ensemble member is distinguished by the particular realization of its random features (i.e. the independently drawn $\\\\mathbf{v}\\\\_n\\\\^k \\\\sim \\\\mu\\\\_\\\\{\\\\mathbf{v}\\\\}$). As $N \\\\to \\\\infty$, the function learned by each estimator will converge to the same limiting kernel predictor, destroying the diversity of the ensemble. One might therefore expect *reducing* the size $N$ of each ensemble member to improve ensemble performance by increasing the diversity of the ensemble's predictors. Other empirical findings have shown that deep feature-learning ensembles do not always conform to the ``wider is better'' intuition observed for single models (see fig. 8 from [1]). We have updated the text of section 3.3 to discuss these points.\\n\\nW2. Significance ... section 4:\\nW2.1. A main... would be required.\\n\\nW2.2. In addition to ... premise above.\\n\\n> **Response** Thank you for your comments, which prompt an important clarification of the difference between figures 1 and 2. 
As you have correctly understood, in figure 1 we compare ensembles across widths $N$ while keeping ensemble size $K$ fixed. We have made a slight alteration to our numerics to make this clearer: the x-axes of figure 1B now correspond to the width $N$ of the ensemble members. In this case, the total parameter count $M = KN$ grows with $N$. In line with our expectations from Simon et al. (2023) and Bordelon et al. (2024a), we see that the optimal risk decreases with $N$. However, we argue that in the case of ensembles of predictors, this does not follow directly from their results for single models, and might be surprising given the common practice of feature-bagging (see our response to W1). In figure 2 we compare the ridge-optimized performance of RFRR models with fixed total parameter count $M=KN$ as the ensemble size $K$ is varied. You have correctly understood then that as $K$ increases, the size $N$ of each ensemble member decreases ($N = M/K$). We disagree, however, that we should necessarily expect error to increase as a result of this decrease in $N$ because $K$ is also increasing, which is beneficial to the error. Put plainly, the decrease in $N$ and increase in $K$ are competing effects. If variance is the main contribution to the error, one might expect to see a benefit to reducing model size if that means you can average over a larger number of predictors. Theorem 2 guarantees that this is never the case in RFRR, provided that the total parameter count is fixed and the ridge is optimized. An ensemble of smaller models might do better than a single larger model, however, when the total parameter count of the ensemble is larger than the number of parameters in the single larger model. We have formalized this fact in the corollary added to section 4 and the surrounding discussion. Please also see the newly added figure S2.\", \"minor_remarks\": \"\\u201cRF KRR\\u201d ... 
dashed lines.\\n\\n> **Response**: Thank you for pointing out these errors, which have been corrected in the new version of the manuscript. We have replaced \\\"eq's\\\" with \\\"equations.\\\" The text of section 2 has been updated to clarify the relationship between $\\\\mathbf{\\\\psi}\\\\(\\\\mathbf{x}\\\\)$ and $g\\\\(\\\\mathbf{v}\\\\_n, \\\\mathbf{x}\\\\)$.\", \"questions\": \"Q1. Is there any evidence ... RF regression?\\n\\n> **Response**: We refer to our response to W1 above.\\n\\nQ2. In connection to Theorem 2 i... stronger learners?\\n\\n> **Response**: We refer to our response to W2 above. In particular, we reiterate that a larger ensemble of weak learners *can* outperform a small ensemble of stronger learners. This is only possible, however, when the total parameter count of the larger ensemble of weak learners is greater than the total parameter count of the smaller ensemble of strong learners.\\n\\n\\n[1] Nikhil Vyas, Alexander Atanasov, Blake Bordelon, Depen Morwani, Sabarish Sainathan, and Cengiz Pehlevan. Feature-learning networks are consistent across widths at realistic scales, 2023. URL https://arxiv.org/abs/2305.18411.\\n\\n[2] Leonardo Defilippis, Bruno Loureiro, and Theodor Misiakiewicz. Dimension-free deterministic equivalents and scaling laws for random feature regression, 2024. URL https://arxiv.org/abs/2405.15699.\"}", "{\"title\": \"Response to reviewer comments\", \"comment\": \"Weaknesses:\\nIn figure 5a, one of the highest performing richness parameters (although admittedly not the highest) does not show monotonic decrease with ensemble size. Given this does not fit with your other analysis, it would be useful to comment on it in the paper.\\n\\n> **Response**: Thank you for this comment, which we agree with. We have updated our discussion of the results of our deep ensemble simulations to discuss these exceptions to the \\\"no free lunch\\\" principle. 
We observe that monotonicity holds when error is jointly optimized over weight decay and richness. Please see the updated section 6.\", \"minor_points\": \"Both CIFAR10/MNIST and a CIFAR10/MNIST-derived binary classification task are included in the paper. What is the paper's convention for differentiating between the binarized version and the original version? In places the paper appears to call the binary classification task \\u201ca CIFAR classification task\\u201d or \\u201cBinarized CIFAR10\\u201d but in others it then refers to this binary task just as CIFAR (I think this happens for example in line 370). Could the binary tasks have a name (for which CIFAR10 is a prefix perhaps) and then be referred to by that name to improve readability?\\n\\n> Thank you for catching this discrepancy. To clarify, all random-feature ridge regression experiments are performed on the binarized CIFAR10 and MNIST tasks. The deep ensemble experiment is on the standard CIFAR10 task. We do not do MNIST classification with the original labels in any of our experiments. We have updated the manuscript so that the binarized CIFAR10 and MNIST tasks are always referred to as such.\\n\\nThe same notation is used for the learning rate and the task eigenstructure, and for the richness and the data-size-scaled degrees-of-freedom parameters. Giving separate notation would be preferable.\\n\\n118 \\u2013 \\u201c(citations)\\u201d should instead be the actual citation\\n\\n228 \\u2013 Figure 1: Are the non-red lines supposed to be dashed in this plot? Otherwise, does the legend contain a dashed line? 
Since none appear in the actual plot.\\n\\n340 \\u2013 Figure 3 \\u2013 could you draw the line for l* (= 1/(1+ alpha*(2*r \\u2013 1))) in red on the plots for figure 3 A?\\n\\n397 \\u2013 \\u201ca\\u201d is singular, but \\u201ctasks\\u201d is plural, should remove \\u201ca\\u201d?\\n\\n452 \\u2013 \\u201cthat at the\\u201d -> \\u201cthat the\\u201d\\n\\n> **Response**: Thank you for pointing out these typos and the error in the legend of figure 1. We have fixed them in our updated manuscript. We appreciate your careful reading! We plan to add the lines for $\\\\ell^*$ to the final version of figure 3A.\", \"questions\": \"In Figure 2c, ensembles (K>1) appear to be relatively robust to the ridge parameter, whereas K=1 is highly sensitive to it. Could this correspond to it potentially being more practically expedient to train small ensembles when optimally tuning the regularising parameter is expensive?\\n\\n> **Response**: This is an excellent point! We have added a comment to our discussion of figure 3 in section 4 on the potential robustness of ensembling methods in situations where fine-tuning hyperparameters is not feasible.\"}", "{\"comment\": \"To clarify our comment stating that if $K>M/N$, even by a tiny amount, it is possible for an ensemble of $K$ models of size $N$ to outperform a single model of size $M$ even at optimal ridge, we will add to the final version of our paper an asymptotic expansion of the risk $E_g^K$ at large $N$ and small ridge $\\\\lambda$:\\n\\n$$ E_g^K = -\\\\frac{P {\\\\kappa_2^*}^2 \\\\operatorname{tf}_1'(\\\\kappa_2^*)}{P - \\\\operatorname{Df}_2(\\\\kappa_2^*)} + \\\\lambda F(\\\\kappa_2^*, P) + \\\\frac{P \\\\kappa_2^* \\\\operatorname{tf}_1(\\\\kappa_2^*)}{KN} + O(\\\\lambda^2, \\\\lambda/N, 1/N^2)$$\\n\\nwhere $\\\\operatorname{Df}_1(\\\\kappa_2^*) = P$. 
Here, you can see that at leading order, the error of an overparameterized ensemble depends on ensemble size $K$ and model size $N$ only through the total number of features $KN$. It follows that when $KN>M$, even by a tiny amount, an ensemble of $K$ models of size $N$ can outperform a single larger model of size $M$ provided $P \\\\ll N$ and optimal or near-optimal performance can be achieved with a small ridge $\\\\lambda$.\\n\\n[1] Ben Adlam and Jeffrey Pennington. Understanding double descent requires a fine-grained bias-variance decomposition, 2020. URL https://arxiv.org/abs/2011.03321.\"}", "{\"summary\": \"This paper investigates the trade-off between employing a single large model versus an ensemble of smaller models, focusing on random feature kernel ridge regression (RF-KRR). In particular, this work studies ensembles of size $K$ with $N$ random features per ensemble member with a fixed total parameter count $M=N\\\\cdot K$. In this setting, the paper proves rigorously and shows empirically that optimal performance is achieved by $K=1$ for an optimally tuned ridge parameter while increasing $K$ degrades the optimal test risk. Additionally, this result \\u2013 referred to as \\u201c_no free lunch from random feature ensembles_\\u201d \\u2013 is shown for CNNs and transformers in experiments on image classification (CIFAR10) and language modelling (C4) in the regimes of lazy training and feature learning. Furthermore, scaling laws are derived, and conditions for achieving near-optimal scaling laws for these ensembles are identified.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The main contribution of the paper is to add the ensemble perspective to established results on random feature kernel ridge regressions. 
It contributes theoretical insights and proofs for two theorems on \\u201c_more is better for RF Ensembles_\\u201d and \\u201c_No Free Lunch From Random Feature Ensembles_\\u201d (pages 4-5) as well as scaling laws and empirical validation of adequate quality. Generally, the paper has a clear thread, with some minor adjustments required to the presentation (listed below in \\u201cweaknesses\\u201d).\", \"weaknesses\": \"Even though extending the analysis of RF-KRR to ensembles is new, the presented results align with the expectations from prior work on single models. In my opinion, this limits the novelty and significance of the contribution, which I elaborate on in more detail below.\\n\\n__W1.__ _Novelty of Theorem 1 \\u201cMore is better for RF Ensembles\\u201d and results in section 3_: For \\u201cnon-ensemble\\u201d RF-KRR, the referenced papers by [Simon _et al_. (2023)](https://openreview.net/pdf?id=OdpIjS0vkO) and [Bordelon _et al_. (2024a)](https://arxiv.org/pdf/2402.01092) have established that decreasing the number of random features $N$ leads to a degraded optimal test risk. Could the authors elaborate on why examining ensembles adds new perspectives beyond what is already known for single models? I think a strong justification would be necessary. In particular, I view the statement of Theorem 1 and the results presented in Figure 1 of this submission as a natural consequence of (and essentially the same as) [Theorem 1 \\u201cMore is better for RF regression\\u201d and Figure 1 in in Simon _et al_. (2023)](https://openreview.net/pdf?id=OdpIjS0vkO#page=6); see also point W2.1.\\n\\n__W2.__ _Significance of Theorem 2 \\u201cNo Free Lunch From Random Feature Ensembles\\u201d and results in section 4_: \\\\\\n__W2.1.__ A main premise of the work is to have a fixed total parameter count $M$ and compare ensembles of $K$ members and $N$ random features such that $M=N\\\\cdot K$ always holds. 
This is presented as a \\u201cpragmatic\\u201d approach to limit computational overhead associated with ensemble methods (lines 34-37). However, this leads to increasingly \\u201cweaker learners\\u201d when the ensemble size $K$ is increased. For this reason, it is less surprising and in line with the conclusions of [Simon _et al_. (2023)](https://openreview.net/pdf?id=OdpIjS0vkO) and [Bordelon _et al_. (2024a)](https://arxiv.org/pdf/2402.01092) that decreasing the number of random features $N$ (by increasing ensemble size $K$) leads to a degraded optimal test risk (at fixed total parameter count $M$). From this perspective, it is expected that $K=1$ leads to the optimal result. This limits the novelty and significance of theorem 2 on \\u201cNo Free Lunch From Random Feature Ensembles\\u201d, where specifically the numbers of random features are chosen as $N\\u2019=M/K\\u2019$ and $N=M/K$ with $K\\u2019<K$ leading to $N\\u2019>N$. As this is a central result of the paper, it could strengthen the contribution to elaborate more on why ensembles provide new insights beyond what can be directly inferred from the existing work on single models. In my opinion, a more substantial justification would be required. \\\\\\n__W2.2.__ In addition to the previous point, I believe that the interplay between the number of random features $N$, the size of the ensemble $K$ and the total parameter count $M$ makes it difficult to compare the information provided in Figures 1 and 2. For instance, in Figure 2A the relationship between test error and ensemble size $K$ for different sample sizes $P$ at fixed total parameter count $M$ is considered, but with increasing $K$ the number of random features $N$ decreases. 
It appears that a more careful discussion would be required, which is intimately connected to the premise above.\", \"minor_remarks\": [\"\\u201cRF KRR\\u201d and \\u201cRF-KRR\\u201d are both used in the manuscript and the authors might want to make the usage consistent.\", \"Line 101: Missing full stop \\u201c.\\u201d at the end of the sentence.\", \"Line 106: The computation of $g(v_n,x)$ is mentioned, but $g$ is not defined in the main text. From Appendix A, line 690, it seems that it is another notation for $\\\\psi^k (x)$, but its use in the main text is not clear to me.\", \"Line 118-119: Missing citation.\", \"Line 142: Wrong formatting of \\u201cf_{*} (x)\\u201d.\", \"Line 176-177: Double parenthesis in citation.\", \"Line 187: I would not recommend writing \\u201ceq\\u2019s\\u201d.\", \"Line 248-249 and Figure 1: The \\u201cfilled\\u201d lines are quite likely supposed to be dashed lines.\", \"In summary, I view the points raised in W1 and W2 as the main challenges in the current version of the submission and believe a major revision of the paper is required. However, I invite the authors to address my objections and clarify potential misunderstandings.\"], \"questions\": \"My questions revolve around the core assumptions of the paper as outlined in W1 and W2 above. In particular, considering classical literature on RF-KRR like [Rahimi and Recht, \\u201cWeighted Sums of Random Kitchen Sinks: Replacing minimization with randomization in learning\\u201d (2008)](https://people.eecs.berkeley.edu/~brecht/papers/08.rah.rec.nips.pdf) and the referenced works of [Simon _et al_. (2023)](https://openreview.net/pdf?id=OdpIjS0vkO) and [Bordelon _et al_. (2024a)](https://arxiv.org/pdf/2402.01092), my questions are as follows:\\n\\nQ1. Is there any evidence that an ensemble of random features could provide a different result than is stated in [Theorem 1 of Simon _et al_. (2023)](https://openreview.net/pdf?id=OdpIjS0vkO#page=6) for RF regression?\\n\\nQ2. 
In connection to Theorem 2 in the submission and in the context of RF-KRR, is there any reason to assume that a larger ensemble of weak learners can outperform a small ensemble of stronger learners?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper covers several theoretical and empirical analyses related to whether a parameter budget should be allocated to a single large classifier or an ensemble of smaller classifiers.\\nThe authors first analyse ridge regression using random features, where the ensemble prediction is the mean response of the member predictors, and the test risk is the mean squared error. They prove that the expected test risk decreases when any of data size, ensemble feature size, or ensemble size increases when the ridge parameter is chosen optimally for the given setting (with mild assumptions on the task eigenstructure), and show empirically the monotonicity in test error for a binarized CIFAR10 task using ReLU random features. The authors then prove a \\u201cno free lunch\\u201d theorem for random feature ensembles showing that, for a fixed total parameter count, lower test risk is achieved with fewer ensemble members (with mild assumptions on the task eigenstructure), and empirically show the monotonicity of test error with ensemble size fitting ReLU random feature models to the binarized CIFAR10 task. \\nNext, the authors derive scaling laws for random feature ensembles in the width-bottlenecked regime (the number of features per ensemble member is much smaller than the data size). They show on synthetic data that for r (a parameter controlling the rate of decay of power in modes relative to the rate of eigenspectrum decay) greater than 0.5 (an \\u201ceasy task\\u201d) there is a growth exponent l* above which scaling laws are near optimal. 
Therefore ensembles with K>1 can be near optimal as long as the number of parameters per ensemble member grows quickly enough. \nFinally, the authors initialize CNNs in the maximal update parameterization and, using CIFAR10, empirically show, for a fixed parameter budget and task richness, that test accuracy typically decreases monotonically with increased ensemble size. They also show empirically that training loss increases monotonically with increased ensemble size when fitting transformers to C4 data.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"I think the paper is in general clear and well-written; the research question is useful and the paper is of interest to ICLR.\nI have not checked the proofs, but the results looked sensible and were broadly consistent with the empirical results.\", \"weaknesses\": \"In figure 5a, one of the highest performing richness parameters (although admittedly not the highest) does not show a monotonic decrease with ensemble size. Given this does not fit with your other analysis, it would be useful to comment on it in the paper.\", \"minor_points\": \"Both CIFAR10/MNIST and a CIFAR10/MNIST-derived binary classification task are included in the paper. What is the paper's convention for differentiating between the binarized version and the original version? In places the paper appears to call the binary classification task \u201ca CIFAR classification task\u201d or \u201cBinarized CIFAR10\u201d but in others it then refers to this binary task just as CIFAR (I think this happens for example in line 370). Could the binary tasks have a name (for which CIFAR10 is a prefix perhaps) and then be referred to by that name to improve readability?\n\nThe same notation is used for the learning rate and the task eigenstructure, and for the richness and the data-size-scaled degrees-of-freedom parameters. 
Giving separate notation would be preferable.\n\n118 \u2013 \u201c(citations)\u201d should instead be the actual citation\n\n228 \u2013 Figure 1: Are the non-red lines supposed to be dashed in this plot? Otherwise, why does the legend contain a dashed line when none appears in the actual plot?\n\n340 \u2013 Figure 3 \u2013 could you draw the line for l* (= 1/(1+ alpha*(2*r \u2013 1))) in red on the plots for figure 3 A?\n\n397 \u2013 \u201ca\u201d is singular, but \u201ctasks\u201d is plural, should remove \u201ca\u201d?\n\n452 \u2013 \u201cthat at the\u201d -> \u201cthat the\u201d\", \"questions\": \"In Figure 2c, ensembles (K>1) appear to be relatively robust to the ridge parameter, whereas K=1 is highly sensitive to it. Could this mean that it is potentially more practically expedient to train small ensembles when optimally tuning the regularising parameter is expensive?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to reviewer comments\", \"comment\": [\"Weaknesses:\", \"I think the authors should be very clear that the theorems do not directly \\\"explain\\\" the empirical results in section 6, except in the case of very small richness parameters, as their theorems do not cover feature learning. I think it would also be useful for the authors to discuss this limitation in the weaknesses section and in the concluding discussion.\", \"> **Response** We thank the reviewer for this suggestion, which we agree with. We have made significant updates to clarify that our theory does not cover feature learning in both section 6 and in the discussion section.\", \"Questions:\", \"Is it possible to plot the validation loss in figure 5b as opposed to the test loss?\", \"> **Response**: Thank you for this suggestion. 
We clarify that Figure 5b shows the training loss for the C4 language modeling task as a function of training steps in the online setting. In this setting, there is no distinction between train and test loss as each batch consists of previously unseen data. Calculating a validation loss on a larger held-out test set might reduce noise in this figure, but because the curves are already visually smooth, we do not believe this will make a significant difference in the resulting plot. We will add text clarifying the meaning of the loss in the online setting.\", \"Could the authors discuss how, for certain values of $\\gamma$ in figure 5a, it seems that increasing the ensemble count improves performance, namely for larger values of $\\gamma$? Is it possible that the behaviour of models outside of the lazy regime does not follow the same \\\"no free lunch\\\" theorem?\", \"> **Response**: Thank you for pointing this out. After checking our experiments, we agree that models outside the lazy regime do not necessarily follow the \\\"no free lunch\\\" principle. However, we do find empirically that monotonicity is restored when both the weight decay and the richness parameter are jointly tuned to their optimal values. Please see the updated figure 5a and text of section 6.\"]}", "{\"comment\": \"Thank you for the responses, but this discussion goes in circles without addressing the main point. From the narrative of the paper and the authors\u2019 responses, it becomes clear that much of the motivation is borrowed from insights of feature learning models. However, the paper positions the main theorems and main contributions in the context of random feature models. I perceive this as an inconsistency that pertains to the empirical evaluation, too. Arguing that because some results hold for the feature learning setting, a similar hypothesis should hold for the random feature setting is a relatively weak justification for the hypotheses leading to Theorems 1 and 2, in my opinion. 
This is a main reservation regarding the submission, which has not been addressed so far. Based on my overview of the literature on random feature models, the submission does not address a significant gap in the existing literature. I acknowledge the extension to ensembles but view the rationale behind why the ensemble approach should lead to significantly different and novel results than those provided in the highlighted literature as insufficiently justified.\"}", "{\"comment\": \"Thanks for the authors' responses. After reading the responses and the other reviewers' comments, I find some key concerns are not well addressed. In summary:\n\n1) The theoretical result is trivial (close to the first two weaknesses pointed out by reviewer **sshy**). The main theoretical result in this work is still *\\\"one ridge regression on random features of size M is always optimal under the optimal L2 regularization, compared with an ensemble of K ridge regressions on random features of size M/K.\\\"*. This is trivial because there is an L2 regularization to trade off the model complexity. \n\n2) **Text, theory, and experiments are not consistent**, as the authors replied *\\\"We do not claim that our theoretical results directly explain our empirical findings in deep neural networks (fig. 5).\\\"*\n\n3) The CNN experiment on CIFAR is problematic, because the network of each ensemble member is highly constrained (too small). \n\nOverall, I cannot recommend acceptance.\"}", "{\"title\": \"Response to reviewer comments\", \"comment\": \"Thank you for your review! Regarding your question about using conventional neural network architectures, we have also validated these results for CIFAR10 classification with ResNet18 ensembles (see Fig. 5.B in the updated manuscript).\"}" ] }
7rxn2wnx88
Unmasking the Version-Switching Capabilities of Code Generation Models
[ "Nizar Islah", "Justine Gehring", "Diganta Misra", "Eilif Benjamin Muller", "Irina Rish", "Terry Yue Zhuo", "Massimo Caccia" ]
The rapid evolution of software libraries presents a significant challenge for code generation models, which must adapt to frequent version updates while maintaining compatibility with previous versions. Existing code completion benchmarks often overlook this dynamic aspect, and the one that does consider it relies on static code prediction tasks without execution-based evaluation, offering a limited perspective on a model's practical usability. To address this gap, we introduce GitChameleon, a novel, manually curated dataset comprising 116 Python code completion problems, each conditioned on specific library versions and accompanied by executable unit tests. GitChameleon is designed to rigorously assess the ability of modern large language models (LLMs) to generate version-specific code that is not only syntactically correct but also functionally accurate upon execution. Our comprehensive evaluations reveal that state-of-the-art LLMs struggle with this task; for instance, \textbf{GPT-4} achieves a pass@10 of only 39.9\% (43.7\% when provided with error feedback), highlighting the complexity of the problem and the limitations of current models. By providing an execution-based benchmark that emphasizes the dynamic nature of code libraries, GitChameleon serves as a critical tool for advancing the development of more adaptable and reliable code generation models. We release the dataset and evaluation framework to encourage further research in this vital area.
[ "Code generation", "LLM", "code LLM", "benchmark", "code versioning" ]
https://openreview.net/pdf?id=7rxn2wnx88
https://openreview.net/forum?id=7rxn2wnx88
ICLR.cc/2025/Conference
2025
{ "note_id": [ "tMVOp5KKPy", "aDofdCBuav", "Szjp2vNVaX", "FOYWgEqi6e", "7Itn3M3KRu" ], "note_type": [ "official_review", "official_review", "comment", "official_review", "official_review" ], "note_created": [ 1730641598735, 1731300014516, 1732111900106, 1730658025509, 1730713387542 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8031/Reviewer_igqt" ], [ "ICLR.cc/2025/Conference/Submission8031/Reviewer_RyqN" ], [ "ICLR.cc/2025/Conference/Submission8031/Authors" ], [ "ICLR.cc/2025/Conference/Submission8031/Reviewer_QvUh" ], [ "ICLR.cc/2025/Conference/Submission8031/Reviewer_5kj9" ] ], "structured_content_str": [ "{\"summary\": \"The paper presents GitChameleon, a benchmark for evaluating large language models on version-specific code generation tasks. GitChameleon comprises 116 Python tasks tied to specific library versions, with unit tests for execution-based validation. The benchmark highlights current LLMs' limitations in handling dynamic library changes, offering a valuable tool for advancing version-aware code generation models.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"This paper presents GitChameleon, a benchmark specifically designed to test the ability of LLMs in generating version-specific code. It includes 116 Python-based tasks tied to specific versions of popular libraries and equipped with executable unit tests. The paper presents an empirical study evaluating the performance of various state-of-the-art LLMs (e.g., GPT-4, CodeLlama, DeepSeek-Coder) on GitChameleon, highlighting their strengths and limitations in handling version-specific tasks. 
The paper is well-structured and clearly explains GitChameleon\\u2019s dataset creation, evaluation metrics, and model performance analysis, making it accessible to readers who may not be familiar with version-specific challenges in code generation.\", \"weaknesses\": \"While GitChameleon introduces a unique focus on version-specific code generation, the dataset is limited to 116 tasks and 11 Python libraries. This relatively small scale might restrict the generalizability of findings and the benchmark\\u2019s robustness. The evaluation lacks experimentation with widely used techniques in prompt engineering (such as chain of thought) and does not consider fine-tuning approaches like prompt-tuning or parameter-efficient tuning. GitChameleon\\u2019s current setup seems primarily focused on software engineering tasks and libraries. This specificity may make it more suitable for specialized conferences in software engineering (e.g., ICSE)\", \"questions\": \"1. Have you considered using additional techniques, such as prompt engineering, chain of thought (CoT), and prompt fine-tuning, to provide a more comprehensive evaluation of LLMs\\u2019 capabilities? Including these methods might better showcase the model's adaptability in version-specific code generation. We would like to see corresponding experimental results for these methods.\\n\\n2. Could you elaborate on how this dataset could aid in improving the version-specific code generation abilities of existing LLMs? For instance, do you see potential in approaches like fine-tuning on version-specific tasks or using reinforcement learning from unit test feedback? We would be interested in seeing relevant experimental results for these approaches.\\n\\n3. Would it be feasible to include more mainstream LLMs, such as Claude, in your evaluations? Additionally, do you plan to expand the dataset size? 
The current dataset is relatively small, which might limit the depth of the benchmark and its ability to generalize across various version-specific challenges.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper argues that LLMs are extensively used as virtual coding assistants but frequently fail to generate accurate version-specific code amidst the fast-evolving software landscape. To address this, the paper presents GitChameleon, a new benchmark that assesses and enhances LLMs' ability to produce executable, version-tailored code. This benchmark highlights existing models' limitations and offers a pathway to improve their effectiveness in dynamic coding environments.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"1\", \"strengths\": [\"The paper underscores the need for LLMs to keep up with rapidly changing libraries, enhancing the practicality of code generation tools. Thus, GitChameleon focuses on version-specific code to evaluate model limitations and suggest improvements in LLM development.\", \"A new benchmark specifically targeting version-specific code generation. The benchmark assesses models based on real execution of version-conditioned prompts, providing a systematic and practical measure of model performance.\"], \"weaknesses\": [\"The benchmark assesses model performance on only a few hundred examples, which may not fully capture the diversity and complexity of real-world codebases.\", \"Expanding GitChameleon as the models evolve will be non-trivial. The benchmark will soon be saturated. 
Also, covering more libraries and languages may be difficult.\", \"While GitChameleon addresses an important gap, the benchmark and related evaluation seem to be an incremental improvement.\"], \"questions\": \"How are you planning to keep the benchmark up-to-date as the APIs and the models evolve?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper proposes GitChameleon, an execution-based benchmark for evaluating LLMs\u2019 coding capabilities in scenarios involving library updates. The benchmark is manually constructed with a joint effort from multiple authors. The paper provides a detailed analysis of the benchmark and an evaluation of multiple LLMs.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper studies an important problem in LLMs for code, i.e., dynamic updates of LLMs when it comes to library changes. I appreciate the authors\u2019 effort in constructing GitChameleon and performing evaluation and analysis over it. This could be useful for future research.\", \"weaknesses\": [\"One major selling point of the paper is providing unit tests for assessing LLM-generated code. However, I don\u2019t really get why unit tests are important for the domain of library changes. For the types of problems considered in this paper (i.e., Lines 167-176), the compiler or interpreter should already tell if a deprecated library version is used. Moreover, when the generated code passes the compiler or interpreter, do you ensure that the LLM actually uses the target API? The LLM could use some other features of the library to implement the same functionality. 
The LLM could also make errors on parts not concerning the target API.\", \"The dataset construction is a manual process. While I appreciate such an effort, this unfortunately results in manual bias and a relatively small size of data samples. For example, GitChameleon only covers 116 problems in Python libraries related to machine learning. This could threaten the validity of the results.\", \"The paper only provides an evaluation, without studying or discussing how to address the issue of LLMs in library updates. This is a relatively thin contribution, especially given that there are already a few benchmarks in the same domain as cited by the paper.\"], \"questions\": [\"I also have a few smaller questions:\", \"Line 177: How many samples fall under the Argument or Attribute and Function Name change categories exactly?\", \"Line 322: Why is performance better in 2023 than 2022?\", \"Line 430: Figure 3.4 should be Figure 5.\", \"Line 495: Wang et al. is not properly linked to the reference.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces a new dataset and benchmark, GitChameleon, designed to evaluate the ability of large language models (LLMs) to handle version-specific code generation challenges in Python.\\nGitChameleon consists of 116 Python code completion problems, curated to test models' responses to specific library versions and accompanied by executable unit tests for functional validation. \\nThis benchmark highlights the limitations of LLMs in managing code library version updates, which are critical for practical applications in dynamic software environments.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper tackles the novel topic of assessing LLMs on version-specific code generation. 
Unlike existing benchmarks that generally focus on static code generation, GitChameleon is designed to evaluate models\\u2019 adaptability to changing library versions, a problem particularly relevant for real-world applications.\", \"The dataset is meticulously curated with executable unit tests, enhancing the robustness of its evaluations. GitChameleon addresses a critical gap in code generation benchmarking. By introducing an execution-based benchmark focused on version-specific compatibility, the work not only highlights current model limitations but also provides a concrete dataset for future improvements in LLM adaptability.\", \"The paper is well-organized and clear in its presentation. The motivation behind GitChameleon is well presented as a crucial challenge in the field of code generation.\"], \"weaknesses\": [\"Although the focus is clear and the dataset creation process is rigorous, the current GitChameleon dataset remains limited in size. Many libraries in the dataset include only a few API changes, which increases randomness and reduces the stability of the results.\", \"The benchmark\\u2019s limitation to Python makes the version-switching problem less complex in terms of **type information**. In other languages, such as Java, version updates often involve type changes, which are a significant aspect of the version-switching challenge. Thus, Python\\u2019s focus may oversimplify some key challenges in version-switching, limiting the paper\\u2019s broader contribution to this problem.\", \"Although the authors acknowledge methodological limitations, some baseline methods, such as RAG, should have been included as a critical evaluation aspect rather than overlooked. 
For practical benchmark usage, the authors should provide settings for RAG or few-shot learning, as these are more common approaches for addressing knowledge gaps in LLMs and should have been considered.\", \"Some related work in API learning, which is closely tied to the version-switching problem, is omitted. Including recent popular work on API learning could better contextualize the research. Relevant citations might include:\", \"Zan, Daoguang, et al. \\\"When language model meets private library.\\\" arXiv preprint arXiv:2210.17236 (2022).\", \"Zhang, Kechi, et al. \\\"Toolcoder: Teach code generation models to use API search tools.\\\" arXiv preprint arXiv:2305.04032 (2023).\", \"In summary, the paper\\u2019s topic is interesting, but it appears to miss several critical dimensions of the version-switching problem, such as choosing a weakly-typed language and omitting a key aspect of version-switching. The current dataset may also be too limited to fully support the range of challenges outlined in the introduction section.\"], \"questions\": \"In Section 2.1, the authors describe four types of changes. How do you ensure that this classification comprehensively captures the version-switching problem?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
7rq2OzkJg3
Personalized Federated Learning With Similarity Information Supervisor
[ "Jiaqiang Li", "Qiqi Liu", "Yaochu Jin", "Xiaohu Wu", "Zhilong Li", "Han Yu" ]
A crucial issue in federated learning is the heterogeneity of data between clients, which can lead to model weight divergence, eventually deteriorating the model performance. Personalized federated learning (pFL) has been proven to be an effective approach to addressing data heterogeneity in federated learning. However, existing pFL studies seldom verify whether the broadcast global model is beneficial for the local model performance. To address this, we propose a novel pFL method, called federated learning with similarity information supervision (FedSimSup). Specifically, FedSimSup incorporates a local supervisor to assist the model training and a personalized model for global information aggregation. The role of the supervisor is to refine the personalized model when it is not beneficial for the local model performance, ensuring the effective global information aggregation while aligning with the local heterogeneous data. Additionally, the similarity relationships between the clients are measured using label distribution differences of the local raw data to weight the personalized models, promoting information usage among similar clients. Experimental results demonstrate three advantages of FedSimSup: (1) It shows better performance over heterogeneous data compared with seven state-of-the-art federated learning methods; (2) It can allow for different model architectures across different clients; (3) It offers a certain degree of interpretability.
[ "Personalized Federated Learning", "Heterogeneous Data" ]
Reject
https://openreview.net/pdf?id=7rq2OzkJg3
https://openreview.net/forum?id=7rq2OzkJg3
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yZDTv2qCyH", "u3OylY8u3K", "WeaicTm9fB", "Rq2tigJa6h", "OYs6SGiubN", "Kpcfz8uuU5", "Imk2g2eaBE" ], "note_type": [ "official_review", "meta_review", "official_review", "decision", "official_review", "official_review", "official_review" ], "note_created": [ 1730630160847, 1734655564749, 1730627503090, 1737523973844, 1729751991543, 1730255662125, 1730374036876 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9285/Reviewer_psBQ" ], [ "ICLR.cc/2025/Conference/Submission9285/Area_Chair_mwh6" ], [ "ICLR.cc/2025/Conference/Submission9285/Reviewer_AY6u" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission9285/Reviewer_r3xw" ], [ "ICLR.cc/2025/Conference/Submission9285/Reviewer_xEpY" ], [ "ICLR.cc/2025/Conference/Submission9285/Reviewer_dbQQ" ] ], "structured_content_str": [ "{\"summary\": \"This paper presents a personalized federated learning method named FedSimSup, designed to address the challenges posed by heterogeneous (Non-IID) data across clients in federated learning. Traditional federated learning methods may struggle with model weight divergence when clients have diverse data distributions, degrading performance. FedSimSup aims to tackle this challenge by introducing a local supervisor for each client to ensure the personalized model remains aligned with local data. This supervisor can override global updates if they are not beneficial, effectively balancing the integration of global and local knowledge.\\n\\nKey contributions include the following ones. \\n1. FedSimSup assigns a unique local supervisor to each client, allowing the model to selectively integrate global updates based on their relevance to the client's data. If an update isn't beneficial, the supervisor maintains the client model's previous state, enhancing performance with minimal communication rounds. \\n2. 
FedSimSup uses label distribution similarity between clients to enable selective information sharing, allowing clients to learn primarily from those with similar data distributions, improving model personalization. \n3. It supports various model architectures across clients, adapting to the specific computational capabilities and needs of each client. The model also provides interpretability through Class Activation Maps that visualize the supervisor's influence in keeping model attention aligned with relevant features.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"See the contributions in the summary.\n\nAlso, two highlights of strengths:\n1. Information among similar clients is utilized.\n2. Local model aggregation is somewhat effective.\", \"weaknesses\": \"1. Insufficient Justification for Additional Model: The necessity of adding a local supervisor model that frequently participates in training needs to be clarified. Note that there are many approaches that avoid adding a trainable model for each client, which would otherwise significantly increase resource consumption in terms of space and time.\n\n2. Unclear Model Heterogeneity: This work mentions that the proposed method enables model heterogeneity to accommodate different mobile devices. However, it appears that heterogeneity is only considered for the supervisor model, which may not be necessary. Typically, model heterogeneity applies to the client's primary model (i.e., the personalized model) or to the entire client framework. Since this work focuses on the supervisor model alone, the contribution regarding model heterogeneity is unclear and not well-justified.\n\n3. Unclear contributions regarding explainability: This work uses CAM to demonstrate only explainability rather than interpretability. Adding external XAI mechanisms to enhance explainability is not an inherent advantage of the proposed method itself.\n\n4. 
Unclear Data Heterogeneity: The paper uses Dirichlet and pathological distributions but should clarify which specific type of heterogeneity the method targets, as data heterogeneity includes many categories. Besides, even within Dirichlet and pathological distributions, different parameter settings have distinct real-world implications, which should be specified in the introduction.\n\n5. Methodology Clarity: The motivation behind the supervisor's role is unclear, as is the explanation of similarity information. Figure 1(a) could be confusing, particularly regarding the communication rounds.\n\n6. Lack of Theoretical Analysis: Although convergence is demonstrated experimentally, this paper, in its present form, lacks a theoretical analysis of the method.\n\n7. Some issues in experiments and results:\n7.1 Unconvincing Baseline Selection: The baseline choices are either insufficiently targeted or poorly explained. Baselines with similar designs or research objectives should be selected for comparison.\n7.2 Lack of SOTA Performance: As shown in Figures 1 and 2, the proposed method is not always optimal on pathological distributions, and its advantage on Dirichlet distributions is not always prominent.\n7.3 The results shown in Tables 1 and 2 need detailed explanation. Please also explain why Per-FedAvg performs so poorly.\n7.4 Given the addition of a supervisor, the experiments should include an analysis of time and space complexity.\n7.5 Lack of Privacy Protection Discussion: Since the model is communicated, it would be necessary to provide some discussion of privacy preservation.\", \"questions\": \"Please refer to weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"The paper received five negative ratings, with all reviewers inclined to reject it. While presenting a promising approach, it has significant weaknesses. 
It introduces FedSimSup, which aims to enhance federated learning with a supervisor model, but the justification for adding a local supervisor model is unclear, and the application of model heterogeneity is insufficiently explained. Contributions on explainability are limited to using Class Activation Mapping (CAM), without providing inherent interpretability. Data heterogeneity is not well-defined, and the paper does not clarify the types of heterogeneity addressed or their real-world implications. The calculation of similarity lacks detail, leaving uncertainties about its implementation. The paper also lacks a theoretical analysis to support experimental results, such as the stability of the similarity measure and the supervisor's impact on performance. The experimental design is weak, with poor baseline selection, insufficient performance on pathological distributions, and no analysis of time or space complexity. Privacy protection in model communication is not addressed. The paper would benefit from clearer visualizations, detailed ablation studies, and better performance evaluations across diverse datasets. While the results show promise, they lack consistent superiority, and the rationale behind some design choices remains unclear. The authors do not adequately discuss future research directions. Given these issues and the lack of response from the authors, the Area Chair recommends rejection.\", \"additional_comments_on_reviewer_discussion\": \"The paper received five negative ratings, with all reviewers inclined to reject it.\"}", "{\"summary\": \"This paper introduces FedSimSup, a novel pFL method that uses a local supervisor to refine the personalized model and ensure effective global information aggregation. FedSimSup also measures client similarity based on label distribution differences to enhance information sharing among similar clients. 
Experimental results show that FedSimSup outperforms seven state-of-the-art methods on heterogeneous data, supports different model architectures across clients, and offers interpretability.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The paper introduces FedSimSup, a new method in personalized federated learning that incorporates a local supervisor to refine personalized models.\\n2. FedSimSup allows for different model architectures across clients, making it adaptable to various computational capacities and needs.\\n3. Experimental results show that FedSimSup outperforms seven state-of-the-art federated learning methods on heterogeneous data, demonstrating strong capabilities.\", \"weaknesses\": \"1. From Equations (3), (4), and (5), it appears that the client models are training two models simultaneously. This does not seem to follow a clear supervision process like knowledge distillation. Could the authors provide a clearer explanation of how the supervision mechanism operates within this framework?\\n\\n2. The paper mentions using label distribution differences to compute $s_{ij}$. It would be helpful if the authors could elaborate on the exact method used to evaluate this similarity, ensuring clarity on its computation based on label distributions.\\n\\n3. Is it possible to observe the effects of the similarity metric $s_{i,j}$ during the training process? Specifically, for a given client $i$, could the authors illustrate how it determines which other clients it tends to select throughout the training?\\n\\n4. Could the authors provide the memory usage of FedSimSup compared to other baseline methods, particularly when different model architectures are employed on the client side?\\n\\n5. In Table 3, FedSimSup-T seems to underperform compared to the original method utilizing LeNet-5, despite having more parameters in the Transformer model. 
Could the authors provide any insights for this case?\", \"questions\": \"Please see the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"The paper addresses the challenge of data heterogeneity in federated learning, which can lead to model weight divergence and performance deterioration. This paper proposes a novel personalized federated learning (pFL) method called Federated Learning with Similarity Information Supervision (FedSimSup). The key idea behind FedSimSup is to integrate a local supervisor that refines the personalized model when the broadcast global model is not beneficial to local model performance.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1.\\tThe paper makes an original contribution by proposing a novel hierarchical client clustering approach for federated learning, effectively addressing the challenge of data heterogeneity across clients.\\n2.\\tThe paper is reasonably clear in structure and presentation, with a logical flow that generally allows readers to follow the authors' approach.\", \"weaknesses\": \"1.\\tWhile the paper attempts to introduce a hierarchical client clustering method in the field of federated learning, the level of innovation is relatively limited. The concept of client clustering has been explored in existing research, it lacks breakthrough novelty.\\n\\n2.\\tThe experimental section lacks sufficient comparison experiments and diverse datasets, making it difficult to convincingly demonstrate the effectiveness of the proposed method across different scenarios. The description of the client clustering strategy in federated learning is detailed, but the depth and breadth of the experiments are insufficient. 
\\n\\n3.\\tCertain technical details are described ambiguously, particularly in the client clustering process and the complexity analysis of the algorithm. This could make it difficult for readers to fully understand the key points of innovation. \\n\\n4.\\tFigure 2 and Figure 4 are not visually clear, which hinders the reader\\u2019s ability to interpret the results effectively. The lack of clear visualizations and detailed performance analysis further weakens the overall quality.\\n\\n5.\\tThe paper lacks a strong discussion of future research directions, failing to show the potential long-term impact of its approach.\", \"questions\": \"1.\\tBoth Figure 2 and Figure 4 are not visually clear, making it difficult to interpret the experimental results effectively. Could you provide improved versions of these figures with clearer labels, higher resolution, and better color differentiation?\\n\\n2.\\tThe current experiments are limited to a small set of datasets and lack comparisons with more recent or diverse methods in the field. Could you extend the experimental evaluation to include a wider variety of datasets and compare the proposed method against more recent federated learning techniques that address data heterogeneity? \\n\\n3.\\tCould you expand on the possible challenges or limitations of your method in more complex federated learning scenarios? Additionally, what future improvements or directions do you envision for enhancing the clustering process or further addressing data heterogeneity?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposed a novel personalized federated learning method called FedSimSup, which addresses the issue of data heterogeneity by introducing local supervisors to assist in model training and combining personalized models for global information aggregation. 
The method considers the similarity relationships among clients and enhances information utilization efficiency through weighted personalized models. Experimental results show that FedSimSup outperforms seven state-of-the-art federated learning methods when handling heterogeneous data.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1) The FedSimSup method introduces the concept of local supervisors into traditional personalized federated learning, providing a new approach to address the issue of data heterogeneity.\\n2) The baseline methods are diverse: through comparisons with various federated learning approaches, FedSimSup demonstrates strong performance. \\n3) FedSimSup demonstrates a certain level of interpretability in its information utilization.\", \"weaknesses\": \"1)The theoretical support of the paper is insufficient. Although the FedSimSup method is proposed, there is a lack of in-depth analysis of its theoretical foundations. For example, key issues such as how to ensure the effectiveness and stability of the similarity measurement, as well as the specific impact of the supervisor model on the final model's performance, are not adequately addressed.\\n2)In Section 3.2, to address the issue that \\\"clients cannot determine whether the received model (containing global information) is more beneficial than the model trained in the previous round,\\\" the authors divide the model into a supervisor and a personalized part. However, it is unclear why, after this division, clients can assess whether the received information is more advantageous. What metrics are used to quantify this judgment? The authors should provide necessary explanations.\\n3)In Section 4.2, the performance of FedSimSup under the Pathological distribution is poor. 
The authors attribute this to the limited discrete values resulting from similarity calculations under the pathological distribution, which affects the differentiation of similarity between clients. This explanation lacks theoretical and experimental support. It is recommended that the authors conduct a more in-depth discussion and analysis.\\n4)The specific implementation of similarity measurement is unclear. The paper mentions using cosine similarity to calculate the similarity between clients, it lacks detailed explanations on how to specifically implement and optimize this process. There is also no discussion on how to maintain the stability and accuracy of similarity calculations under different label distributions.\\n5)The connections between various components are still unclear, and there is a lack of ablation studies. The paper does not conduct ablation experiments to verify the impact of different components (such as the separation of the supervisor and personalized model) on overall performance, making it impossible to determine the importance and necessity of each part.\\n6)The complexity of the model has not been adequately considered. The paper mentions that different clients can use different supervisor architectures, it does not provide specific guidance or standards for selecting these architectures. Additionally, it does not discuss the impact of model complexity on training time and resource consumption.\\n7)The evaluation of privacy performance has not been considered. Although the paper discusses personalized learning and model aggregation, there is limited consideration of privacy protection. 
It lacks a discussion on how to effectively utilize similarity information while ensuring client privacy.\", \"questions\": \"See the weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a novel personalized federated learning (pFL) method called Federated Learning with Similarity Information Supervision (FedSimSup) to address the challenge of data heterogeneity across clients in federated learning. FedSimSup integrates a local supervisor mechanism and a personalized model to facilitate effective global information aggregation while respecting the unique data distributions of individual clients.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1) The supervisor module demonstrates a certain degree of flexibility, enabling model heterogeneity at a relatively low cost.\\n2) The paper presents model interpretability in a straightforward and intuitive manner.\", \"weaknesses\": \"1) In Section 3.3, what is the rationale behind not aggregating other clients' data for client i? Additionally, Equation (8) describes the calculation of similarity among all clients to create and deploy different models. Is this calculation performed on the server or client side? If on the client side, there may be issues with security and communication overhead; if on the server side, there could be computational overhead for generating different models. The author is requested to provide a detailed response to this issue.\\n\\n2) The method description is overly concise. For instance, the exact similarity calculation method and how the supervisor guides the model are not elaborated. If the method merely stitches together similarity computation and the supervisor module, the paper may lack sufficient contribution.\\n\\n3) In Table 1, what does the 3597 value for FEMNIST represent? 
Additionally, Figure 6 is too small, and the significant fluctuations in the proposed method's performance suggest potential design issues. Could there be an algorithmic problem contributing to this?\\n\\n4) Numerous recent methods address parameter decoupling, such as FedCP, which extracts both global and local information from data. Its extraction modules are aggregable. Compared to this, the lack of aggregation in the supervisor module here might lead to overfitting. The author is advised to conduct a thorough survey of recent pFL methods and benchmark against newer approaches.\\n\\n5) If the supervisor module is only conducting gradient descent, would increasing the parameter size of the local model achieve similar results without this module? It is recommended to add ablation experiments and consider this question in further detail.\", \"questions\": \"See the above comments.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
7rOdRAGuBA
Spatiotemporal Backward Inconsistency Learning Gives STGNNs Icing on the Cake
[ "Jiaming Ma", "Zhengyang Zhou", "Binwu Wang", "Pengkun Wang", "Xu Wang", "Du Qian", "Yang Wang" ]
Spatiotemporal prediction models facilitate smart-city applications across various domains, such as traffic and climate. While current advancements in these models emphasize leveraging cutting-edge technologies to enhance spatiotemporal learning, they often operate under the implicit assumption of spatiotemporal feature consistency between inputs and labels, overlooking the critical issue of input-label inconsistency. In this study, we introduce a universal spatiotemporal backward inconsistency learning module capable of seamless integration into a variety of models, offering a notable performance boost by explicitly modeling label features to address input-label inconsistency. Our approach includes the development of a spatiotemporal residual theory, advocating holistic spatiotemporal learning that encompasses both forward spatiotemporal learning to capture input data’s spatiotemporal features for generating base predictions, akin to existing STNNs, and a backward process to learn residuals that rectify input-label inconsistency, thereby refining the base predictions. Based on this theory, we design the Spatio-Temporal Backward Inconsistency Learning Module (STBIM) for this backward correction process, comprising a residual learning module for decoupling inconsistency information from input representations and label representations, and a residual propagation module for smoothing residual terms to facilitate stable learning. The generated prediction correction term is used to enhance the prediction accuracy. Experimental results on 11 datasets from the traffic and atmospheric domains, combined with 15 spatiotemporal prediction models, demonstrate the broad positive impact of the proposed STBIM. The code is available at https://anonymous.4open.science/r/ICLR2025-2598.
[ "spatiotemporal learning; time series learning; graph neural network" ]
https://openreview.net/pdf?id=7rOdRAGuBA
https://openreview.net/forum?id=7rOdRAGuBA
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vVg0lRD2Xs", "p7jQ3p9v0t", "ng4MTk35fN", "nQWUFT1WWW", "m8B62ykoI0", "hKFnsg0qVU", "hB0b6ZGxCB", "g4vBVy2xPa", "YDgW6VfLoM", "XTHZchIAQh", "QOVqu8nK0W", "IcFP5RzWfe", "AFGpO89a1D", "8fYMn6xAs6", "6ZT2K7DJGk" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review" ], "note_created": [ 1732176745109, 1732174452606, 1732176116402, 1732175593145, 1732175303139, 1730698893306, 1732211975236, 1733195320792, 1732477987086, 1732240962345, 1732174597541, 1732174961188, 1732176204048, 1730667978528, 1731005917290 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2598/Authors" ], [ "ICLR.cc/2025/Conference/Submission2598/Authors" ], [ "ICLR.cc/2025/Conference/Submission2598/Authors" ], [ "ICLR.cc/2025/Conference/Submission2598/Authors" ], [ "ICLR.cc/2025/Conference/Submission2598/Authors" ], [ "ICLR.cc/2025/Conference/Submission2598/Reviewer_FGB6" ], [ "ICLR.cc/2025/Conference/Submission2598/Reviewer_FGB6" ], [ "ICLR.cc/2025/Conference/Submission2598/Authors" ], [ "ICLR.cc/2025/Conference/Submission2598/Reviewer_FGB6" ], [ "ICLR.cc/2025/Conference/Submission2598/Authors" ], [ "ICLR.cc/2025/Conference/Submission2598/Authors" ], [ "ICLR.cc/2025/Conference/Submission2598/Authors" ], [ "ICLR.cc/2025/Conference/Submission2598/Authors" ], [ "ICLR.cc/2025/Conference/Submission2598/Reviewer_tHSS" ], [ "ICLR.cc/2025/Conference/Submission2598/Reviewer_RjnU" ] ], "structured_content_str": [ "{\"title\": \"Clarification of question\", \"comment\": \"> Q1. Computational complexity.\\n\\nPlease refer to Weakness 3.\\n\\n> Q2. 
how much of the above performance can be recovered by using additional spatio-temporal layers?\\n\\nSimply expanding the parameters cannot achieve the same gains as STBIM.\\n\\n**We have completely refuted this point in the original manuscript (see line 451 in Experiment Section 5.1)**. Subsequently, we demonstrated this point in detail in **Appendix B.7.2**. Taking the spatiotemporal backbones STID and D2STGNN as examples, we introduced the variants STID-Plus and D2STGNN-Plus, which stack two backbones to expand the parameter scale, while also increasing the maximum training epochs. The experimental results are presented in Table 14 of the manuscript, which we have replicated below for convenience.\\n\\n| | MAE | RMSE | MAPE |\\n|:-------------:|:------:|:-----:|:-----:|\\n| STID | 18.00 | 30.75 | 12.05 |\\n| STID-PLUS | 17.94 | 30.43 | 12.13 |\\n| STID+ours | 17.08 | 29.92 | 11.32 |\\n| | | | |\\n| D2STGNN | 17.44 | 29.58 | 12.18 |\\n| D2STGNN-Plus | 17.68 | 31.04 | 12.57 |\\n| D2STGNN+STBIM | 17.19 | 28.53 | 11.20 |\\n\\nWe can observe that simply increasing the parameter size does not lead to performance enhancement. For the complex model D2STGNN, simply stacking models may actually decrease model performance, as an excessively large parameter size can lead to overfitting on the data. Our performance improvement originates from the effective modeling of inconsistent features.\\n\\n> Q3. How will the test-time labels be used if available?\\n\\nAccording to the general paradigm of machine learning, labels are not accessible during testing. While labels are available during training, they cannot be directly used as inputs to the model. Typically, we optimize the model via backpropagation of the loss computed against the labels.\\n\\n> Q4. 
What extra information is the inconsistency loss bringing/recovering from the input data apart from just maintaining a spatial smoothness?\\n\\nOur inconsistency modeling (not a loss function) can incorporate label features into the model without using additional inputs, improving the model's prediction performance on inconsistent samples. In addition to maintaining spatial smoothness, it also preserves temporal smoothness. \\n\\n**We evaluated the effectiveness of STBIM in helping models eliminate input-label inconsistency in the temporal dimension in Section 5.3 and Appendix B.5**, thereby ensuring the temporal smoothness of the backbone, and the results show that our proposed module STBIM comprehensively handles inconsistencies in both the temporal and spatial dimensions.\"}", "{\"title\": \"Rebuttal of Weakness\", \"comment\": \"Dear Reviewer RjnU,\\n\\nThank you very much for your valuable review; it is crucial for improving the quality of our manuscript.\\n\\n- ### W1 Reproducibility\\n\\nWe sincerely apologize for the oversight in reporting the performance of BigST: the actual average Mean Absolute Error (MAE) of BigST, which was mistakenly recorded as 24.42, is 21.42. **This error likely occurred due to the proximity of the digits \\\"4\\\" and \\\"1\\\" on the numeric keypad of the standard keyboard layout**\\ud83d\\ude2d, which is illustrated below. \\n\\n| | Keyboard Layout | |\\n|:---:|:---------------:|:---:|\\n| 7 | 8 | 9 |\\n| **4** | 5 | 6 |\\n| **1** | 2 | 3 |\\n\\nBelow, we present the average performance metrics for BigST over 12 time steps. 
The table below lists both the incorrectly reported performance values and the corrected figures.\\n\\n| | MAE | RMSE | MAPE |\\n|:-------------------------:|:---------:|:------:|:--------:|\\n| BigST (incorrect results) | **24.42** | 34.54 | 18.19 |\\n| BigST (correct results) | **21.42** | 34.54 | 18.19 |\\n| +STBIM(JT) | 20.18 | 33.34 | 15.37 |\\n| Gap | +5.79% | +3.48% | +15.50% |\\n| +STBIM(FT) | 20.16 | 33.03 | 15.45 |\\n| Gap | +5.88% | +4.39% | +15.01% |\\n\\nThe corrected results continue to illustrate the effectiveness of STBIM, as evidenced by a 15.50% improvement in the average MAPE performance of BigST. To address any further concerns, we will provide access to all training logs for the models via an anonymous code link for your review. We sincerely apologize for the error caused by our oversight.\\n\\n---\\n\\n- ### W2 Shift learning.\\n\\nThe inconsistency we define differs fundamentally from the concept of shift in spatiotemporal out-of-distribution (OOD) learning. The latter refers to the overall differences in data distribution between the training and testing datasets, which can lead to OOD challenges. In contrast, our focus is on a more granular aspect: the inconsistency between input values and future values. This inconsistency can occur not only in OOD scenarios but also in independent and identically distributed (IID) situations.\\n\\nTo further assess the impact of the proposed Spatio-Temporal Backward Inconsistency Learning Module (STBIM), we investigate whether spatiotemporal shift learning models can benefit from its implementation. Following the OOD setting in the open-source code of STONE [1], we use their STSD dataset as an example and select two spatiotemporal OOD learning models as backbones; STBIM is integrated in a joint-training manner. 
The results are as follows:\\n\\n| | MAE | RMSE | MAPE |\\n|:------:|:-----:|:-----:|:-----:|\\n| STONE [1] | 18.46 | 30.65 | 15.29 |\\n| +STBIM | **17.89** | **30.08** | **14.91** |\\n| CauST [2] | 26.77 | 40.20 | 21.48 |\\n| +STBIM | **24.63** | **37.86** | **20.35** |\\n\\nWe find that STBIM can also improve the predictive performance of spatiotemporal shift learning models because it effectively enhances the model's accuracy on samples with historical-future inconsistency in the OOD scenario. We have included the discussion of spatiotemporal shift learning in the Related Work section of the revised version. \\n\\n---\\n\\n- ### W3 Writing quality.\\n\\nThank you very much for your suggestion. We have enlisted the assistance of advanced language models skilled in language refinement, as well as collaborators who are native English speakers, to thoroughly proofread the manuscript and correct any grammatical errors. We appreciate your guidance in enhancing the quality of our work.\"}", "{\"title\": \"Clarification of Weakness 2.\", \"comment\": \"> (1) 'Adding just an inconsistency loss should not bring any new information for such impressive performance gains'\\n\\nIn our method, we did not add any additional losses except the regression loss.\\n\\n> (2) 'Those gains should be achievable through traditional architectures as well'\\n\\nUnfortunately, even if an STGNN is designed with a more sophisticated structure, it does not emphasize modeling the inconsistency between inputs and labels. STBIM, on the other hand, addresses this issue and achieves performance improvements by focusing on it. Through extensive experimental cases (approximately 15 spatiotemporal prediction models), we demonstrate that STGNNs can benefit from STBIM to gain additional potential performance improvements, with performance increases of up to 18.74%, even for advanced models. 
As mentioned earlier, this gain cannot be achieved through simply expanding the parameter scale.\\n\\n> (3) 'Conducting an ablation study comparing STBIM to equivalent increases in model complexity for traditional architectures.'\\n\\nIncreasing model parameters and computational complexity is not equivalent to the benefits brought by STBIM, as discussed above.\\n\\n> (4) What information traditional architectures may miss, or no additional information?\\n\\nThis is where our design proves most effective.\\n\\nThese traditional architectures cannot effectively model label features, leading to confused predictions for samples with input-label inconsistency, as discussed in our introduction. **We noticed this key property of the label representation, which contains abundant label features, enabling the model to explicitly utilize label features during training to enhance learning from inconsistent samples without additional information, thereby improving the accuracy of the inference process.** We validated this motivation through extensive experiments in Experiment Section 5.3 and Appendix Section B.6 of the manuscript.\"}", "{\"title\": \"Clarification of Weakness 1.\", \"comment\": \"Dear Reviewer tHSS,\\n\\nThank you very much for your valuable comments; we will respond to your concerns point by point.\\n\\n> (1) Simply expanding the parameters cannot achieve the same gains as STBIM.\\n\\n**We have completely refuted this point in the original manuscript (see line 451 in Experiment Section 5.1)**. Subsequently, we demonstrated this point in detail in **Appendix B.7.2**. Taking the spatiotemporal backbones STID and D2STGNN as examples, we introduced the variants STID-Plus and D2STGNN-Plus, which stack two backbones to expand the parameter scale, while also increasing the maximum training epochs. 
The experimental results are presented in Table 14 of the manuscript, which we have replicated below for convenience.\\n\\n| | MAE | RMSE | MAPE |\\n|:-------------:|:------:|:-----:|:-----:|\\n| STID | 18.00 | 30.75 | 12.05 |\\n| STID-PLUS | 17.94 | 30.43 | 12.13 |\\n| STID+ours | 17.08 | 29.92 | 11.32 |\\n| | | | |\\n| D2STGNN | 17.44 | 29.58 | 12.18 |\\n| D2STGNN-Plus | 17.68 | 31.04 | 12.57 |\\n| D2STGNN+STBIM | 17.19 | 28.53 | 11.20 |\\n\\nWe can observe that simply increasing the parameter size does not lead to performance enhancement. For the complex model D2STGNN, simply stacking models may actually decrease model performance, as an excessively large parameter size can lead to overfitting on the data. Our performance improvement originates from the effective modeling of inconsistent features.\\n\\nPlease refer to the experimental analysis section of the main body and Appendix B.7.1 in the submitted manuscript; we discussed this when we submitted the paper.\\n\\n----\\n\\n> \\\"The method for measuring the inconsistency between the input data and labels seems redundant?\\\"\\n\\n**We argue that the method of modeling the inconsistency between input data and labels is not redundant!** On the contrary, it is a crucial factor contributing to our performance. **In Experiment Section 5.3 and Appendix Section B.6 of the manuscript**, we evaluated the performance of various spatiotemporal models on spatiotemporally inconsistent samples. Clearly, STBIM can effectively enhance these models to better handle such inconsistencies. For example, as shown in Table 4, taking STID as an example, STBIM improves its performance on inconsistent samples by 24.05%, significantly boosting the predictive performance of STID.\"}", "{\"title\": \"Clarification of question\", \"comment\": \"**Q1.** The paper introduces the unique concept of \\\"input-label inconsistencies\\\" in spatiotemporal prediction. 
Could you clarify how this concept differs from other types of spatiotemporal inconsistency, such as spatiotemporal out-of-distribution (OOD) or distribution shift? **Please refer to W1.**\\n\\n**Q2.** The current claim regarding differences between input and label features (Line 269) seems ambiguous. **Please refer to W2.**\\n\\n\\n**Q3.** Similar input data resulting in different labels.\\n\\nAs emphasized in the introduction, this dilemma is a critical challenge faced by traditional spatiotemporal learning models. In cases where two nodes exhibit similar input distributions but have different labels, the model struggles to effectively differentiate between these nodes, thus producing similar predictions for both and leading to substantial errors between predictions and labels. This observation serves as our motivation: to explicitly model label features in order to enhance the ability of spatiotemporal learning models to address samples with similar inputs but differing labels, as well as samples with different inputs but similar labels. We will revise the corresponding content in the introduction to further emphasize this motivation. \\n\\n**Q4.** What is STRIP in Figure 4(b)?\\n\\nSorry for the error; it should be STBIM here, and we have corrected this in the revised version.\\n\\n**Q5.** Computational study\\n\\n**We have reported the computational overhead introduced by STBIM in Appendix B.7 (as indicated in the main manuscript at line 347) when we submitted the paper.** To address your further concerns, we additionally report the computational complexity on the LargeST-CA dataset with 8600 nodes to demonstrate the scalability of STBIM in large-scale scenarios. 
For your convenience, we have replicated the results for the LargeST-CA dataset in the table below as an example:\\n\\n| STGCN | Parameters | Train time/epoch (s) | Total train time (h) | Inference time (s) | Memory (MB) | Improvement |\\n|:-----:|:-----------:|:-------------------:|:--------------------:|:------------------:|:-----------:|:-----------:|\\n| - | 508K | 788.75 | 30.21 | 236.72 | 29470 | - |\\n| +JT | 624K | 1319.47 | 55.19 | 271.07 | 53156 | +7.60% |\\n| +FT | 624K | 882.39 | 33.52 | 254.41 | 31615 | +5.63% |\\n| | | | | | | |\\n| STID | Parameters | Train time/epoch (s) | Total train time (h) | Inference time (s) | Memory (MB) | Improvement |\\n| - | 150K | 232.95 | 4.94 | 55.15 | 6704 | - |\\n| +JT | 270K | 303.17 | 9.23 | 72.38 | 11265 | +12.80% |\\n| +FT | 270K | 276.38 | 5.58 | 70.24 | 7261 | +14.61% |\\n\\nOur analysis shows that STBIM achieves significant performance gains with minimal additional computational cost. The fine-tuning approach requires less training time, as it directly fine-tunes a pre-trained STNN with STBIM. Both approaches balance performance and efficiency, offering flexibility in selection. In conclusion, given the notable performance enhancement, the complexity burden introduced by STBIM is acceptable. Please note that in **Appendix B.7.2**, we also show that traditional architectures cannot achieve similar performance gains by simply increasing computational complexity.\"}", "{\"summary\": \"The paper introduces a Spatio-Temporal Backward Inconsistency Learning Module (STBIM) designed to enhance spatiotemporal prediction models by addressing input-label inconsistencies. This approach incorporates a residual learning mechanism to refine predictions and improve performance across multiple domains, such as traffic and climate prediction. STBIM\\u2019s integration demonstrates significant accuracy improvements in various datasets and model types.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. 
The paper presents a novel paradigm for handling \\\"input-label inconsistencies\\\" with residual theory based on a spatiotemporal Gaussian Markov Random Field.\\n2. The proposed STBIM is model-agnostic, allowing easy integration into existing models without extensive modification.\\n3. Experiments demonstrate versatility through extensive testing across diverse datasets and models.\", \"weaknesses\": \"1. The paper lacks a thorough discussion of related works addressing spatiotemporal inconsistency. The discussion on \\\"Label propagation in GNN\\\" does not directly address the core issue and could be streamlined.\\n2. The connection between the spatiotemporal residual theory and the proposed STBIM module is somewhat unclear. See questions for details.\\n3. The paper does not include an ablation study for the individual components of STBIM, such as the retrospect MLP and the propagation kernel for smoothing.\", \"questions\": \"1. The paper introduces the unique concept of \\\"input-label inconsistencies\\\" in spatiotemporal prediction. Could you clarify how this concept differs from other types of spatiotemporal inconsistency, such as spatiotemporal out-of-distribution (OOD) or distribution shift?\\n2. The current claim regarding differences between input and label features (Line 269) seems ambiguous. While the paper uses hidden embeddings as features for input and label, the original values can also be considered features. Does this imply that the difference between input and label values could serve as the residual (if labels are available)? This seems contradictory to Equation 8, which defines the residual as the difference between prediction and label. Could you clarify?\\n3. In the case of \\\"similar input data resulting in different labels,\\\" if there are two identical inputs with different labels, wouldn\\u2019t the method struggle to predict correctly since it is a deterministic model? Can this limitation be addressed?\\n4. 
What is STRIP in Figure 4(b)?\\n5. Can the authors provide insights into the computational overhead introduced by STBIM, particularly in large-scale applications?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"W1: I may be misunderstanding something, but your claim about OOD is confusing. You state: \\\"The latter refers to the overall differences in data distribution between training and testing datasets, which can lead to OOD challenges. In contrast, our focus is on a more granular aspect: the inconsistency between input values and future values.\\\" However, a typical case of OOD involves training on historical data and testing on future data, where distribution changes occur\\u2014temporal OOD/shift. Could you clarify how your focus differs from this understanding?\", \"w2\": \"Of course we cannot use ground truth labels for prediction. However, your high-dimensional representations are not embeddings of the ground truth labels but rather embeddings of the predicted labels. If that's correct, this \\\"residual information\\\" essentially reflects the difference between input values and predicted labels. If so, I struggle to understand how this provides meaningful information. Could you elaborate?\", \"q3\": \"Could you answer my question directly? Can your method make correct predictions if two identical inputs are given but have different labels?\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"comment\": \"I still find the definition of input-label differences/residuals to be problematic. 
As I mentioned earlier, computing differences between input values and predicted labels (or even true labels) makes no sense to me.\\n\\nRegarding W3, consider two spatiotemporal graphs (samples) that are nearly identical except for one node in each graph having different future trajectories (as per your claim about the node-level model). In such a scenario, it is evident that the model would fail to make the correct prediction.\\n\"}", "{\"title\": \"Thank you very much for your response.\", \"comment\": \"> W1. Shift in OOD.\\n\\nWhen executing our prediction model, both test and training sets are divided along the temporal axis into a set of input values and labels. For example, in the test set, we select data from $T_p$ past time steps at time point $t$, $X_t \\\\in R^{T_{p}\\\\times N}=[x_{t-T_p+1},\\u2026, x_t]$ as input values, and then select data from the following $T_f$ future time steps $Y_t \\\\in R^{T_{f}\\\\times N}=[x_{t+1},\\u2026, x_{t+T_f}]$ as prediction values. Through this operation, the test set essentially becomes a union of a set of input values $\\\\mathbb{X}=[X_0,...,X_L]$ and a set of labels $\\\\mathbb{Y}=[Y_0,...,Y_L]$, where $L$ denotes the number of samples. The test set is denoted as $\\\\mathbb{D}=\\\\mathbb{X}\\\\bigcup\\\\mathbb{Y}$. After similar operations, the training set is denoted as $\\\\mathbb{T}$.\\n\\n**We focus on the differences between inputs and labels**, but inputs and labels are sampled from the same distribution. This difference is reflected in two aspects: (1) The difference in the node dimension. Given the inputs of node $u$ and node $v$ at time step $t$, $X_t^u$ and $X_t^v$ may be similar, while the labels of the two nodes are significantly different. (2) The difference in the temporal dimension. 
Given the input and label of node $v$ at time step $t$, $X_t^v$ and $Y_t^v$, their statistical characteristics (such as mean and variance) differ significantly.\\n\\n**OOD tasks focus on the differences between test set distribution** Pr($\\\\mathbb{D}$) and training set Pr($\\\\mathbb{T}$), i.e., Pr($\\\\mathbb{D}$) $\\\\neq$ Pr($\\\\mathbb{T}$), which we refer to as overall differences. The differences we are concerned about also exist in the OOD task.\\n\\n> W2. Label embedding\\n\\nYes, during the training process, as the model's predictions become increasingly accurate, the embeddings of model-generated predictions also approach the mapping of true values (though gaps still exist). At this point, the prediction embeddings also contain numerous label features, and we make the most use of these features to improve model accuracy.\\n\\n> W3. Could you answer my question directly? Can your method make correct predictions if two identical inputs are given but have different labels?\\n\\nYes, we can do better, as shown in Figure 4. Allow me to further clarify your misunderstanding. The inconsistency we focus on refers to cases within a sample (containing data from T time steps for N nodes) where (some) nodes have the same inputs but different labels. In this case, the model will tend to make similar predictions for these nodes, which can lead to errors.\\n\\nIf two samples have identical values, meaning all N\\u00d7T data points are exactly the same, all models will give identical predictions - I believe this is what you refer to as deterministic models.\"}
Abnormal signals are defined as samples in which traffic flow experiences a rapid increase or decrease, highlighting a significant example of inconsistencies in the temporal dimension between input and future sequences. To elucidate this point, we would revise the original text to: \\\"As illustrated in Figure 1 (c), inconsistencies in temporal features exist between historical and future values. Typical examples of this phenomenon include abnormal signals characterized by a rapid increase or decrease in traffic flow.\\\"\\n\\n---\\n> Q2 The text highlights the inconsistency between inputs and outputs, using the prediction label to represent the time series of a later time window. Can this be expressed more clearly? If a label is used, could the specific cases be clarified? Typically, a label is a distinct value, not a continuous range of floating-point numbers. \\n\\nThank you very much for your suggestion. In this paper, we use 'label' to represent the time series for future time windows. To avoid further confusion, we will replace \\\"input\\\" and \\\"label\\\" with \\\"historical value\\\" and \\\"future value,\\\" respectively. Consequently, the term \\\"input-label inconsistency\\\" will be revised to \\\"historical-future inconsistency.\\\" To prevent other reviewers from being confused by the renaming of key concepts, we promise that this change will be consistently reflected in the revised version of the manuscript.\\n\\n---\\nQ3. If all results are accurate, why do many fine-tuned results exceed those of joint tuning? Can this be analyzed further?\\n\\nThe fine-tuning method entails adjusting the STBIM and the pre-trained backbone, which serves to establish a robust optimization starting point while simplifying the optimization process. This approach is beneficial for complex models like DGCRN and DDGCRN. As a result, it can outperform joint training methods in specific scenarios. 
Nevertheless, in the majority of cases, joint training\\u2014where STBIM and the backbone are trained simultaneously\\u2014facilitates more flexible tuning and adaptation to the task.\"}", "{\"title\": \"Clarification of weakness\", \"comment\": \"Dear Reviewer FGB6,\\n\\nThank you very much for your valuable review; it is crucial for improving the quality of our manuscript.\\n\\n**W1**. Discussion of related works addressing spatiotemporal inconsistency\\n\\nThe inconsistency we define differs fundamentally from the concept of shift in spatiotemporal out-of-distribution (OOD) learning. The latter refers to the overall differences in data distribution between training and testing datasets, which can lead to OOD challenges. In contrast, our focus is on a more granular aspect: the inconsistency between input values and future values. This inconsistency can occur not only in OOD scenarios but also in independent and identically distributed (IID) situations.\\n\\nTo further assess the impact of the proposed Spatiotemporal Backward Inconsistency Learning Module (STBIM), we investigate whether spatiotemporal shift learning models can benefit from its implementation. Following the OOD setting in the open-source code of STONE [1], we use their STSD dataset as an example, select two spatiotemporal OOD learning models as backbones, and integrate STBIM in a joint-training manner. The results are as follows:\\n\\n| | MAE | RMSE | MAPE |\\n|:------:|:-----:|:-----:|:-----:|\\n| STONE | 18.46 | 30.65 | 15.29 |\\n| +STBIM | **17.89** | **30.08** | **14.91** |\\n| CaST | 26.77 | 40.20 | 21.48 |\\n| +STBIM | **25.63** | **37.86** | **20.95** |\\n\\nWe find that STBIM can also improve the predictive performance of spatiotemporal shift learning models because it effectively enhances the model's accuracy on samples with historical-future inconsistency in the OOD scenario. 
Following your suggestion, we have deleted the introduction about \\\"Label propagation in GNN\\\" and included this discussion in the Related Work section of the revised version.\\n\\n---\\n**W2**. The details of spatiotemporal residual theory.\\n\\nYes, if the labels are available, the difference between the input and the label is used as residual input to the model for learning. However, this assumption never holds in practice because, according to the general paradigm of machine learning, labels cannot be directly used as model inputs for regression prediction tasks. Therefore, we propose an effective solution using label hidden embeddings, and these high-dimensional representations contain richer feature information to comprehensively learn the residual information.\\n\\nEquation 8 in the paper defines the residual as the difference between the predicted expectation and the label expectation. To analyze the internal patterns of spatiotemporal data, we employ the Gaussian Markov Random Field (GMRF) model. In this framework, **as discussed in lines 268 to 276 of our manuscript, the prediction expectation is determined by the high-dimensional representation of the input**, which arises from the spatiotemporal learning model's abstraction of input features, as illustrated in Equation 6. Conversely, the **label expectation is governed by its intrinsic characteristics**. Thus, at a fundamental level, the difference between these two expectations approximates the disparity between input features and label features. We will polish this discussion for clarity.\\n\\n---\\n\\n**W3**. Ablation study\\n\\nSorry for the confusion. We conducted ablation experiments on the Large-SD dataset using the STGCN backbone. We created two variants: \\\"w/o MLP\\\" means we remove the retrospect MLP, and \\\"w/o kernel\\\" means that we remove the propagation kernel for smoothing. 
The experimental results are as follows:\\n| Large-SD | MAE | RMSE | MAPE |\\n|------------|-----------|--------|-------|\\n| w/o MLP | 19.24 | 32.63 | 12.99 |\\n| w/o kernel | 18.95 | 32.65 | 12.68 |\\n| +STBIM | 18.41 | 31.84 | 12.45 |\\n\\n\\\"w/o MLP\\\" showed significantly higher errors, as this retrospect MLP is used to map label features to the same hidden space as input features, leading to smoother training. The experiment without the kernel resulted in poor prediction performance because residual smoothing benefits the model's learning process. We will include the ablation experiments with more backbones in the revised version.\"}", "{\"title\": \"Clarification of Weakness 3: Computational complexity.\", \"comment\": \"**We have reported the computational overhead introduced by STBIM in Appendix B.7 (as indicated in the main manuscript at line 347)**. In the paper, we present the parameter scale, training time, and performance improvements achieved. To address your concerns further, we additionally report the computational complexity on the CA dataset with 8600 nodes to demonstrate the scalability of STBIM in large-scale scenarios. For your convenience, we have replicated the results for the CA dataset in the table below as an example:\\n\\n| STGCN | Parameters | Train time/epoch(s) | Total train time (h) | Inference time (s) | Memory (MB) |\\n|:-----:|:-----------:|:-------------------:|:--------------------:|:------------------:|:-----------:|\\n| - | 508K | 781 | 30.21 | | 29470 |\\n| +JT | 624K | 1321 | 55.19 | | 53156 |\\n| +FT | 624K | 882 | 33.52 | | 31615 |\\n| | | | | | |\\n| STID | Parameters | Train time/epoch(s) | Total train time (h) | Inference time (s) | Memory (MB) |\\n| - | 150K | 232 | 4.94 | | 6204 |\\n| +JT | 270K | 303 | 9.23 | | 11865 |\\n| +FT | 270K | 276 | 5.58 | | 7261 |\\n\\nOur analysis shows that STBIM achieves significant performance gains with minimal computational complexity. 
The fine-tuning approach requires less computational time because it directly fine-tunes a pre-trained STNN together with STBIM. Both approaches balance performance and efficiency, offering flexibility in selection. In conclusion, given the notable performance enhancement, the complexity burden introduced by STBIM is considered acceptable. Please note that, as analyzed, traditional architectures cannot achieve similar performance gains simply by increasing their computational complexity.\"}", "{\"summary\": \"The paper proposes the Spatio-Temporal Backward Inconsistency Learning Module (STBIM), designed to enhance Spatiotemporal Neural Networks (STNNs) by addressing the problem of input-label inconsistency in spatiotemporal prediction tasks. STBIM operates by capturing residuals between input data and labels through their spatio-temporal features, smoothing these residuals through a residual propagation kernel, and adding the resulting corrections to the base predictions generated by STNNs. Extensive experiments across multiple datasets and baseline models demonstrate substantial improvements in prediction accuracy.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The experimental validation across 11 datasets with diverse STNN architectures showcases the generalizability and effectiveness of STBIM. Performance gains of up to 21% underscore its practical impact.\", \"The fact that STBIM is designed to be compatible with a wide range of STNN architectures enhances its potential for broader application and scalability in different domains like traffic and atmospheric forecasting.\"], \"weaknesses\": [\"The method for measuring the inconsistency between the input data and labels seems redundant in spatio-temporal methods. 
Ideally, the spatio-temporal methods such as convolution or transformer and their temporal variants (convLSTM or spatio-temporal transformer) have the inductive biases required to take into account the spatio-temporal features that are mapped to labels. It is just a matter of adding additional layers, which this method effectively does anyway (STBIM is a separate module). I would suggest the authors provide a more detailed comparison between STBIM and traditional spatiotemporal architectures, specifically addressing how STBIM's approach differs from or improves upon simply adding more layers. Further clarification on how STBIM's method of measuring inconsistency provides information that cannot be captured by the inductive biases of existing spatiotemporal methods would also help the paper.\", \"Since there is no new information added (such as test-time labels), adding just an inconsistency loss should not bring any new information for such impressive performance gains, and those gains should be achievable through traditional architectures as well. An ablation study comparing STBIM to equivalent increases in model complexity for traditional architectures would help demonstrate why traditional architectures cannot achieve similar gains. I would suggest the authors explain in more detail how STBIM extracts additional information from the existing data that traditional architectures may miss.\", \"Although performance gains are highlighted, the computational cost associated with adding STBIM to complex STNN models is not fully explored. An analysis of training and inference times with and without STBIM would clarify its practical feasibility. A detailed computational cost analysis, including training and inference times, for models with and without STBIM across different dataset sizes and model complexities could be provided. 
Additionally, a breakdown of memory requirements for models with and without STBIM.\"], \"questions\": [\"Could the authors provide an in-depth breakdown of the computational resources required to train models with and without STBIM? How does this impact its scalability?\", \"how much of the above performance can be recovered by using additional spatio-temporal layers?\", \"How will the test-time labels be used if available?\", \"What extra information is the inconsistency loss bringing/recovering from the input data apart from just maintaining a spatial smoothness, for which convolution / transformer have the right inductive bias?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a spatiotemporal prediction module, the spatiotemporal backward inconsistency learning module (STBIM), to solve the problem of inconsistent input labels in existing spatiotemporal neural networks (STNNs) (i.e., the same input may lead to different outputs, or different inputs may have the same output). By combining label features and integrating spatiotemporal residual theory, STBIM effectively improves the prediction accuracy of the model. Experimental results show that STBIM significantly improves the prediction performance across multiple datasets.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The authors introduces an innovative module to capture spatiotemporal inconsistencies between inputs and outputs (labels in this paper).\\n\\n2. The authors conduct many experiments to show the effectiveness of the proposed model STBIM.\\n\\n3. The code is given which is helpful to reproduce the experimental results.\", \"weaknesses\": \"1. I have significant reservations about the authenticity of the experimental results. 
For example, on page eight, the findings for the \\\"LargeST-GBA dataset\\\", particularly regarding the BigST model, show that the average improvement is considerably less than the claimed increase of over seventeen percentage points.\\n\\n2. The model's core concept involves correcting predicted outcomes through spatiotemporal residuals. This notion closely resembles shift learning in spatiotemporal networks, yet there is no discussion of the shifts.\\n\\n3. Additionally, the writing quality in the article requires improvement, as evidenced by a typographical error in the first line of page five (\\\"to be be\\\") and inconsistencies in tense and capitalization throughout.\", \"questions\": \"1. The third subplot in Figure 1 is not mentioned in the text, and the \\\"abnormal signal\\\" it shows is not referenced in the document. Is it necessary to keep it? What is its significance?\\n\\n2. The text highlights the inconsistency between inputs and outputs, using the prediction label to represent the time series of a later time window. Can this be expressed more clearly? If a label is used, could the specific cases be clarified? Typically, a label is a distinct value, not a continuous range of floating-point numbers.\\n\\n3. If all results are accurate, why do many fine-tuned results exceed those of joint tuning? Can this be analyzed further?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
7qMrDf9zFU
Priority on High-Quality: Instruction Data Selection for Optimized Instruction Tuning
[ "Hong Zhang", "Feng Zhao", "Ruilin Zhao", "Cheng Yan" ]
Large Language Models (LLMs) have demonstrated a remarkable understanding of language nuances through instruction tuning, enabling them to effectively tackle various natural language processing tasks. Previous research on instruction tuning mainly focused on the quantity of instruction data. Recent studies indicate that the quality of instruction data is more significant than the quantity of data. Even selecting a small amount of high-quality data can achieve optimal fine-tuning effects. However, existing selection methods have severe limitations in defining the quality of each instruction data and considering the balance between data quality and data diversity. To address these challenges, we propose a strategy that utilizes noise injection to identify the quality of instruction data. We also implement the strategy of combining inter-class diversity and intra-class diversity to improve model performance. Experimental results demonstrate that our method significantly outperforms the model trained on the full dataset when utilizing only 12% of the entire dataset. Our study provides a new perspective on noise injection in the field of instruction tuning, and also illustrates that a high-quality instruction dataset should possess both quality and diversity. Additionally, we have published our selected high-quality instruction data.
[ "Instruction Data Selection", "Instruction Tuning", "Large Language Models", "High-quality" ]
Reject
https://openreview.net/pdf?id=7qMrDf9zFU
https://openreview.net/forum?id=7qMrDf9zFU
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xUb4iZ6f1u", "x1yey72Ss5", "wTJdlAC29u", "rCXcSVQnkr", "q2VosiXZLu", "fSBPfOo0M1", "WpuHS3oArj", "WmfHGDiFnZ", "H85g3sxV0C", "0dKftswK8N" ], "note_type": [ "official_review", "official_comment", "official_review", "official_review", "decision", "official_review", "official_comment", "official_comment", "meta_review", "official_comment" ], "note_created": [ 1730684396853, 1731983542740, 1730102655017, 1730201473997, 1737523744517, 1730347002886, 1731983396641, 1731983624138, 1733992018807, 1731983594423 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6099/Reviewer_g9bs" ], [ "ICLR.cc/2025/Conference/Submission6099/Authors" ], [ "ICLR.cc/2025/Conference/Submission6099/Reviewer_WAh5" ], [ "ICLR.cc/2025/Conference/Submission6099/Reviewer_cwnb" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6099/Reviewer_dfXN" ], [ "ICLR.cc/2025/Conference/Submission6099/Authors" ], [ "ICLR.cc/2025/Conference/Submission6099/Authors" ], [ "ICLR.cc/2025/Conference/Submission6099/Area_Chair_knTu" ], [ "ICLR.cc/2025/Conference/Submission6099/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The authors propose a data selection approach using noise injection to assess instruction quality by introducing controlled noise into the input data and measuring the consistency of the model's output distribution. This method identifies high-quality data that aligns with the model\\u2019s pre-trained knowledge, allowing for more efficient fine-tuning. Additionally, to prevent over-representation of certain data types, the authors implement inter-class and intra-class diversity strategies using clustering and cosine similarity to ensure a balanced dataset. Their approach demonstrates superior model performance using only 12% of the original dataset, reducing training costs while enhancing model efficiency. 
As part of their contributions, the authors also publish a high-quality instruction dataset curated through their method, offering a valuable resource for further research in instruction tuning.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The approach of assessing data quality via noise injection is innovative, presenting a fresh perspective on defining quality in instruction tuning without relying on external scoring models. This originality extends to combining noise-based quality assessment with diversity strategies, addressing a critical gap in current methods that either prioritize quality or diversity but rarely balance both effectively.\", \"the use of inter-class and intra-class diversity strategies showcases a nuanced understanding of dataset composition and further enhances the quality of the work.\", \"The explanation of noise injection and consistency measures is detailed and accessible, allowing readers to understand the novel concepts and replicate the approach.\"], \"weaknesses\": [\"The paper\\u2019s experimental results could be improved with additional baseline comparisons. While the authors compare their method to full-data training, adding comparisons to recent quality and diversity-oriented methods would allow readers to better gauge the benefits and trade-offs of this approach in a broader context.\", \"The paper solely relies on output distribution consistency following noise injection to define quality, which may overlook other nuanced aspects of instruction quality, such as contextual relevance, instruction clarity, or alignment with specific model objectives.\", \"Although the paper introduces inter-class and intra-class diversity strategies, it only uses k-means clustering and cosine similarity to ensure diversity. 
This limited approach might lead to clusters that fail to represent complex hierarchical or nuanced differences within data types.\"], \"questions\": [\"The paper\\u2019s noise injection approach is innovative, but it lacks an in-depth exploration of the noise parameters. Could you provide more details on how the noise injection parameters, such as the scaling factor \\u03b2, were selected? Did you experiment with different values, and if so, how did these affect the results?\", \"The selected high-quality data are determined based on consistency following noise injection, but it would be helpful to understand what specific characteristics these data share. Could you analyze or describe typical examples of the high-consistency data and explain why these instructions align well with model knowledge?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for your comments, and we reply to them below.\\n\\n**Response to weaknesses**: \\n+ Our initial concept draws from traditional machine learning, where data points with different labels are separated in low-density regions, and similar data points yield similar outputs. Consequently, applying actual perturbations to learned data should not result in significant changes to predictions, indicating consistency in the outputs. In line with the assumptions proposed in the LIMA paper, we hypothesize that the knowledge underlying instructions is consistent with the knowledge acquired during pre-training, making it easier for models to learn the style of instructions. Intuitively, once knowledge is learned, the model can quickly adapt to a new style following a certain degree of stylistic rewriting.\\n\\n+ The core idea of our paper is that the best data for a model is that which suits the model itself. 
Therefore, unlike traditional selection methods that employ external models for data filtering, we use the model itself for data selection, which may entail some resource overhead.\\n\\n+ In our paper, we have compared our selection method with others. For instance, AlpaGasus [1] employs an external model to score each data point, using these scores to assess the quality.\\n\\n+ In addition to the llama2 model, our paper extends the analysis to include the qwen2-0.5B and qwen2-1.5B models to validate the generalizability of our method across different model types and sizes.\\n\\n**Response to questions**: \\n\\n+ Q1: Our core hypothesis is that instructions consistent with knowledge absorbed during pre-training are more easily learned and integrated by the model through subsequent fine-tuning. Therefore, by introducing data perturbations and comparing the consistency of output probability distributions, we verify whether the model has learned the data during pre-training. The experimental results confirm our hypothesis, with the selected data enhancing the model\\u2019s capabilities in areas such as code and mathematics.\\n\\n+ Q2 and Q3: We conducted an in-depth analysis of the filtered data by using GLM4 to classify texts into 10 categories. We observed that different models show certain discrepancies in their preferences for data selection, both in terms of the most and least favored data. Within the same class of models, data selection trends are consistent, which may be attributed to the similar training data used during pre-training. Based on this, we propose that data filtered by smaller models can be utilized to train larger models. 
This approach could potentially be an effective method to reduce resource expenditure in the future.\\n\\n| | Alpaca-ALL | Alpaca_Selected | Rate_of_Change | Alpaca_Selected | Rate_of_Change |\\n|:-------------:|:----------:|:---------------:|:--------:|:---------------:|:-----------:|\\n| Model | - | Llama2-7b | - | Qwen2-0.5b/1.5B | - |\\n| Discipline | 2193 | 242 | 88.96% | 277 / 283 | 87.37% / 87.10% |\\n| Language | 5855 | 72 | **98.77%** | 80 / 78 | **98.63%** / **98.67%** |\\n| Knowledge | 15761 | 2012 | 87.23% | 2567 / 2537 | 83.71% / 83.90% |\\n| Comprehension | 3860 | 669 | **82.67%** | 767 / 817 | 80.13% / 78.83% |\\n| Reasoning | 837 | 94 | 88.77% | 118 / 89 | 85.90% / 89.37% |\\n| Creation | 12758 | 2103 | 83.51% | 2565 / 2780 | **79.89%** / **78.20%** |\\n| Code | 626 | 59 | 90.58% | 82 / 90 | 86.90% / 85.62% |\\n| Mathematics | 3195 | 99 | 96.90% | 89 / 84 | 97.21% / 97.37% |\\n| Other | 5874 | 697 | 88.13% | 796 / 810 | 86.45% / 86.21% |\\n\\n+ Q4: This reference does not have corresponding open-source code, making it difficult to accurately reproduce the method from the paper in a short period of time. In our paper, we have compared our selection method with others. For instance, AlpaGasus [1] employs an external model to score each data point, using these scores to assess the quality.\\n\\n[1] L. Chen, S. Li, J. Yan, H. Wang, K. Gunaratna, V. Yadav, Z. Tang, V. Srinivasan, T. Zhou, H. Huang, and H. Jin. AlpaGasus: Training a better alpaca with fewer data. In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024.\"}", "{\"summary\": \"This paper presents a data selection method that defines the quality of each instruction data point and considers the balance between data quality and data diversity. 
Experiments demonstrate that the proposed method maintains the performance of the model trained on the whole dataset, and even outperforms the model trained on the full dataset.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. Novel approach: The proposed method of using noise injection to identify the quality of instruction data is innovative and provides a new perspective in the field of instruction tuning.\\n2. Performance improvement: Experimental results show that the method significantly outperforms the model trained on the full dataset when using only 12% of the entire dataset, reducing training costs while improving model performance.\\n3. Consideration of quality and diversity: The study effectively combines data quality and diversity, addressing the limitations of existing methods that often focus on one aspect over the other.\", \"weaknesses\": \"1. This paper is not well-motivated.\\n\\n> From this insight, we formulate a hypothesis: instructions that align with the knowledge absorbed during pre-training are more easily learned and integrated by the model through subsequent fine-tuning. We term these effective guiding instructions as \\\"high-quality instructions.\\\" (at line 83)\\n\\nThere are no related publications or experiments (not mentioned in this paper) to support this opinion, but the method of this paper is motivated by such a non-verified 'idea'.\\n\\n2. Lack of in-depth exploration of noise: The study did not conduct an exhaustive gradient experiment to determine the optimal level of noise intensity, leaving room for further investigation into the relationship between noise and data quality.\\n3. While the method shows promising results, the process of noise injection and how it exactly relates to data quality may lack interpretability. This could make it difficult for practitioners to understand and trust the method fully.\\n4. 
The method relies on the pre-trained model's behavior and certain assumptions such as the smoothness and clustering assumptions. If these assumptions do not hold true for certain datasets or models, the effectiveness of the method may be compromised.\", \"questions\": \"See the weakness part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a method that utilizes noise injection to identify the quality of instruction data. They also implement the strategy of combining inter-class diversity and intra-class diversity to select instruction data accompanied by the quality identification method. Experimental results demonstrate that the method outperforms some previous methods and outperforms the model trained on the full dataset when utilizing a small percentage of the entire dataset.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. It is reasonable to use noise injection to judge the quality of instruction data. The authors have explained their insight in the paper, and it is also effective in experiments.\\n2. The writing is clear and easy to understand.\\n3. The ablation experiment well demonstrated the roles of consistency and diversity in the method.\", \"weaknesses\": \"1. Only one dataset is used for the experiments. Using multiple datasets can illustrate the generality of the method. For example, MoDS [1] also uses a larger mixture of instruction datasets, which is composed of instruction data from several different datasets.\\n2. I noticed that the methods cited and compared in this paper are up to 2023, but there are still new methods that have not been compared in related works or experiments, e.g. 
[2][3].\\n\\n[1] MoDS: Model-oriented Data Selection for Instruction Tuning\\n\\n[2] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning\\n\\n[3] LESS: Selecting Influential Data for Targeted Instruction Tuning\", \"questions\": \"See weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper introduces a novel approach for selecting high-quality instructional data for tuning LLMs. Unlike previous research, which primarily emphasizes the volume of instructional data, this work addresses limitations in defining and balancing the quality and diversity of the data. The authors propose a method that uses noise injection to identify high-quality instructional data, analyzing the pre-trained model\\u2019s response to perturbed inputs to measure the consistency of probability distributions. This approach enables a model-centric quality assessment without relying on external scoring models. Additionally, the framework integrates both inter-class and intra-class diversity to ensure comprehensive task coverage and reduce data redundancy. 
Experimental results demonstrate that the proposed method achieves strong performance on unfiltered datasets across various standard benchmarks.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"This paper is well-written in general and well-motivated.\", \"The research direction of studying how to select datasets for LLM supervised fine-tuning is of practical importance.\", \"The proposed data selection pipeline is easy to follow and intuitive.\", \"The experimental results of the proposed data selection pipeline seem to be very promising.\"], \"weaknesses\": [\"The paper lacks a fundamental explanation and justification for the assumption that low KL divergence after noise perturbation indicates high data quality. It is unclear whether this is purely heuristic or based on a specific hypothesis or experimental observation.\", \"The proposed method seems computationally intensive. For instance, all the perturbed data must pass through the LLM to obtain predictions and calculate KL divergence.\", \"There is no comparison between the proposed data selection pipeline and similar methods, e.g., [1].\", \"It is unclear whether the dataset selected using one LLM (preferably at a smaller scale, e.g., 7B) can be broadly applied to fine-tune various LLMs.\", \"[1] https://arxiv.org/abs/2405.00705\"], \"questions\": [\"Why is low KL divergence among LLM outputs before and after noise perturbation an indicator of high data quality? 
Is there any fundamental reason behind it?\", \"Are there potential methods to accelerate the speed of data selection in the proposed approach?\", \"Could one use, say, Llama3 7B to select data and then use it for fine-tuning Llama3 405B?\", \"How does the proposed method compare to the approach presented in https://arxiv.org/abs/2405.00705?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for your comments and we reply to them below.\\n\\n**Response to Q1**: \\n\\tIn our paper, we primarily present the results of data filtering with the parameter \\u03b2 set to 1 and 10. To further refine our research, we supplement the experimental results for different \\u03b2 values here. Through comparative analysis, we find that our selection method consistently outperforms the full-data training across different \\u03b2 values.\\n\\n| | MMLU | Math | Code | Commonsense | World Knowledge | Average |\\n|:----------:|----------:|:-----:|:-----:|:-----------:|:---------------:|:-------:|\\n| Alpaca_all | 47.93 | 13.12 | 13.41 | 55.04 | 20.83 | 30.07 |\\n| ours(\\u03b2=3) | 46.13 | 11.90 | 15.24 | 53.48 | 26.59 | 30.67 |\\n| ours(\\u03b2=5) | 45.28 | 14.10 | 17.07 | 51.76 | 28.86 | 31.41 |\\n| ours(\\u03b2=10) | 47.12 | 15.69 | 15.85 | 56.51 | 29.83 | 33.00 | \\n| ours(\\u03b2=15) | 43.43 | 13.57 | 15.24 | 53.89 | 28.37 | 30.90 | \\n\\n**Response to Q2**: \\n\\tWe extract gerunds from the Alpaca_all, Alpaca_selected, and Alpaca_deleted datasets and present the top six results in terms of frequency, as shown in the table below. In analyzing the characteristic features of high-consistency data, we find that these samples tend to include gerund structures of the \\u201cgenerate\\u201d type, while gerunds of the \\u201crewrite\\u201d type are seldom selected. 
The knowledge content embedded in these generative instructions is significantly greater than that in rewrite instructions, and the intrinsic knowledge of such instructions is more likely to align well with what the model has learned. Furthermore, we analyze the instruction length in data selection within our paper and discover that our method tends to favor longer instructions. This is similar to the type of data the model is exposed to during the pre-training phase, where long data is predominantly handled. This preference for longer instructions may also be an indication of data consistency, as longer instructions typically contain more information.\\n\\n| Verb | Noun | count(Alpaca_All) | Verb | Noun | count(Alpaca_Selected_LLama2) | Verb | Noun | count(Alpaca_Deleted_LLama2) |\\n|:----------:|:--------:|:------------------:|:--------:|:-----------:|:-----------------------------:|:--------:|:--------:|:----------------------------:|\\n| generate | list | 859 | generate | list | 186 | rewrite | sentence | 741 |\\n| rewrite | sentence | 742 | explain | concept | 93 | generate | list | 673 |\\n| give | example | 489 | create | list | 86 | give | example | 457 |\\n| create | list | 480 | write | story | 70 | create | list | 394 |\\n| generate | sentence | 381 | make | list | 52 | sentence | sentence | 374 |\\n| write | story | 358 | write | description | 46 | write | story | 327 |\"}", "{\"comment\": \"Thanks for your comments and we reply to them below.\\n\\n+ Our research methodology is grounded in the core hypothesis proposed by Lima [1]. Lima introduces the hypothesis that the knowledge of large models is predominantly acquired during the pre-training phase, while the fine-tuning process primarily involves learning to adhere to the style of specific instructions. This hypothesis, first posited by Lima, has garnered widespread acceptance in the field. 
Building upon this foundation, we introduced our own core hypothesis: instructions that align with the knowledge absorbed during pre-training are more readily learned and integrated by the model through subsequent fine-tuning. Our experimental results, as detailed in the paper, validate this hypothesis. The data selected by our method has significantly enhanced the model\\u2019s capabilities in areas such as code, mathematics, and commonsense reasoning.\\n\\n+ Given the rationality of our method, we did not delve into detailed gradient experiments during the initial experimental phase. This was because our approach outperformed comprehensive data screening under noise intensities of 1 and 10. To further validate our findings, we supplemented gradient experiments. As the data in the table indicate, our method surpasses full-data training across different \\u03b2 values.\\n\\n| | MMLU | Math | Code | Commonsense | World Knowledge | Average |\\n|:----------:|----------:|:-----:|:-----:|:-----------:|:---------------:|:-------:|\\n| Alpaca_all | 47.93 | 13.12 | 13.41 | 55.04 | 20.83 | 30.07 |\\n| ours(\\u03b2=3) | 46.13 | 11.90 | 15.24 | 53.48 | 26.59 | 30.67 |\\n| ours(\\u03b2=5) | 45.28 | 14.10 | 17.07 | 51.76 | 28.86 | 31.41 |\\n| ours(\\u03b2=10) | 47.12 | 15.69 | 15.85 | 56.51 | 29.83 | 33.00 | \\n| ours(\\u03b2=15) | 43.43 | 13.57 | 15.24 | 53.89 | 28.37 | 30.90 | \\n\\n\\n+ The quality of the data itself is difficult to define in an intuitive manner. Existing research papers, which employ methods such as custom IFD scores or external model ratings, often fail to provide high-confidence interpretability. In contrast, our approach starts from the model's own perspective, using noise injection to evaluate whether the model has encountered related knowledge during the pre-training phase. 
Our hypothesis is that instructions consistent with the knowledge absorbed during pre-training are more easily learned and integrated by the model through subsequent fine-tuning. The experimental results confirm our hypothesis.\\n\\n+ Firstly, the assumptions of smoothness and clustering form the foundational hypotheses in the domain of traditional machine learning, and their core concepts are fully realized in algorithms such as k-means clustering and the Consistency Regularization method. Secondly, we conducted separate experiments on three different types of instruction datasets (model-generated, manually written, and template-revised), and tested them on models of different types and with varying parameter scales to validate the effective generalization capability of our method.\\n\\n[1] Chunting Zhou, Pengfei Liu, Puxin Xu, Srinivasan Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, Susan Zhang, Gargi Ghosh, Mike Lewis, Luke Zettlemoyer, and Omer Levy. LIMA: Less is more for alignment. In Advances in Neural Information Processing Systems 36 (NeurIPS 2023), New Orleans, LA, USA, December 10-16, 2023.\"}", "{\"metareview\": \"This paper focuses on selecting critical instruction data for optimizing instruction tuning. It proposes utilizing noise injection to identify the quality of instruction data and considers both the quality and diversity of selected data. Experiments demonstrate the effectiveness of the proposed method. A high-quality instruction set is curated through the proposed method and published, which contributes to the research community.\\n\\nThe strengths of this paper are reflected in several aspects. First, the research problem and direction of selecting more important instruction data are valuable and promising. 
Second, the motivation for considering both quality and diversity is overall clear. Third, the reported experimental results are good. The weaknesses mainly lie in two aspects. There is a lack of an intuitive and in-depth analysis of the effectiveness and technical contributions of the proposed method. Besides, the effectiveness and adaptability of the proposed method in realistic scenarios (such as the validity of the underlying assumptions) are of concern. The weaknesses outweigh the strengths, which puts this work below the acceptance line.\", \"additional_comments_on_reviewer_discussion\": \"Before the rebuttal, four reviewers pointed out their concerns from several aspects.\\n\\n- The motivation of this paper is not strong and not clear (reviewer WAh5).\\n- The experiments are not sufficient to verify the superiority of the proposed method and justify the claims (reviewers g9bs and dfXN). \\n- The mechanism and in-depth analysis of the proposed method are not clear and unconvincing (reviewers g9bs, dfXN, and WAh5). \\n\\nThe feedback provided by the authors mainly addresses the experimental concerns by providing more empirical results. However, it does not adequately explain the mechanism of noise in instruction data selection, how to formally connect noise to high quality, or the reasonableness of the introduced assumptions. \\n\\nMost of the reviewers are negative about the current form of this paper. Moreover, no reviewer champions it. Based on the above, the AC makes the final rejection recommendation.\"}", "{\"comment\": \"Thanks for your comments and we reply to them below.\\n\\n+ We appreciate your positive feedback on our paper. However, concerning the use of a larger dataset, it is challenging to undertake this task during the rebuttal period, as the training and evaluation of models is a time-consuming and resource-intensive process. 
In our study, we have chosen the widely recognized alpaca dataset as the primary subject for our experiments, which has been used as a key dataset in numerous studies on instruction fine-tuning and selection. Moreover, we have also achieved excellent results on the smaller dolly-15k dataset, which further attests to the effectiveness of our method across datasets of different sizes.\\n\\n+ The contribution of the LESS paper is primarily reflected in the dataset selection for specific target tasks, which does not entirely align with our multi-task screening approach. Considering the constraints of computational resources and time, we only supplemented the experiments from the Superfiltering paper. We utilized the publicly available dataset from that paper and selected a similar number of data points as our main experiment. The results presented in the following table indicate that our method outperforms the approach described in the Superfiltering paper across all noise intensities. In our experiments, the method proposed in the LESS paper did not perform ideally, which may be due to its data selection method not fully considering the model's specific data needs. Relying solely on an external model to select data may not be sufficient to choose the most suitable data for the model itself.\\n\\n| | MMLU | Math | Code | Commonsense | World Knowledge | Average |\\n|:-----------------:|----------:|:-----:|:-----:|:-----------:|:---------------:|:-------:|\\n| Superfiltering | 41.03 | 7.73 | 11.59 | 49.63 | 19.14 | 25.82 | \\n| ours(\\u03b2=3) | 46.13 | 11.90 | 15.24 | 53.48 | 26.59 | 30.67 |\\n| ours(\\u03b2=5) | 45.28 | 14.10 | 17.07 | 51.76 | 28.86 | 31.41 |\\n| ours(\\u03b2=10) | 47.12 | 15.69 | 15.85 | 56.51 | 29.83 | 33.00 | \\n| ours(\\u03b2=15) | 43.43 | 13.57 | 15.24 | 53.89 | 28.37 | 30.90 |\"}" ] }
7psWohxvxp
Exploring a Principled Framework for Deep Subspace Clustering
[ "Xianghan Meng", "Zhiyuan Huang", "Wei He", "Xianbiao Qi", "Rong Xiao", "Chun-Guang Li" ]
Subspace clustering is a classical unsupervised learning task, built on a basic assumption that high-dimensional data can be approximated by a union of subspaces (UoS). Nevertheless, real-world data often deviate from the UoS assumption. To address this challenge, state-of-the-art deep subspace clustering algorithms attempt to jointly learn UoS representations and self-expressive coefficients. However, the general framework of the existing algorithms suffers from feature collapse and lacks a theoretical guarantee for learning the desired UoS representation. In this paper, we present a Principled fRamewOrk for Deep Subspace Clustering (PRO-DSC), which is designed to learn structured representations and self-expressive coefficients in a unified manner. Specifically, in PRO-DSC, we incorporate an effective regularization on the learned representations into the self-expressive model, prove that the regularized self-expressive model is able to prevent feature space collapse, and demonstrate that the learned optimal representations under certain conditions lie on a union of orthogonal subspaces. Moreover, we provide a scalable and efficient approach to implement our PRO-DSC and conduct extensive experiments to verify our theoretical findings and demonstrate the superior performance of our proposed deep subspace clustering approach.
[ "deep subspace clustering", "self-expressive model", "representation learning" ]
Accept (Poster)
https://openreview.net/pdf?id=7psWohxvxp
https://openreview.net/forum?id=7psWohxvxp
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yvnPl4ufY9", "ycFkjSURkv", "xqVo0Zcdnh", "vYu6BOOZiw", "uWDx7xNAF3", "r6LFXwLOhW", "qn3h9a1dSq", "qO8EQ55twg", "o6rCU8jben", "nHlZwKsnkv", "lbSV8sNGR9", "lVmglcoNAy", "irU77ICjmS", "iY611F5DMf", "hKkUiYRNab", "h79mxljfwl", "cD1cKFCISz", "Wl2OKpzzfz", "VB7G0WjvIv", "TX3QodheFE", "RaHq3FG8g8", "ProuWA73c5", "PHGk3VAe3S", "OsQu2Iu0Hq", "KMIAcZ1xuF", "IvW6v76tRK", "I70A5HavAr", "I0Dvw2a1gN", "DKHEqxwhsy", "95XOj3I6nj", "8wvOL7bDXt", "8cVpJtXVk3", "5xga9CWjUb", "51vWuXuUL1", "4XrODHOWfg", "3QBk82wQXW", "3IjUZeenPK", "2rx3OnMP2o", "0jrbDqJaFc", "01d2n5ena5" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732667376631, 1733210634762, 1733206289141, 1732158047602, 1732668183149, 1732156663913, 1732156309291, 1732155901321, 1732156815794, 1732628723777, 1732158386487, 1732158645442, 1732510259241, 1732156611364, 1730246285013, 1732159687041, 1732155936564, 1732510139888, 1737523597936, 1732159714231, 1732156230301, 1732668536858, 1733107119594, 1732160326188, 1732509747470, 1732797516041, 1730633846627, 1730299448771, 1732155682949, 1732509983807, 1732666660075, 1734599453985, 1732667377846, 1732158079390, 1732156773737, 1733106971844, 1730788792962, 1733106608132, 1732157949236, 1732159773595 ], 
"note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3766/Reviewer_JbyQ" ], [ "ICLR.cc/2025/Conference/Submission3766/Authors" ], [ "ICLR.cc/2025/Conference/Submission3766/Reviewer_KBNV" ], [ "ICLR.cc/2025/Conference/Submission3766/Authors" ], [ "ICLR.cc/2025/Conference/Submission3766/Authors" ], [ "ICLR.cc/2025/Conference/Submission3766/Authors" ], [ "ICLR.cc/2025/Conference/Submission3766/Authors" ], [ "ICLR.cc/2025/Conference/Submission3766/Authors" ], [ "ICLR.cc/2025/Conference/Submission3766/Authors" ], [ "ICLR.cc/2025/Conference/Submission3766/Authors" ], [ "ICLR.cc/2025/Conference/Submission3766/Authors" ], [ "ICLR.cc/2025/Conference/Submission3766/Authors" ], [ "ICLR.cc/2025/Conference/Submission3766/Authors" ], [ "ICLR.cc/2025/Conference/Submission3766/Authors" ], [ "ICLR.cc/2025/Conference/Submission3766/Reviewer_JbyQ" ], [ "ICLR.cc/2025/Conference/Submission3766/Authors" ], [ "ICLR.cc/2025/Conference/Submission3766/Authors" ], [ "ICLR.cc/2025/Conference/Submission3766/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission3766/Authors" ], [ "ICLR.cc/2025/Conference/Submission3766/Authors" ], [ "ICLR.cc/2025/Conference/Submission3766/Authors" ], [ "ICLR.cc/2025/Conference/Submission3766/Authors" ], [ "ICLR.cc/2025/Conference/Submission3766/Authors" ], [ "ICLR.cc/2025/Conference/Submission3766/Authors" ], [ "ICLR.cc/2025/Conference/Submission3766/Authors" ], [ "ICLR.cc/2025/Conference/Submission3766/Reviewer_eucv" ], [ "ICLR.cc/2025/Conference/Submission3766/Reviewer_6RSz" ], [ "ICLR.cc/2025/Conference/Submission3766/Authors" ], [ "ICLR.cc/2025/Conference/Submission3766/Authors" ], [ "ICLR.cc/2025/Conference/Submission3766/Authors" ], [ "ICLR.cc/2025/Conference/Submission3766/Area_Chair_TWTH" ], [ "ICLR.cc/2025/Conference/Submission3766/Authors" ], [ "ICLR.cc/2025/Conference/Submission3766/Authors" ], [ "ICLR.cc/2025/Conference/Submission3766/Authors" ], [ "ICLR.cc/2025/Conference/Submission3766/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission3766/Reviewer_KBNV" ], [ "ICLR.cc/2025/Conference/Submission3766/Authors" ], [ "ICLR.cc/2025/Conference/Submission3766/Authors" ], [ "ICLR.cc/2025/Conference/Submission3766/Authors" ] ], "structured_content_str": [ "{\"title\": \"Thanks for the response\", \"comment\": \"Dear authors,\\n\\nThank you for your detailed response. It addressed most of my concerns, and I will consider raising my score.\\n\\nBest,\\nReviewer\"}", "{\"comment\": \"We are very glad that the reviewer is satisfied with our responses and revisions, and we sincerely appreciate the reviewer for the supportive rating. The constructive comments have really helped us to improve the quality of our paper. Thank you very much!\"}", "{\"comment\": \"Thanks to the authors for their detailed response! I am happy to see that the theoretical results are much clearer than before and I have thus increased my rating accordingly.\"}", "{\"title\": \"2. Regarding the Comparison to SOTA Deep Clustering Methods\", \"comment\": \"We thank the reviewer for bringing two recent SOTA methods to our attention.\\n\\nThe performance comparison to ProPos [1] and CDC [2] is shown below, where the results of ProPos and CDC are directly cited from the respective papers. \\n\\n| Method | CIFAR10 | CIFAR20 |\\n| -------------- | ----------- | ----------- |\\n| ProPos | 94.3 / 88.6 | 61.4 / 60.6 |\\n| ProPos w/o PSL | 87.9 / 79.4 | 55.0 / 57.0 |\\n| CDC | 94.9 / 89.3 | 61.7 / 60.9 |\\n| CDC w/o init | 89.4 / 86.5 | 44.4 / 52.3 |\\n| **PRO-DSC** | 92.7 / 85.9 | 59.0 / 60.4 |\\n\\nAlthough ProPos and CDC achieve superior performance, their results primarily depend on introducing a novel Prototype Scattering Loss (PSL) and a specific initialization. 
While the performance of our PRO-DSC is not the best, it is still quite competitive, because currently our PRO-DSC is a vanilla self-expressive model based deep subspace clustering (SEDSC) framework without ``bells and whistles''.\\n\\n- PSL obtains the posterior probabilities of the data categories by performing $k$-means on the entire dataset at each epoch. During the training, the posterior probabilities are used to compute the prototypes of each category, upon which the prototype contrastive learning is conducted. \\n- The initialization of CDC involves performing $k$-means on the entire dataset and using the cluster centers as the initialization for the fully connected layer. This initialization step is iterated until the entire MLP network is initialized. \\n\\nIn summary, the leading performance of ProPos and CDC over our PRO-DSC is mainly due to their effectively exploiting the supervision of the pseudo-label information, whereas our PRO-DSC framework is merely a deep version of the most basic self-expressive model without exploiting the pseudo-label information.\\n\\nOur PRO-DSC attempts to provide a reasonable deep subspace clustering framework with theoretical justification, especially to tackle the big challenge that prevents self-expressive model based deep subspace clustering (SEDSC) from learning representations with the desired structure (i.e., a union of subspaces). We provided theoretical justification to demonstrate that our PRO-DSC can prevent the catastrophic feature collapse, and the empirical results (see Figs. 1 and 2) also confirm our theoretical findings. Moreover, we conducted extensive experiments on deep clustering benchmarks to evaluate the performance of our PRO-DSC. Currently, our framework is implemented naively without special designs or tricks aimed at leaderboard performance; therefore, the reported performance of our PRO-DSC may not be the absolute best. 
\\n\\nAs future work, we are interested in enhancing the performance of our PRO-DSC to achieve the best possible clustering results by incorporating the ideas from the works [1][2], and the work in the field of subspace clustering that uses pseudo-label information to achieve excellent results (e.g., Li et al., TIP 2017).\\n\\n[1] Huang et al., \\\"Learning representation for clustering via prototype scattering and positive sampling\\\", IEEE Transactions on Pattern Analysis and Machine Intelligence 45 (6), 2022: 7509-7524.\\n\\n[2] Jia et al., \\\"Towards Calibrated Deep Clustering Network.\\\" arXiv preprint arXiv:2403.02998, 2024.\\n\\n[3] Li et al., \\\"Structured sparse subspace clustering: A joint affinity learning and subspace clustering framework,\\\" IEEE Transactions on Image Processing 26 (6), 2017: 2988-3001.\"}", "{\"comment\": \"We thank the reviewer for the valuable time and effort in reviewing our paper.\\n\\nIn the rebuttal, we have added the discussions on the limitations and failure cases in Appendix C. Moreover, we have clarified the fairness in our experiments and the reproducibility of our work, and also clarified the significance of our work. In addition, we note that in the initial submission, we have submitted the code to reproduce the results in our paper. \\n\\nWe hope you might find our responses and revisions satisfactory, and sincerely hope you will reconsider your rating based on our clarification in responses and the revised paper. Thank you for your time!\\n\\nBest Regards,\\n\\nAuthors\"}", "{\"title\": \"2. Reproducibility and Experimental Details\", \"comment\": \"To ensure the reproducibility of our work, we have submitted the anonymized source code. 
All datasets used in our experiments are publicly available, and we provided a comprehensive description of the data processing steps in Appendix B.1.\\nAdditionally, we also provided detailed experimental settings and configurations in Appendix B.1 to facilitate the reproduction of our results. We are very glad to answer any question about the details of reproducing our experimental results.\"}", "{\"title\": \"5. More results on synthetic data\", \"comment\": \"The synthetic experiments of adding an additional subspace are presented in Figure B.1 of Appendix B.3 in the revised version.\\n\\n- In case 1, we implement two sets with 100 points in each cluster sampled from Gaussian distribution $\\\\boldsymbol{x}\\\\sim\\\\mathcal{N}([\\\\frac{1}{\\\\sqrt 2},0,\\\\sqrt 2]^\\\\top,0.05\\\\boldsymbol{I}_3)$ and $\\\\boldsymbol{x}\\\\sim\\\\mathcal{N}([-\\\\frac{1}{\\\\sqrt 2},0,\\\\sqrt 2]^\\\\top,0.05\\\\boldsymbol{I}_3)$ in the same side of the sphere. PRO-DSC eliminates the nonlinearity in representations and maximally separates the different subspaces.\\n\\n- In case 2, we add a vertical curve with 100 points sampled by:\\n$$\\n\\\\begin{equation}\\n \\\\boldsymbol{x}=\\\\begin{bmatrix}\\n \\\\cos\\\\left(\\\\frac{1}{5}\\\\sin\\\\left(5\\\\varphi\\\\right)\\\\right)\\\\cos\\\\varphi\\\\\\\\\\\\\\\\\\n \\\\sin\\\\left(\\\\frac{1}{5}\\\\cos\\\\left(5\\\\varphi\\\\right)\\\\right)\\\\\\\\\\\\\\\\\\n \\\\cos\\\\left(\\\\frac{1}{5}\\\\sin\\\\left(5\\\\varphi\\\\right)\\\\right)\\\\sin\\\\varphi\\n \\\\end{bmatrix}+\\\\boldsymbol{\\\\epsilon},\\n \\\\end{equation}\\n$$\\n where $\\\\boldsymbol{\\\\epsilon}\\\\sim\\\\mathcal{N}(\\\\mathbf{0},0.05\\\\boldsymbol{I}_3)$ and use $\\\\sin(\\\\frac{1}{5}\\\\cos(5\\\\varphi))$ to avoid overlap in the intersection of the two curves. 
For the experiments on synthetic data, the learnable mappings $\\boldsymbol{h}(\\cdot;\\boldsymbol{\\Psi})$ and $\\boldsymbol{f}(\\cdot;\\boldsymbol{\\Theta})$ are implemented with two MLPs with Rectified Linear Units (ReLU) as the activation function. The hidden dimension and output dimension of the MLP are set to $100$ and $3$, respectively. \\nWe observed that PRO-DSC has difficulty learning the desired representations for a subset of data points lying at and near the intersection of subspaces. However, the data points away from the intersection are linearized well. \\n\\nThe detailed settings for our PRO-DSC are given as follows.\\n- In case 1, we train PRO-DSC with batch-size $n_b=300$, learning rate $\\eta=5\\times 10^{-3}$ for $5000$ epochs and set $\\gamma=1.3,\\beta=500,\\alpha=3/0.1\\cdot 300$.\\n- In case 2, we train PRO-DSC with batch-size $n_b=200$, learning rate $\\eta=5\\times 10^{-3}$ for $8000$ epochs and set $\\gamma=0.5,\\beta=500,\\alpha=3/0.1\\cdot 200$.\\n\\nFor clarity, we submitted a revised version of our paper. And we hope that our point-to-point responses could address the reviewer's questions well, and we are very glad to answer any further question.\"}", "{\"title\": \"2. Regarding Lemma 1 and its Proof\", \"comment\": \"We note that Lemma 1 clearly marks where it comes from. The reason to provide the proof of Lemma 1 in Appendix A is to serve as a background prerequisite and preparation for the following proofs.\"}", "{\"title\": \"4. Regarding the Novelty\", \"comment\": \"While each term of the losses in our PRO-DSC framework has existed in prior literature, we argue that this is the first time that the term $-\\log \\det (\\cdot)$ has been introduced into the self-expressive loss to address the catastrophic collapse issue. 
Moreover, we have provided theoretical justification and extensive empirical evaluations to demonstrate that adding such a regularization term can indeed resolve the catastrophic feature collapse, which is a longstanding issue in self-expressive model based deep subspace clustering (SEDSC). Besides, the efficient implementation enables our proposed PRO-DSC to handle large-scale, complex real-world datasets. We believe that our PRO-DSC can benefit researchers working on subspace clustering or struggling with deep subspace clustering.\\n\\nFor clarity, we have also submitted a revised version of our paper. And we hope that our point-to-point responses could address the reviewer's questions well, and we are very glad to answer any further question regarding our submission.\"}", "{\"title\": \"Summary of Our Revisions\", \"comment\": [\"We would like to thank all reviewers for the time and effort in reviewing our paper.\", \"To address reviewers' comments and concerns, we have made the following changes.\", \"We have clarified the existence of the optimal dual variable $\\\\nu_\\\\star$ in Theorem 1 by deriving its upper bound, and provided a detailed theoretical justification in **Appendix A**. (L1036-1087) (**KBNV**)\", \"We have revised the contents of Theorem 3 by properly giving a permutation matrix $\\\\Gamma$ and using a relaxed CSC condition, and slightly modified the proof in **Appendix A**. (**KBNV**)\", \"We have clarified the detailed settings for the reported experimental results and updated the results in Table 1 and Table 2 over 10 trials. (**eucv** and **JbyQ**)\", \"We have conducted experiments on datasets Reuters-10k and UCI HAR and added experimental results in Table B.4 of **Appendix B**. (**JbyQ**)\", \"We have conducted experiments to compare with AGSSC and three versions of ARSSCs and added the experimental results in Table B.5 of **Appendix B**. 
(**JbyQ**)\", \"We have added the discussions on limitations and failure cases in **Appendix C**. (See L1654-1668) (**eucv**)\", \"We have add notes on using $\\\\log \\\\det (\\\\cdot)$ and discussions on extensibility of our PRO-DSC in **Appendix C** (L131-134; L1669-1676). (**6RSz** and **JbyQ**)\", \"The revised contents are highlighted **in blue** in the revised paper. **We kindly invite all reviewers to take a look at these updated results and our responses.**\", \"We sincerely thank all reviewers again for their valuable suggestions, which have greatly helped improve the quality of our work. If you have any further questions, we would be very happy to discuss them further.\"]}", "{\"title\": \"1. Regarding to the Evaluation\", \"comment\": \"First of all, we would like to express our appreciation to the reviewer for recognizing our theoretical justification, empirical evaluations and writing, and for the constructive comments and insightful suggestions.\\n\\nIt is not the case that other algorithms are tested only one trial. \\nIn Appendix B.1, we provided the running details of EnSC, SSCOMP, DSCNet, and CPP. \\n- For SENet, SCAN and EDESC, we adjust the hyper-parameters and repeat experiments for three times, with only the best results are reported.\\n- For TEMI, we directly cited the results from the paper.\\n- For our PRO-DSC, we repeated experiments for three trials and report the average results (with standard deviation). 
\\n\\nNevertheless, we overlooked reporting the running details for the two classical methods (i.e., k-means and spectral clustering): \\n- For k-means and spectral clustering (including when spectral clustering is used as the final step in subspace clustering), we repeat the clustering 10 times with different random initializations (by setting n$\\\\_$init=10 in scikit-learn) and report the best results.\\n\\nDuring the rebuttal, we conducted more experiments on the CLIP features and reported the average results with standard deviation over 10 trials in the revised version. For a clear comparison, we list the results over 3 trials and 10 trials in Table B.3. For the experiments trained from scratch, we will update them in the final version due to the time limitation of the rebuttal period.\"}", "{\"title\": \"2. Experimental Results on Datasets Reuters and UCI HAR\", \"comment\": \"The dataset Reuters-10k consists of four text classes, containing 10,000 samples of dimension 2,000. UCI HAR is a time-series dataset, consisting of six classes and 10,299 samples of dimension 561. We take EDESC as the baseline method for deep subspace clustering on Reuters-10k, and take N2D [1] and FCMI [2] as the baseline methods for UCI HAR, whose results are directly cited from the respective papers. We conducted experiments with PRO-DSC on Reuters and UCI HAR following the same data-processing protocol as the baseline methods. We train and test PRO-DSC on the entire dataset and report the results over 10 trials.\\n\\nExperimental results (ACC% / NMI%) are provided in Table B.5. We observed that our PRO-DSC yields better results than EDESC on REUTERS-10k, and competitive results on UCI HAR. Note that our PRO-DSC with a vanilla self-expressive model has already demonstrated highly competitive performance. 
Incorporating pseudo-label information, a more sophisticated self-supervision strategy, or a better calibrated self-expressive model might enable PRO-DSC to yield even more promising performance. We have added the results in Table 4 of Appendix B. \\n\\n| Dataset | REUTERS-10k | UCI HAR |\\n| :------- | :---------------------------: | :-----------------------: |\\n| k-means | 52.4 / 31.2 | 59.9 / 58.8 |\\n| SC | 40.2 / 37.5 | 53.8 / 74.1 |\\n| AE | 59.7 / 32.3 | 66.3 / 60.7 |\\n| VAE | 62.5 / 32.9 | - / - |\\n| JULE | 62.6 / 40.5 | - / - |\\n| DEC | 75.6 / 68.3 | 57.1 / 65.5 |\\n| DSEC | 78.3 / 70.8 | - / - |\\n| EDESC | 82.5 / 61.1 | - / - |\\n| DFDC | - / - | 86.2 / 84.5 |\\n| N2D [1] | - / - | 82.8 / 71.7 |\\n| FCMI [2] | - / - | **88.2** / 80.7 |\\n| PRO-DSC | **85.7**\\u00b11.3 / **64.6**\\u00b11.3 | 87.1\\u00b10.4 / **80.9**\\u00b11.2 |\\n\\n[1] McConville, et al., \\\"N2D: (not too) deep clustering via clustering the local manifold of an autoencoded embedding,\\\" 25th International Conference on Pattern Recognition (ICPR), Jan. 2021.\\n\\n[2] Zeng, et al., \\\"Deep fair clustering via maximizing and minimizing mutual information: Theory, algorithm and metric,\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023.\"}", "{\"title\": \"Kindly invite you to check our point-to-point responses\", \"comment\": \"We would like to thank the reviewer for the time and effort in reviewing our paper. We have carefully addressed the comments in detail, and hope you might find our responses satisfactory. Since the discussion phase is approaching its close, we are very much looking forward to hearing from you about any further feedback. We will be very happy to clarify further concerns (if any). Thank you for your time.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"title\": \"1. 
Regarding Limitations and Failure Cases\", \"comment\": \"We thank the reviewer for the recognition of our paper's writing, techniques, and experiments.\\n\\n**Limitations:** In this paper, we explore an effective framework for deep subspace clustering with theoretical justification. However, it is not yet clear how to develop a geometric guarantee for our PRO-DSC framework to yield subspace-preserving (correct) solutions. Moreover, since it is an unsupervised learning framework, we leave the extension to the semi-supervised setting as future work. \\n\\n**Failure cases:** In this paper, we evaluate our PRO-DSC framework on two scenarios of synthetic data (Fig. 5), six benchmark datasets with CLIP features (Table 1), five benchmark datasets for training from scratch (Table 2), and three out-of-domain datasets (Table B.9), using four different regularization terms (Table 4), different feature extractors (Table B.8) and varying hyper-parameters (Fig. 7 and Table B.10). During the rebuttal, in reply to Reviewer JbyQ, we also conducted experiments on two face image datasets (Extended Yale B and ORL), an object recognition dataset (COIL-100), a text dataset (REUTERS-10k) and a time-series dataset (UCI HAR). So far, we have not found significant failure cases. \\n\\nHowever, as demonstrated in Fig. 1, our PRO-DSC will fail if the sufficient condition to prevent catastrophic feature collapse is violated due to improper hyper-parameters $\\\\gamma$ and $\\\\alpha$. We have added the discussions on limitations and the failure cases in Appendix C of the revised version.\"}", "{\"summary\": \"Deep subspace clustering methods usually encounter the challenge of feature collapse and lack theoretical guarantees for learning representations that form a union of subspaces (UoS) structure. To address these issues, this paper presents a principled framework for deep subspace clustering (PRO-DSC). 
The framework incorporates an effective regularization on the learned representations to prevent feature space collapse. Furthermore, theoretical analysis demonstrates that PRO-DSC can yield representations of a UoS structure under certain conditions. Experimental results show the effectiveness of the proposed method.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This work presents an effective method to prevent feature collapse and addresses the lack of theoretical guidance for learning representations with a UoS structure. An efficient implementation is presented to alleviate the computational burden of self-expressive learning.\\n2. The work is sound from a technical perspective. It combines solid theoretical proof and empirical evidence. \\n3. Experiments on synthetic and real-world data are conducted to evaluate the effectiveness. Meanwhile, both quantitative and qualitative results are provided for comparison.\\n4. Overall, this paper is well-written and thoughtfully structured.\", \"weaknesses\": \"1. As one of the main challenges this work focuses on is learning UoS representations, I suggest the authors emphasize the significance of this challenge by clarifying the associated consequences and providing empirical observations to support it.\\n2. Since the proposed PRO-DSC is designed as a framework, I expect further discussion and experimentation on its scalability and extensibility.\\n3. I have several questions and concerns regarding the experimental parts:\\n 1) Why is only the proposed method run multiple times for evaluation, while other methods seem to be tested only once? Additionally, why is the method run only three times? Repeating the evaluation 10 times is commonly preferred for more reliable results.\\n 2) Why are only image datasets used for evaluation? 
I suggest testing the performance on a wider range of datasets, such as the Reuters and UCI HAR datasets.\\n 3) Most of the comparison methods used in experiments are outdated. Please consider adding two more state-of-the-art subspace clustering methods for comparison, such as AGCSC [1] and SAGSC [2].\\n 4) Why do the comparison methods differ between experiments in Tables 1 and 2?\\n 5) In the synthetic data experiments, why is DSCNet used as the representative SEDSC method rather than the more competitive, recent SENET?\\n\\n[1] Wei, Lai, et al. \\\"Adaptive graph convolutional subspace clustering.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.\\n\\n[2] Wang, Libin, et al. \\\"Attention reweighted sparse subspace clustering.\\\" Pattern Recognition, 139 (2023): 109438.\", \"questions\": \"I have several questions and concerns regarding the experimental parts:\\n1) Why is only the proposed method run multiple times for evaluation, while other methods seem to be tested only once? Additionally, why is the method run only three times? Repeating the evaluation 10 times is commonly preferred for more reliable results.\\n2) Why are only image datasets used for evaluation? I suggest testing the performance on a wider range of datasets, such as the Reuters and UCI HAR datasets.\\n3) Most of the comparison methods used in experiments are outdated. Please consider adding two more state-of-the-art subspace clustering methods for comparison, such as AGCSC [1] and SAGSC [2].\\n4) Why do the comparison methods differ between experiments in Tables 1 and 2?\\n5) In the synthetic data experiments, why is DSCNet used as the representative SEDSC method rather than the more competitive, recent SENET?\\n\\n[1] Wei, Lai, et al. \\\"Adaptive graph convolutional subspace clustering.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.\\n\\n[2] Wang, Libin, et al. 
\\\"Attention reweighted sparse subspace clustering.\\\" Pattern Recognition, 139 (2023): 109438.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"3. Comparison to AGCSC and SAGSC\", \"comment\": \"We thank the reviewer for pointing out two state-of-the-art subspace clustering methods AGCSC (Wei et al. CVPR'23) and ARSSC (Wang et al. PR'23). Since that both of the two methods cannot handle the datasets used for evaluating our PRO-DSC, we conducted experiments on the benchmark datasets: Extended Yale B (EYaleB), ORL, and COIL-100.\\nWe set the architecture of pre-feature layer in PRO-DSC as the same to the encoder of DSCNet. \\n\\nWe repeated experiments for 10 trials, reported the average results (ACC% / NMI%) with standard deviation in Table B.5 of Appendix and also listed the results as follows. \\n\\n| Dataset | EYale-B | ORL | COIL-100 |\\n| ------------- | ------------------------------------- | ----------------------------------------------- | --------------------------------------- |\\n| EnSC | 65.2 / 73.4 | 77.4 / 90.3 | 68.0 / 90.1 |\\n| SSCOMP | 78.0 /84.4 | 66.4 / 83.2 | 31.3 / 58.8 |\\n| S3COMP | 87.4 / - | - / - | 78.9 / - |\\n| DSCNet | 69.1 / 74.6 | 75.8 / 87.8 | 49.3 / 75.2 |\\n| J-DSSC [3] | 92.4 / $\\\\underline{95.2}$ | 78.5 / 90.6 | 79.6 / 94.3 |\\n| A-DSSC [3] | 91.7 / 94.7 | 79.0 / 91.0 | $\\\\underline{82.4}$ / $\\\\underline{94.6}$ |\\n| AGCSC [4] | 92.3 / 94.0 | **86.3** / **92.8** | OOT / OOT |\\n| ARSSC-LP [5] | 95.7 / - | 75.5 / - | - / - |\\n| ARSSC-LSP [5] | 95.9 / - | 71.3 / - | - / - |\\n| ARSSC-MCP [5] | **99.3** / - | 72.0 / - | - / - |\\n| PRO-DSC | $\\\\underline{96.0}$\\u00b10.3 / **95.7**\\u00b10.8 | $\\\\underline{83.2}$\\u00b12.2 / $\\\\underline{92.7}$\\u00b10.6 | **82.8**\\u00b10.9 / **95.0**\\u00b10.6 |\\n\\nThe results of AGCSC, ARSSC, and DSSC are directly cited from their papers; the resutls of DSCNet, SSCOMP, S3COMP 
and EnSC are cited from DSSC [3]. We have added these results in Table B.5 of the revised version of our paper. \\nThe hyper-parameter configurations for training PRO-DSC are summarized as follows (also in Table B.3 in the revised version).\\n\\n| Dataset | $\\\\eta$ | $d_{pre}$ | $d$ | $\\\\\\\\#$ epoch | $n_b$ | $\\\\\\\\#$ warm-up | $\\\\gamma$ | $\\\\beta$ |\\n| ----------- | ------ | --------- | ---- | ---------- | ----- | ------------ | -------- | ------- |\\n| REUTERS-10k | 1e-4 | 1024 | 128 | 100 | 1024 | 50 | 50 | 200 |\\n| UCI HAR | 1e-4 | 1024 | 128 | 100 | 2048 | 20 | 100 | 300 |\\n| EYale-B | 1e-4 | 1024 | 256 | 10000 | 2432 | 100 | 200 | 50 |\\n| ORL | 1e-4 | 80 | 64 | 5000 | 400 | 100 | 75 | 10 |\\n| COIL-100 | 1e-4 | 12800 | 100 | 10000 | 7200 | 100 | 200 | 100 |\\n\\n- AGCSC. Our method surpasses AGCSC on the Extended Yale B dataset and achieves comparable results on the ORL dataset. However, AGCSC did not yield a result on COIL-100 within 24 hours.\\n\\n- ARSSC. ARSSC employs three different non-convex regularizers: $\\\\ell_\\\\gamma$ norm Penalty (LP), Log-Sum Penalty (LSP), and Minimax Concave Penalty (MCP). \\n\\nWhile ARSSC-MCP performs the best on Extended Yale B, our PRO-DSC outperforms ARSSC-MCP on ORL. While AGCSC performs the best on ORL, it yields inferior results on Extended Yale B and cannot yield results on COIL-100 within 24 hours. Thus, we did not report the results of AGCSC on COIL-100 and marked it as Out of Time (OOT). Our PRO-DSC achieves the second-best results on Extended Yale B and ORL, and the best results on COIL-100. 
Since we have not found open-source code for ARSSC, we are unable to obtain its results on COIL-100.\\n\\n[3] Lim et al.: \\\"Doubly stochastic subspace clustering\\\", arXiv preprint arXiv:2011.14859 (2020).\\n\\n[4] Wei et al.: \\\"Adaptive graph convolutional subspace clustering\\\", Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023.\\n\\n[5] Wang et al.: \\\"Attention reweighted sparse subspace clustering\\\", Pattern Recognition, 139 (2023): 109438.\"}", "{\"title\": \"3. Regarding Theorem 1\", \"comment\": \"We thank the reviewer for carefully checking our informal statement of Theorem 1. The formal and complete statement of Theorem 1 is provided in Appendix A.\\n\\nNote that $\\\\nu$ is a Lagrangian multiplier (which is also called a dual variable in its Lagrangian dual problem). We have added a paragraph of justification (highlighted in blue in Appendix A of the revised version) for the existence of the optimal Lagrangian multiplier $\\\\nu_\\\\star$. \\n\\nOur analysis is based on the KKT optimality condition of the problem in Eq. (5) and reveals that $\\\\nu_\\\\star \\\\le \\\\frac{\\\\alpha}{1+\\\\alpha}$ or $\\\\nu_\\\\star \\\\le \\\\frac{\\\\alpha}{\\\\frac{d}{N}+\\\\alpha}$ in the cases of $d\\\\ge N$ and $d<N$, respectively. Thus, when the inequality in each case $\\\\frac{\\\\alpha}{1+\\\\alpha}< \\\\alpha - \\\\gamma \\\\lambda_{max}(\\\\boldsymbol{M})$ or $\\\\frac{\\\\alpha}{\\\\frac{d}{N}+\\\\alpha}< \\\\alpha - \\\\gamma \\\\lambda_{max}(\\\\boldsymbol{M})$ holds, i.e., $\\\\gamma \\\\lambda_{max}(\\\\boldsymbol{M}) < \\\\frac{\\\\alpha^2}{1+\\\\alpha}$ or $\\\\gamma \\\\lambda_{max}(\\\\boldsymbol{M}) < \\\\frac{\\\\alpha^2}{\\\\frac{d}{N}+\\\\alpha}$, the condition of Theorem 1 will be automatically satisfied. 
This result implies that we only need to adjust the hyper-parameters $\\\\gamma$ and $\\\\alpha$ to satisfy the inequality $\\\\gamma \\\\lambda_{max}(\\\\boldsymbol{M}) < \\\\frac{\\\\alpha^2}{1+\\\\alpha}$ or $\\\\gamma\\\\lambda_{max}(\\\\boldsymbol{M})<\\\\frac{\\\\alpha^2}{\\\\frac{d}{N}+\\\\alpha}$, with no need to be concerned about the specific value of $\\\\nu_\\\\star$. \\n\\nThe justification has been added in Appendix A of the revised version.\"}", "{\"title\": \"Kindly invite you to check our point-to-point responses\", \"comment\": \"We would like to thank the reviewer for the time and effort in reviewing our paper. We have carefully clarified the comments and the concerns in detail, and hope you might find our responses satisfactory. Since the discussion phase is approaching its close, we are very much looking forward to hearing from you about any further feedback. We will be very happy to clarify further concerns (if any). Thank you for your time.\\n\\nBest regards,\\n\\nAuthors\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"4. About the different baselines in Tables 1 and 2\", \"comment\": \"We apologize for the confusion. The different baselines in Tables 1 and 2 are due to the fact that many of the baselines can only handle either the extracted features or the raw images.\\n\\nTo be specific, the reason for using different baseline algorithms in Tables 1 and 2 is mainly the types of input data that can be handled by different algorithms. Table 1 shows the clustering performance of using extracted CLIP features, so the baselines here must be applicable to vector inputs. On the contrary, all the deep clustering methods in Table 2 rely on data augmentation, which is applied to raw images, and thus cannot be applied to the fixed extracted features. 
\\nTo further clarify, we group the baselines in Tables 1 and 2 for further explanation.\\n\\n- **The baselines included in both Tables 1 and 2.**\\n For $k$-means and spectral clustering (SC), which are used in both Tables 1 and 2, we take the extracted CLIP features (for Table 1) and the flattened raw images (for Table 2) as their input. \\n For SCAN, when it is performed with the CLIP features (in Table 1), we omit its augmentation-consistency-based representation learning loss.\\n Notably, CPP (in Table 1) and MLC (in Table 2) are almost the same, as both are based on the MCR2 principle (Yu et al. NeurIPS'20), except that MLC additionally promotes consistency among different augmentations.\\n\\n- **The baselines included in Table 1 but not in Table 2.**\\n The scalable subspace clustering algorithms, e.g., SSCOMP, ENSC, SENet, are not suitable for raw images as inputs. Since DSCNet is not a scalable algorithm, it is challenging to run on these datasets.\\n Thus, we only conduct experiments on the CLIP features of the test set on CIFAR and ImageNet for these methods. Since TEMI is specifically designed for CLIP features, it does not appear in Table 2.\\n\\n- **The baselines included in Table 2 but not in Table 1.**\\n The methods, including CC, GCC, NNM, NMCE, IMC-SwaV, which rely on data augmentation for unsupervised representation learning, cannot be properly transferred to the CLIP features.\\n\\nTherefore, we list these algorithms in Tables 1 and 2 of our submission to best present their performance and for fair comparison. To compare with these baseline algorithms in both cases, we extend our PRO-DSC to evaluate clustering performance with both types of input.\"}", "{\"title\": \"4. Regarding Theorem 3\", \"comment\": \"### **1) About the Optimal Solutions**\\n\\nIn this context, we are referring to 'all' optimal solutions. 
In the proof of Theorem 3, the inequalities are based on the properties of convex functions, where the equality holds for all the optimization variables $\\\\boldsymbol{Z}$ and $\\\\boldsymbol{C}$ such that both $\\\\boldsymbol{G}$ and $\\\\boldsymbol{C}$ are block diagonal under a certain permutation $\\\\Gamma$. Thus, the conclusion in Theorem 3 does not depend on a specific global optimal solution.\\n\\nWe note that the optimal solution of PRO-DSC is NOT unique, since multiplying $\\\\boldsymbol{Z}$ by an orthogonal matrix does not change the objective value or the constraint (see the results on synthetic data in Fig. 5). This means that although PRO-DSC is a nonconvex problem, it possesses an elegant rotation symmetry. We conjecture that all the local minimizers up to rotation symmetry are equivalently good [1], and we leave the complete analysis of the optimization landscape of our PRO-DSC as future work. We highly appreciate the reviewer's insightful questions, which remind us to take note of the optimization landscape. \\n\\n[1] J. Wright and Y. Ma, High-Dimensional Data Analysis with Low-Dimensional Models: Principles, Computation and Applications, 2022. (Chapter 7)\\n\\n### **2) About $\\\\Gamma$ and the CSC condition in Theorem 3**\\n\\nWe apologize for the confusion in using the notion of the permutation matrix $\\\\Gamma$ in Theorem 3. Note that multiplying by a permutation matrix on the right (or left) rearranges the columns (or rows) of the matrix. The purpose of the rearrangement is to group together the data points that potentially belong to the same subspace into the same block, which allows us to observe the structures of the learned representations and the self-expressive matrix. 
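As a minimal numerical illustration of this column rearrangement (the matrices below are toy values, not taken from the paper):

```python
import numpy as np

# Toy representation matrix: columns 0 and 2 lie in one "subspace",
# columns 1 and 3 in another (interleaved ordering).
Z = np.array([[1., 0., 2., 0.],
              [0., 1., 0., 3.]])

# Permutation that groups columns of the same subspace together:
# new column order = [0, 2, 1, 3].
perm = [0, 2, 1, 3]
Gamma = np.eye(4)[:, perm]  # permutation matrix

# Right-multiplication rearranges the columns into blocks [Z_1, Z_2].
Z_blocked = Z @ Gamma

# The Gram matrix of the rearranged columns is block diagonal here,
# since the two toy subspaces are orthogonal.
G = Z_blocked.T @ Z_blocked
```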
For simplicity and without loss of generality, we assume that the columns of $\\\\boldsymbol{Z}$ are arranged into blocks by a certain permutation matrix $\\\\boldsymbol{\\\\Gamma}$, i.e., $\\\\boldsymbol{Z} =[\\\\boldsymbol{Z}_1,\\\\boldsymbol{Z}_2,\\\\cdots, \\\\boldsymbol{Z}_k]$. The contents of Theorem 3 have also been carefully revised accordingly. Please refer to the revised version of our submission. We thank the reviewer for pointing out this confusing abuse of notation for $\\\\boldsymbol{\\\\Gamma}$. \\n\\n### **3) About the CSC Condition and Interpretations**\\n\\nWe consider the solutions satisfying the CSC condition as a (non-empty) feasible set; the optimal $(\\\\boldsymbol{Z}_\\\\star,\\\\boldsymbol{C}_\\\\star)$ will then make $(\\\\boldsymbol{G}_\\\\star, \\\\boldsymbol{C}_\\\\star)$ block diagonal with respect to a certain permutation $\\\\boldsymbol{\\\\Gamma}$. \\n\\nOn the other hand, for a given $(\\\\boldsymbol{Z},\\\\boldsymbol{C})$ (at a certain iteration or when converged), suppose that we arrange $\\\\boldsymbol{Z}$ with respect to the ground-truth labels; then the CSC condition can serve as a computable metric of the distance from $(\\\\boldsymbol{Z},\\\\boldsymbol{C})$ to the desired arrangement of orthogonal subspaces $\\\\boldsymbol{Z}_\\\\star$ and the (correct) block-diagonal coefficient matrix $\\\\boldsymbol{C}_\\\\star$ (as in Figure 4).\\n\\nFor a given $(\\\\boldsymbol{Z},\\\\boldsymbol{C})$, we can provide three simple examples to interpret CSC:\\n\\n- When the subspaces of $\\\\boldsymbol{Z}$ are orthogonal, since $\\\\boldsymbol{G}-\\\\boldsymbol{G}^*=0$, the condition is automatically satisfied.\\n\\n- When $\\\\boldsymbol{C}$ is a block diagonal matrix, the condition is also satisfied because $\\\\boldsymbol{G}-\\\\boldsymbol{G}^*$ is an off-block diagonal matrix.\\n\\n- When the subspaces associated with $\\\\boldsymbol{Z}_1, \\\\boldsymbol{Z}_2,\\\\cdots,\\\\boldsymbol{Z}_k$ are 
independent, CSC is also satisfied: when $\\\\boldsymbol{Z}$ is fixed, the self-expressive model yields a block-diagonal $\\\\boldsymbol{C}$ for any regularizer satisfying the Enforced Block Diagonal (EBD) conditions (see Lu et al. 2018).\\n\\nAdditionally, in Figure 4, our experimental results show that, even though there are fluctuations early in the optimization process, the CSC condition gradually holds as the training progresses.\"}", "{\"comment\": \"We thank the reviewer for the valuable time and the effort in reviewing our paper.\\n\\nIn the rebuttal, we have clarified the reason for adopting the $\\\\log \\\\det(\\\\cdot)$ term, interpreted the other choices and their physical meanings, and discussed the performance comparison with the mentioned methods. We have revised our paper accordingly (see L131-134, L1669-1676). \\n\\nWe hope you might find our responses and revisions satisfactory, and sincerely hope you will reconsider your rating based on our clarifications and the revised paper. Thank you for your time! \\n\\nBest Regards,\\n\\nAuthors\"}", "{\"title\": \"Kindly invite you to check our replies\", \"comment\": \"We thank the reviewer for the valuable time and the effort in reviewing our paper. Since the discussion phase is approaching its close, we are very much looking forward to hearing from you about any further feedback. We hope you might find our responses and revisions satisfactory, and sincerely hope you will reconsider your rating based on our clarifications and the revised paper.\\n\\nThank you for your time!\\n\\nBest Regards,\\n\\nAuthors\"}", "{\"title\": \"6. Regarding Scalability and Extensibility of Our PRO-DSC\", \"comment\": \"It is noteworthy that the goal of our work is to provide a reasonable (or principled) framework for deep subspace clustering based on the self-expressive (SE) model. 
We demonstrate theoretically and empirically that adding a $\\\\log \\\\det(\\\\cdot)$ term into the SE model can prevent the catastrophic feature collapse, and show that our PRO-DSC produces representations aligned with a union of orthogonal subspaces. Therefore, our PRO-DSC provides a general framework for self-expressive model based deep subspace clustering.\\n\\n- **Scalability**: In the implementation of our PRO-DSC, rather than directly optimizing the variables $Z$ and $C$, we optimize the parameters which are used to reparameterize them. Such a reparameterization strategy makes our implementation scalable to large-scale datasets and endows it with generalization ability to out-of-sample unseen data. \\n\\n- **Extensibility**: Note that our PRO-DSC with a vanilla self-expressive model has already demonstrated highly competitive performance. Both AGCSC and ARSSC improve the self-expressive model by incorporating GCN modules and self-adaptive attention mechanisms, respectively. Thus, it would be appealing to incorporate them into the deep self-expressive models under our PRO-DSC framework, and we believe that extending PRO-DSC with current SOTA self-expressive models (such as AGCSC and ARSSC) would be even more promising. \\n\\nThus, as a general framework for self-expressive model based deep subspace clustering, our PRO-DSC is reasonable, scalable and flexible for various extensions.\\n\\nFor clarity, we have also submitted a revised version of our paper. We hope that our point-to-point responses address the reviewer's questions and concerns well, and we are very glad to answer any further questions.\"}", "{\"title\": \"Kindly invite you to check our responses\", \"comment\": \"We would like to thank the reviewer for the time and effort in reviewing our paper. We have carefully addressed the comments and the concerns point-to-point in detail. We hope you might find our responses satisfactory. 
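To make the role of the $\\log \\det(\\cdot)$ regularizer discussed in this thread concrete, here is a rough numerical sketch. The `coding_rate` function below is an MCR2-style total coding rate used purely as an illustrative stand-in; the exact form, dimensions, and $\epsilon$ in the paper's Eqs. (3)-(4) are not reproduced here:

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    """MCR2-style total coding rate: 0.5 * logdet(I + d/(N*eps^2) * Z Z^T).

    An illustrative stand-in for a log-det regularizer, not the paper's
    exact Eq. (3)."""
    d, N = Z.shape
    _, logdet = np.linalg.slogdet(np.eye(d) + (d / (N * eps**2)) * Z @ Z.T)
    return 0.5 * logdet

rng = np.random.default_rng(0)

# Spread features: random unit-norm columns span many directions.
Z_spread = rng.standard_normal((16, 200))
Z_spread /= np.linalg.norm(Z_spread, axis=0, keepdims=True)

# Collapsed features: every column is the same unit vector (rank one).
Z_collapsed = np.tile(rng.standard_normal((16, 1)), (1, 200))
Z_collapsed /= np.linalg.norm(Z_collapsed, axis=0, keepdims=True)

# A collapsed configuration gives a much smaller log-det value, so
# maximizing such a term in the objective opposes feature collapse.
rate_spread, rate_collapsed = coding_rate(Z_spread), coding_rate(Z_collapsed)
```

The point of the sketch is only the ordering `rate_spread >> rate_collapsed`: a rank-deficient (collapsed) $Z$ makes the log-det term small, which is why adding it as a regularizer discourages collapse.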
As the discussion phase is about to close, we are very much looking forward to hearing from you about any further feedback. We will be very happy to clarify further concerns (if any).\\n\\nThank you for your time.\\n\\nBest regards,\\nAuthors\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThank you for your time in reviewing our paper. We have updated our paper by adding discussions on limitations and failure cases in Appendix C and supplying more experimental results averaged over 10 trials in Tables 1 and 2. We hope you might find our responses and revisions satisfactory, and we sincerely hope you will reconsider your rating based on our responses and the updated paper. Thank you for your time and effort!\\n\\nBest Regards,\\n\\nAuthors\"}", "{\"summary\": \"This paper proposes a Principled fRamewOrk for Deep Subspace Clustering (PRO-DSC), which is designed to learn structured representations and self-expressive coefficients in a unified manner. First, PRO-DSC incorporates an effective regularization into the self-expressive model to prevent the catastrophic representation collapse with theoretical justification. Second, PRO-DSC demonstrated that it is able to learn structured representations that form a desirable UoS structure, and also developed an efficient implementation based on reparameterization and differential programming. Comprehensive experiments verify the superiority of the proposed model.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.The paper is well-written and technically sound.\\n2.The experiments are comprehensive.\", \"weaknesses\": \"1.What are the limitations and failed cases of the proposed method? Some discussion needed.\\n2.There may be insufficient implementation details provided, hindering reproducibility of the study and making it challenging for other researchers to replicate the results. Such as, the results for PRO-DSC are averaged over three trials (with \\u00b1std), what about other methods? 
Their results are the best or mean? As far as I know, some methods like k-means and SC are sensitive to the initialization; are their results recorded as the best or the mean over runs with different initializations? A fair experimental setting is necessary. \\n3.The subspace description coefficients and manifold parts provided in the article are based on existing research results and lack sufficient innovation.\", \"questions\": \"1.What are the limitations and failed cases of the proposed method? Some discussion needed.\\nThere may be insufficient implementation details provided, hindering reproducibility of the study and making it challenging for other researchers to replicate the results. Such as, the results for PRO-DSC are averaged over three trials (with \\u00b1std), what about other methods? Their results are the best or mean? As far as I know, some methods like k-means and SC are sensitive to the initialization; are their results recorded as the best or the mean over runs with different initializations? A fair experimental setting is necessary.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper studies the deep subspace clustering problem. The general framework of the existing algorithms suffers from feature collapse and lacks a theoretical guarantee to learn the desired UoS representation. This paper presents a principled framework for deep subspace clustering (PRO-DSC), which is designed to learn structured representations and self-expressive coefficients in a unified manner. The motivation is clear, and the experimental performance of the proposed model is also good.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1.\\tThe motivation is clear.\\n2.\\tThe proposed method has strong theoretical support. 
\\n3.\\tThe problem that needs to be solved is important, and the proposed method is reasonable.\", \"weaknesses\": \"In Eq. (4), it needs to be clarified why the logdet term is adopted. Is it only used to solve the representation collapse problem? Are there any other methods that can solve this collapse problem? More importantly, what are the underlying physical meanings of this term?\\n\\nIn Table 2, although the proposed methods seem to have the best performance. Some really SOTA deep clustering methods are not compared, like \\n[1] Learning Representation for Clustering Via Prototype Scattering and Positive Sampling, 2023 TPAMI.\\n[2] Towards Calibrated Deep Clustering Network, 2024 Arxiv.\\n\\nAblation studies should be performed to check the effectiveness of each component of the proposed method.\", \"questions\": \"See the weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"1. Regarding to the Novel Contributions\", \"comment\": \"We appreciate the reviewer for reading our submission carefully and, for providing some critical yet valuable comments on the novelty of our work. However, we do not agree with the reviewer that our work is incremental.\\n\\nIn the past decade, subspace clustering has been substantially put forward owning to: \\n- a) self-expressive (SE) model ; \\n- b) correctness guarantees for SE coefficients to satisfy a subspace-preserving property with respect to different regularization terms, and \\n- c) scalable algorithms to solve SE model. 
Though there are many heuristic methods to address the subspace clustering problem, people usually attribute the substantial advancements in subspace clustering to clear algorithms with solid correctness (theoretical) guarantees.\\n\\nIn the era of deep learning, developing a reasonable deep framework for subspace clustering has also attracted a lot of research attention in the past few years. One of the most popular and elegant frameworks for deep subspace clustering is DSCNet (Ji et al., NeurIPS'17), which encapsulates an SE model (called an SE layer) in the \\\"middle\\\" of a stacked convolutional auto-encoder network (SCAE) and is trained by using a combination of reconstruction loss, SE loss and an $\\\\ell_1$ or $\\\\ell_2$ regularizer. Although remarkable clustering accuracy has been reported on four image datasets (especially on Extended Yale B), there is no clear evidence to demonstrate that the SCAE in DSCNet is able to amend the input data to align with a union of subspaces (UoS). More critically, as theoretically revealed by Haeffele et al. (ICLR'21), the optimal solution of DSCNet (which is called SE based Deep Subspace Clustering, SEDSC) suffers from catastrophic feature collapse, which thus leads to seriously degraded performance. Unfortunately, Haeffele et al. (ICLR'21) did not provide a reasonable framework for deep subspace clustering. \\n\\nAn interesting recent work is MLC (Ding et al. ICCV'23), which is inspired by MCR2 (Yu et al. NeurIPS'20). In MLC, the loss function is modified from MCR2 by changing the membership matrix in MCR2 to a doubly stochastic affinity matrix, and adding a (negative) entropy regularization on the affinity. However, there is no theoretical justification for the optimal solution and the algorithm is sensitive to the initialization. \\n\\nThe goal of our work is to provide a reasonable framework for deep subspace clustering. 
To keep clarity and simplicity, we devise a principled solution that enables SEDSC to avoid catastrophic feature collapse and to learn representations with the desired structure, e.g., a union of subspaces. We argue that: \\n- a) incorporating the total coding rate term as defined in Eq. (3) into the SEDSC model as a regularizer has not appeared in the literature, and thus it is novel; and \\n- b) providing theoretical justification to verify the effect of incorporating such a regularizer to prevent the catastrophic feature collapse is a novel contribution. \\n\\nIncorporating the total coding rate term into SEDSC is natural and elegant, and it has NOT been investigated in the literature yet. Moreover, it is NOT an arbitrary \\\"combining subspace clustering loss and Eq. (3)\\\". We do not think that \\\"many different subspace clustering loss functions\\\" can be easily combined and supported by solid theoretical justification. Actually, for our PRO-DSC, as a general framework, we have investigated at least four different types of regularizers on the self-expressive coefficients (see Ablation Studies in Table 4).\\n\\nIn the revised paper, we have added a discussion to connect and compare MLC (Ding et al. ICCV '23) at the end of the Related Work. We thank the reviewer for the careful reading and the suggestions. We are very glad to discuss and reply to any further questions.\"}", "{\"title\": \"Kindly invite you to check our point-to-point replies\", \"comment\": \"We would like to thank the reviewer for the time and effort in reviewing our paper. We have carefully addressed the comments and the concerns in detail. We hope you might find our responses satisfactory. Since the discussion phase is drawing to a close, we are very much looking forward to hearing from you about any further feedback. 
We will be very happy to clarify further concerns (if any).\\n\\nThank you for your time.\\n\\nBest regards, \\n\\nAuthors\"}", "{\"comment\": \"We appreciate the reviewer for the valuable time and effort in reviewing our paper.\\n\\nWe have clarified the significance of our work, justified the existence of the optimal dual variable, reorganized Theorem 3, added extra experiments, and revised our paper accordingly. \\n\\nWe hope you might find our responses and revisions satisfactory, and sincerely hope you will reconsider your rating based on our clarification in responses and the revised paper. Thank you for your time.\\n\\nBest Regards,\\n\\nAuthors\"}", "{\"metareview\": \"This paper presents a principled framework for deep subspace clustering (PRO-DSC), which is designed to learn structured representations and self-expressive coefficients in a unified manner. First, PRO-DSC incorporates an effective regularization to prevent catastrophic representation collapse. Second, PRO-DSC learns structured representations that form a desirable UoS structure. Both theoretical and experimental analyses demonstrate the effectiveness of the proposed method.\\n\\nHowever, I feel that some contributions might be overclaimed. First, the hyper-parameter configuration of PRO-DSC is elaborately tuned on different datasets, which plays an important role in its performance. Second, though the authors claim PRO-DSC could be trained from scratch, it still relies on a powerful representation learning baseline BYOL, which is also elaborately tuned for different datasets.\\n\\nBased on the reviewers' comments, I decided to accept this paper. 
Nevertheless, the authors must carefully check the claims about the superior performance and ability to \\\"train from scratch.\\\" Also, the potential limitation in the robustness of PRO-DSC should be explicitly discussed.\", \"additional_comments_on_reviewer_discussion\": \"In general, the authors have addressed the reviewers' major concerns about the novelty, theoretical analyses, and experimental comparisons of this work. Two reviewers did not respond to the authors, while the other two reviewers were satisfied with the authors' responses and raised their scores accordingly.\"}", "{\"comment\": \"We would like to thank the reviewer for the valuable time and effort in reviewing our paper, and for taking the time to read our responses. We sincerely appreciate the reviewer for raising the rating. Thank you very much!\\n\\nBest Regards,\\n\\nAuthors\"}", "{\"title\": \"3. Regarding Ablation Studies\", \"comment\": \"We kindly remind the reviewer that the ablation studies were included in our previous submission in Table 4, where we evaluated the effectiveness of each component in our PRO-DSC framework and also evaluated the performance of using four different regularization terms on the self-expressive coefficients. For any question, we are very happy to discuss it further.\\n\\nFor clarity, we have also submitted a revised version of our submission. We hope that our point-to-point responses address the reviewer's questions well, and we are glad to answer any further questions regarding our submission.\"}", "{\"title\": \"3. 
Regarding the Fairness of Reported Results\", \"comment\": \"In Appendix B.1 of our submitted paper, we have provided the running details of EnSC, SSCOMP, DSCNet, and CPP.\\n\\n- For SENet, SCAN and EDESC, we adjust the hyper-parameters and repeat the experiments three times, reporting only the best results.\\n\\n- For TEMI, we directly cite the results from the paper.\\n\\n- For our PRO-DSC, we repeat the experiments for three trials and report the average results (with standard deviation).\\n\\nNonetheless, we overlooked reporting the running details of the two classical methods (i.e., k-means and spectral clustering).\\n\\n- For k-means and spectral clustering (including when spectral clustering is used as the final step in subspace clustering), we repeat the clustering 10 times with different random initializations (by setting n$\\\\_$init=10 in scikit-learn) and report the best results.\\n\\nMoreover, during the rebuttal, we conducted more experiments on both the CLIP features and training from scratch, reporting the average results with standard deviation over 10 trials; these results have been updated in the revised paper (see the updated results in Tables 1 and 2, which are highlighted in blue). Here, we also list the updated results (ACC% / NMI%) in the following tables for reference. 
\\n\\n- **Updated experimental results on the CLIP feature in Table 1**\\n| Datasets | CIFAR10 | CIFAR100 | CIFAR20 | TinyImageNet | ImgNetDogs-15 | ImgNet-1k |\\n| -------------------- | ------------------- | ------------------- | ------------------- | ------------------- | ------------------- | ------------------- |\\n| **repeat 3 trials** | 97.2\\u00b10.2 / 93.2\\u00b10.2 | 77.2\\u00b10.4 / 82.4\\u00b10.3 | 71.4\\u00b11.3 / 73.3\\u00b10.4 | 69.4\\u00b11.4 / 80.4\\u00b11.1 | 83.6\\u00b10.1 / 81.5\\u00b10.4 | 65.1\\u00b10.3 / 83.6\\u00b10.2 |\\n| **repeat 10 trials** | 97.2\\u00b10.2 / 92.8\\u00b10.4 | 77.3\\u00b11.0 / 82.4\\u00b10.5 | 71.6\\u00b11.2 / 73.2\\u00b10.5 | 69.8\\u00b11.1 / 80.5\\u00b10.7 | 84.0\\u00b10.6 / 81.2\\u00b10.8 | 65.0\\u00b11.2 / 83.4\\u00b10.6 |\\n\\n- **Updated experimental results of training from scratch in Table 2**\\n| Datasets | CIFAR10 | CIFAR100 | CIFAR20 | TinyImageNet | ImgNetDogs-15 |\\n| -------------------- | ----------------- | ----------------- | ----------------- | ----------------- | ----------------- |\\n| **repeat 3 trials** | 92.7\\u00b10.1 / 85.9\\u00b10.2 | 59.0\\u00b10.2 / 60.4\\u00b10.1 | 56.2\\u00b10.6 / 67.0\\u00b10.2 | 31.2\\u00b10.2 / 47.0\\u00b10.2 | 74.6\\u00b10.2 / 70.2\\u00b10.1 |\\n| **repeat 10 trials** | 93.0\\u00b10.6 / 86.5\\u00b10.2 | 58.3\\u00b10.9 / 60.1\\u00b10.6 | 56.3\\u00b10.6 / 66.7\\u00b11.0 | 31.1\\u00b10.3 / 46.0\\u00b11.0 | 74.1\\u00b10.5 / 69.5\\u00b10.6 |\"}", "{\"title\": \"Kindly invite you to check our replies\", \"comment\": \"We thank the reviewer for the valuable time and the effort in reviewing our paper. Since the discussion phase is drawing to a close, we are very much looking forward to hearing from you about any further feedback. We hope you might find our responses and revisions satisfactory, and sincerely hope you will reconsider your rating based on our clarification and the revised paper. 
Thank you for your time!\\n\\nBest Regards,\\n\\nAuthors\"}", "{\"summary\": \"The paper studies deep subspace clustering. Existing deep subspace clustering methods suffer from feature collapse, where learned representations collapse into subspaces with dimensions much lower than the ambient space. The paper proposes to add a loss term that alleviates this issue, which is backed up with some theoretical study and experiments on real-world datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Experiments on synthetic and real-world datasets are comprehensive, and the reviewer appreciates that. Synthetic experiments: what happens when you add an additional subspace? Case 1: the subspace is 0-dimensional (points centered around a centroid). Case 2: you add another curve like the one you have around the great circle, but now put it vertically. The subspaces will be intersecting. I am not expecting the methods to outperform anything; this is just for understanding the method better.\", \"weaknesses\": \"1. The reviewer is concerned with the novelty of the paper. The main motivation of the paper is the observation from Haeffele et al. 2021 that if one learns a representation and applies a subspace clustering type of loss on the representations, then the representations tend to collapse. Therefore, the paper proposes to incorporate an additional term (equation 3) into the loss to prevent collapse. This theme of combining subspace clustering loss and (equation 3) has been explored before: in Ding et al. 2023, they used a combination of (equation 3) and the subspace clustering loss in Ma et al. 2007. One could go ahead and try many different subspace clustering loss functions, but the contribution seems incremental a priori. If one reads the introduction of that paper, it appears that the motivation was rather similar to this one, but no discussion was given in the current paper.\\n2. 
The reviewer is also concerned with the theoretical contributions. \\n 1. Lemma 1 and its proof are not a contribution, as the paper clearly states they are from Haeffele et al. 2021.\\n 2. It is difficult to connect Theorem 1 with the main objective (equation 5). In particular, it is unclear at the optimality of (equation 5) whether (and why) the optimal Lagrangian multiplier nu satisfies the conditions in Theorem 1.\\n 3. Theorem 3: I do not quite understand the statement. \\n 1. First, a priori there might be multiple solutions to PRO-DSC. When you say \\u2018the\\u2019 optimal solution, what do you mean? Do you mean you have a sufficient condition, such that there exists \\u2018one\\u2019 optimal solution such that some ideal properties hold on this solution? Or do you mean \\u2018all\\u2019 optimal solutions?\\n 2. Second, I am a bit confused on Z, C vs Z^*, C^*. Gamma is defined to permute (or \\u2018align\\u2019) columns of Z, but on line 238 they are used to permute Z^*. Is Gamma a function of Z or Z^*? In the sufficient condition <(I-C)(I-C)^T, G-G^*>, should it be C^* instead? Or even a step back: how is C defined in Theorem 3?\\n 3. It is unclear how to interpret the sufficient condition <(I-C)(I-C)^T, G-G^*>, e.g., how does it connect with Z^* lying in a union of subspaces or C^* correctly connecting points only from the same subspace? It might strengthen the result a bit if there is a simple case (e.g., the subspaces are independent or orthogonal, points being uniformly spread within each subspace) where such conditions hold. 
\\n\\nI am in general happy to adjust my ratings based on whether the rebuttal addresses the comments.\", \"questions\": \"See above\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Kindly invite you to check our replies\", \"comment\": \"We thank the reviewer for the valuable time and the effort in reviewing our paper. Since the discussion phase is drawing to a close, we are very much looking forward to hearing from you about any further feedback. We hope you might find our responses and revisions satisfactory, and sincerely hope you will reconsider your rating based on our clarification and the revised paper. Thank you for your time!\\n\\nBest Regards,\\n\\nAuthors\"}", "{\"title\": \"1. Regarding the use of the $\\\\log \\\\det(\\\\cdot)$ term\", \"comment\": \"We highly appreciate the reviewer for recognizing our work as a reasonable method with clear motivation and strong theoretical support to address an important problem, for the supportive rating, and for the insightful and constructive comments.\\n\\n### **1.1 Why is $\\\\log \\\\det(\\\\cdot)$ introduced?**\\n\\nThe reason for introducing the $\\\\log \\\\det(\\\\cdot)$ term is to address the catastrophic feature collapse issue in self-expressive model based deep subspace clustering (SEDSC), a longstanding issue that has hindered the development of deep subspace clustering methods within the self-expressive framework. Moreover, using the $\\\\log \\\\det(\\\\cdot)$ term also brings two desirable properties. \\n\\n- **Promoting the diversity of the learned representations.**\\n\\n In Theorem 2, we prove that the representation $Z$ learned by our PRO-DSC achieves the maximum rank, implying that the learned features make the best use of the entire feature space, thereby preserving as much information as possible. The theoretical result is empirically verified (please see the first column in Fig. B.3). 
As observed, the singular values of the representations learned by PRO-DSC are larger than CLIP's, implying that the features of our PRO-DSC are more diverse and uniform in the feature space. \\n\\n- **Promoting orthogonality among different subspaces.**\\n\\n In Theorem 3, we show that PRO-DSC can promote orthogonality among different subspaces, which implies that representations of different clusters are well separated. The theoretical result is also supported empirically (Fig. 3 and Fig. B.2). \\n\\n Notably, the $\\\\log \\\\det(\\\\cdot)$ term plays an indispensable role in both properties. For a strict analysis, we refer the reviewer to the proofs of Theorem 2 and Theorem 3. Briefly, it serves as a core part of the objective function of the water-filling problem in the proof of Theorem 2, and it ensures the convexity of the objective function, which is used in the proof of Theorem 3.\\n\\n### **1.2 Other methods to solve the collapse problem? What is and why logdet()? Physical meanings?**\\n\\n- **About other methods to solve the collapse problem?** \\n\\n The catastrophic feature collapse problem implies that the learned representations occupy merely a tiny subspace. Intuitively, we believe that other methods which can increase the volume of the representation or the dimensionality of the subspace spanned by the representation could potentially solve the collapse issue. Examples include the rank of $\\\\boldsymbol{Z}$ and the nuclear norm of $\\\\boldsymbol{Z}$, i.e., $\\\\Vert \\\\boldsymbol{Z}\\\\Vert_* = \\\\sum_{i=1}^N \\\\sigma_{\\\\boldsymbol{Z}}^{(i)}$, which is the sum of the singular values of the subspace spanned by the learned representations. \\n\\n- **Why is $\\\\log \\\\det(\\\\cdot)$ used?**\\n\\n The rank is the direct measure of the dimensionality of the subspace of the learned features, but it is non-differentiable and difficult to optimize. 
In practice, there are two common surrogates for the rank: the nuclear norm, which is defined as $\\\\Vert \\\\boldsymbol{Z}\\\\Vert_* := \\\\sum_{i=1}^N \\\\sigma_{\\\\boldsymbol{Z}}^{(i)}$, and \\n $\\\\frac{1}{2}\\\\log\\\\det(\\\\boldsymbol{I}+\\\\alpha \\\\boldsymbol{Z}\\\\boldsymbol{Z}^\\\\top)=\\\\frac{1}{2}\\\\sum_{i=1}^N\\\\log(1+\\\\alpha (\\\\sigma_{\\\\boldsymbol{Z}}^{(i)})^2)$. \\n The reason to use $\\\\log \\\\det(\\\\cdot)$ rather than the nuclear norm is that $\\\\log \\\\det(\\\\cdot)$ is a differentiable surrogate of the rank that is tighter than the nuclear norm (see Fazel et al., 2003). \\n\\n- **Physical meanings of $\\\\log \\\\det(\\\\cdot)$?**\\n\\n In (Ma et al. TPAMI 2007), the meaning of $\\\\log\\\\det(\\\\boldsymbol{I}+\\\\alpha \\\\boldsymbol{Z}\\\\boldsymbol{Z}^\\\\top)$ is explained via sphere packing from the perspective of information theory. Suppose that the representations are distorted by random noise, i.e., $\\\\hat z_i = z_i +\\\\epsilon_i$, where $\\\\epsilon_i \\\\sim \\\\mathcal{N}(0,\\\\frac{\\\\varepsilon^2}{d}\\\\boldsymbol{I})$; then the volume $\\\\text{vol}(\\\\boldsymbol{\\\\hat Z})$ of the region spanned by the representation vectors $\\\\{z_i\\\\}$ and the volume of noise $\\\\text{vol}(\\\\boldsymbol{\\\\epsilon})$ are defined respectively as follows:\\n$$\\n\\\\begin{align}\\n \\\\text{vol}(\\\\boldsymbol{\\\\hat Z}) & \\\\propto \\\\sqrt{\\\\det(\\\\hat \\\\Sigma_\\\\boldsymbol{Z})},\\\\\\\\ \\\\\\\\ \\\\\\\\ ~\\\\hat \\\\Sigma_\\\\boldsymbol{Z} = \\\\mathbb{E}_{\\\\boldsymbol{\\\\epsilon}} \\\\left[\\\\frac{1}{N}\\\\sum_{i=1}^N \\\\hat z_i z_i^\\\\top\\\\right] = \\\\frac{\\\\varepsilon^2}{d}\\\\boldsymbol{I} + \\\\frac{1}{N}\\\\boldsymbol{Z}\\\\boldsymbol{Z}^\\\\top, \\\\\\\\\\\\\\\\\\n \\\\text{vol}(\\\\boldsymbol{\\\\epsilon}) & \\\\propto \\\\sqrt{\\\\det(\\\\frac{\\\\varepsilon^2}{d}\\\\boldsymbol{I})}.\\n\\\\end{align}\\n$$\\n\\nTherefore, we see that $\\\\log \\\\det(\\\\cdot)$, as a 
measure of the size of the subspace spanned by the representations, approximates the coding length given by noisy sphere packing with binary coding, that is:\\n$$\\n\\\\begin{align}\\n R(\\\\boldsymbol{Z}) = \\\\log_2(\\\\text{\\\\\\\\# of spheres}) \\\\propto \\\\log_2\\\\frac{\\\\text{vol}(\\\\boldsymbol{\\\\hat Z})}{\\\\text{vol}(\\\\boldsymbol{\\\\epsilon})} = \\\\frac{1}{2}\\\\log_2{\\\\det(\\\\boldsymbol{I}+\\\\frac{d}{N\\\\varepsilon^2}\\\\boldsymbol{Z}\\\\boldsymbol{Z}^\\\\top)}.\\n \\\\end{align}\\n$$\"}", "{\"title\": \"5. Why use DSCNet, rather than SENet, on synthetic data for comparison?\", \"comment\": \"We note that while SENet is implemented via a query-key network, it is not a deep subspace clustering method, because the query-key network is used to learn the self-expressive coefficients rather than to learn the representation of the input data.\\n\\nDSCNet (Ji et al., NeurIPS'17) is designed by encapsulating a self-expressive (SE) model (called an SE layer) in the ``middle'' of a stacked convolutional auto-encoder network (SCAE) and is trained by using a combination of reconstruction loss and SE loss with an $\\\\ell_1$ or $\\\\ell_2$ regularizer. While the framework of DSCNet is clear and remarkable clustering accuracy has been reported on four image datasets (especially on Extended Yale B), there is no clear evidence to demonstrate that the SCAE is able to amend the input data to align with a union of subspaces (UoS). Even worse, as theoretically justified by (Haeffele et al. ICLR'21), the optimal solution of DSCNet suffers from a catastrophic feature collapse---which is also revealed empirically in the experiments on synthetic data in our submission.\"}" ] }
7pIxS9m283
WISE-GNN: Enhancing GNNs with Wise Embedding and Topological Encoding
[ "Md Joshem Uddin", "Astrit Tola", "Cuneyt Gurcan Akcora", "Baris Coskunuzer" ]
Graph Neural Networks (GNNs) have emerged as a powerful framework for graph representation learning. However, they often struggle to capture long-range dependencies between distant nodes, leading to suboptimal performance in tasks such as node classification, particularly in heterophilic graphs. Challenges like oversmoothing, oversquashing, and underreaching intensify the problem, limiting GNN effectiveness in such settings. In this paper, we introduce *WISE-GNN*, a novel framework designed to address these limitations. Our approach enhances any GNN model by incorporating *Wise-embeddings*, which capture attribute proximity and similarities among distant nodes, thereby improving the representation of nodes in both homophilic and heterophilic graphs. Additionally, we propose a topological module that can be smoothly integrated into any GNN model, further enriching node representations by incorporating the topological signatures of node neighborhoods. Comprehensive experiments across various GNN architectures show that WISE-GNN delivers significant improvements in node classification tasks, achieving mean accuracy gains of up to 14% and 23% on benchmark datasets in homophilic and heterophilic settings, respectively. Moreover, WISE-GNN enhances the performance of various GNN architectures, allowing even standard GNNs to outperform SOTA baselines on benchmark datasets.
[ "graph representation learning", "graph neural networks", "node classification" ]
https://openreview.net/pdf?id=7pIxS9m283
https://openreview.net/forum?id=7pIxS9m283
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vsV6d2lWcs", "aEpBiWDp2P", "PZxYdauVll", "KjlGomTkai", "CZ7DILexuO" ], "note_type": [ "comment", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1731685362077, 1730542242363, 1730714325265, 1730609947842, 1730744898792 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission3411/Authors" ], [ "ICLR.cc/2025/Conference/Submission3411/Reviewer_7h2H" ], [ "ICLR.cc/2025/Conference/Submission3411/Reviewer_7WKa" ], [ "ICLR.cc/2025/Conference/Submission3411/Reviewer_oVsh" ], [ "ICLR.cc/2025/Conference/Submission3411/Reviewer_wrKp" ] ], "structured_content_str": [ "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We thank the reviewers for their time and valuable feedback. We will incorporate your suggestions and comments to improve the paper and submit it to another venue.\"}", "{\"summary\": \"This paper introduces a framework called **WISE-GNN**, designed to enhance the performance of Graph Neural Networks (GNNs) in node classification tasks, particularly in **heterophilic graphs**. WISE-GNN improves node representation by incorporating \\\"Wise embeddings\\\" and a topology encoding module. These enhancements address the challenges faced by GNNs in capturing long-range dependencies between distant nodes.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"The design of WISE-GNN is model-agnostic, allowing it to be easily integrated with any existing GNN architecture.\"], \"weaknesses\": \"### **Method**\\n1. The wise-gnn obtains new features by measuring the distance between nodes and multiple cluster centers, making the representations of similar initial nodes more alike. Node representations are updated through a message-passing mechanism. 
If a node has a high degree of heterophily, the message-passing approach will still influence the final node representation.\\n \\n2. The introduction of the topology module does not effectively interact with the wise embedding, but rather functions as a simple combination.\\n\\n### **Experiment**\\n\\n1. The datasets used in the paper are too small.\\n2. The method lacks comparison with strong baselines for heterophilic graphs.\\n3. The performance of GCN on Cora is too low on the public split.\", \"questions\": [\"Discuss the advantages of WISE-GNN compared to Geom-GCN [1], which utilizes a method for obtaining structural neighbors from a distance, as Geom-GCN can overcome the limitations of message-passing.\", \"Is there a better way to combine the representations in the topology module?\", \"How does WISE-GNN perform on large graphs, such as Ogbn-Arxiv and Ogbn-Products?\", \"As pointed out in paper [2], the heterophilic graph (Chameleon) has an information leakage issue. How does WISE-GNN perform on currently widely used (larger and more diverse) heterophilic graphs [2]?\", \"Can you compare it with strong heterophilic graph baselines, such as ACM-GCN [3], GREAD [4], M2M-GNN [5], etc.?\", \"I would appreciate it if you could explain the results of GCN on Cora.\", \"[1] Geom-GCN: Geometric Graph Convolutional Networks, ICLR 2020\", \"[2] A critical look at the evaluation of GNNs under heterophily: Are we really making progress? 
NeurIPS 2023\", \"[3] Revisiting Heterophily For Graph Neural Networks, NeurIPS 2022\", \"[4]GREAD: Graph Neural Reaction-Diffusion Networks, ICML 2023\", \"[5] Sign is Not a Remedy: Multiset-to-Multiset Message Passing for Learning on Heterophilic Graphs, ICML 2024\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents WISE-GNN, a framework designed to address the limitations of GNNs in capturing long-range dependencies, particularly in heterophilic graphs, by incorporating Wise-embeddings and a topological module.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The paper is generally clear.\\n\\n Promising performances have been achieved.\", \"weaknesses\": \"1. The novelty is limited considering that class landmark (or prototypes) has been extensively used in node representation learning and clustering in the literature.\\n2. The wise embedding seem to be handcrafted. This could be time-consuming. The authors did not discuss the runtime of the method.\\n3. It is unclear why wise embedding is used for GNN input while the betti vector is processed through MLP.\\n4. The Chameleon and Squirrel datasets used in the study are known to be problematic[1]. It is recommended to use the filtered versions of these datasets to ensure more reliable results.\\n5. The heterophilous datasets used for evaluation are all relatively small. To validate the claims more robustly, it is essential to conduct experiments on large-scale heterophilous datasets[1][2].\\n6. More recent heterophilous baselines are needed for performance comparison.\\n7. Reproducibility is poor. No code released, and no hyperparameters specified.\\n\\n\\nReferences\\n\\n[1] Platonov, O., Kuznedelev, D., Diskin, M., Babenko, A., & Prokhorenkova, L. (2023). A critical look at the evaluation of GNNs under heterophily: Are we really making progress?. 
arXiv preprint arXiv:2302.11640.\\n\\n[2] Derek Lim, Felix Hohne, Xiuyu Li, Sijia Linda Huang, Vaishnavi Gupta, Omkar Bhalerao, and Ser Nam Lim. Large scale learning on non-homophilous graphs: New benchmarks and strong simple methods. Advances in Neural Information Processing Systems, 34, 2021.\", \"questions\": \"Please see weaknesses.\", \"flag_for_ethics_review\": \"['Yes, Research integrity issues (e.g., plagiarism, dual submission)']\", \"details_of_ethics_concerns\": \"I suspect that the submission currently under review may overlap significantly with a paper submitted to AAAI, titled \\u201cClassContrast: Bridging the Spatial and Contextual Gaps for Node Representations\\u201d.https://arxiv.org/pdf/2410.02158\\n\\nUpon comparing the two papers, it appears that approximately 50% of the content overlaps. Specifically, both papers propose the use of class-wise landmarks and leverage the distance between individual nodes and these landmarks as additional node embeddings. This concept seems to be a major contribution in both works, raising the concern of a potential dual submission.\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces WISE-GNN, a framework aimed at improving GNN performance in capturing long-range dependencies on graphs, particularly in heterophilic settings. This is achieved through two main components: Wise embeddings, which capture node attribute similarities to class-specific landmarks, and a Topological Module based on persistent homology to encode local subgraph structures.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. WISE-GNN addresses both attribute and topological aspects of graph data, aiming for a more comprehensive representation that could potentially aid tasks in complex graph structures.\\n2. 
The framework is model-agnostic, making it adaptable to various GNN architectures with minimal modifications.\", \"weaknesses\": \"1. Unclear Objective Function: The objective function and the overall process for integrating Wise embeddings into the GNN model are insufficiently explained, which hinders understanding of how these embeddings work in the training framework.\\n2. Limited Motivation for Wise Embeddings: While capturing information from distant nodes is valuable, the use of classWISE benchmarks as an effective means of encoding this information lacks sufficient justification. It\\u2019s unclear why this specific method would be more beneficial than alternative strategies, such as clustering approaches, which may offer richer structural context.\\n3. Restricted Novelty: The Topological Module is primarily adopted from established work in persistent homology, offering limited originality in its formulation or integration. The Wise embedding concept also appears somewhat incremental rather than ground-breaking.\\n4. Experimental Ambiguity: The experimental setup and results need more clarity, particularly concerning the T-H2GCN performance and whether the results fully validate the claimed improvements. Additionally, comparisons with clustering-based methods would strengthen the case for Wise embeddings as a distinct improvement.\", \"questions\": \"1. Could the authors clarify the objective function and explain how Wise embeddings interact with GNN updates during training?\\n2. Why were Wise embeddings, rather than more conventional clustering techniques, chosen to represent distant node relationships? Would clustering provide richer information than distance measures alone?\\n3. 
Can the authors provide a more detailed explanation of the experimental setup and discuss specific findings, such as the T-H2GCN performance, to better validate their approach?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper aims to address the limitations of graph neural networks to improve the performance of graph neural network on heterophilic and homophilic graphs. Specifically, a wise embedding is proposed to encode the topological information to benefit the classification with different GNN architectures. Experiments are conducted on several small datasets.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"1. The background is introduced clearly.\\n2. The results of visualization are interesting.\", \"weaknesses\": \"1. The motivation of this paper is very unclear. In the abstract and introduction, the authors mention that they want to address the issues of GNNs in oversmoothing, oversquashing, and underreaching. However, what are these problems are not introduced in detail. In this work, a method of topology embedding is proposed. It's very unclear why this can solve these many problems of GNNs. Moreover, in the experiments, the authors majorly evaluate the performance on heterophilic graphs, which make the objective of this work more confusing.\\n\\n2. The technical contributions are limited. The core idea of this method is to learn a topology embedding to benefit the classification with GNNs. Similar ideas have already been well explored. For example, LinkX utilizes an MLP to learn the topology embedding from the adjacency matrix. Position embeddings are also clearly investigated for graph transformer [1]. \\n\\n3. The experiments are not convincing. For example, in table 2, the GCN only has 59% accuracy, which is in conflict with the widely reported results (around 72%). 
Hence, the improvements are not convincing. In addition, experiments are only conducted on several small datasets which only contain several thousand nodes.\\n\\n4. The paper is poorly written. The details of building the wise embedding are difficult to follow. Why this method can be effective for heterophilic graphs is not discussed. The paper is not cited in the right way, which makes the paper very difficult to follow.\\n\\n[1] Ying, Chengxuan, et al. \\\"Do transformers really perform badly for graph representation?.\\\" Advances in neural information processing systems 34 (2021): 28877-28888.\", \"questions\": \"Please refer to the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
7pDI74iOyu
On the Importance of Language-driven Representation Learning for Heterogeneous Federated Learning
[ "Yunlu Yan", "Chun-Mei Feng", "Wangmeng Zuo", "Salman Khan", "Yong Liu", "Lei Zhu" ]
Non-Independent and Identically Distributed (Non-IID) training data significantly challenge federated learning (FL), impairing the performance of the global model in distributed frameworks. Inspired by the superior performance and generalizability of language-driven representation learning in centralized settings, we explore its potential to enhance FL for handling non-IID data. Specifically, this paper introduces FedGLCL, a novel language-driven FL framework for image-text learning that uniquely integrates global language and local image features through contrastive learning, offering a new approach to tackle non-IID data in FL. FedGLCL redefines FL by avoiding separate local training models for each client. Instead, it uses contrastive learning to harmonize local image features with global textual data, enabling uniform feature learning across different local models. The utilization of a pre-trained text encoder in FedGLCL serves a dual purpose: it not only reduces the variance in local feature representations within FL by providing a stable and rich language context but also aids in mitigating overfitting, particularly to majority classes, by leveraging broad linguistic knowledge. Extensive experiments show that FedGLCL significantly outperforms state-of-the-art FL algorithms across different non-IID scenarios.
[ "federated learning", "language-driven representation learning", "data heterogeneity" ]
Accept (Poster)
https://openreview.net/pdf?id=7pDI74iOyu
https://openreview.net/forum?id=7pDI74iOyu
ICLR.cc/2025/Conference
2025
{ "note_id": [ "v0bzpSkEu7", "mstihje8wI", "evdcNX3kLo", "aFImMrX7o9", "a6ZKC8tTOJ", "WiGn6IjxRK", "VwLSMhlrHJ", "VltvC62Est", "U4kYs7qwmR", "Jc5vh0zXJx", "IvhVfNxcs6", "Bg2GuOhiUm", "9oqu0FcSzb", "7JNY5ynkJn", "46GRRM93Ju", "40h1RNt08F", "2x0nkVgHCO", "0Nio91ZTBM" ], "note_type": [ "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_review", "official_comment", "official_review", "official_review", "official_comment", "official_review", "official_comment" ], "note_created": [ 1732421920269, 1732432955921, 1734717064425, 1732466903348, 1732108925283, 1732469251005, 1732108352126, 1732108265320, 1732108528598, 1732108374310, 1737523737436, 1730717223173, 1732109020526, 1730723265676, 1730441217385, 1732108555342, 1730268801965, 1732471670894 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5988/Reviewer_Q1wc" ], [ "ICLR.cc/2025/Conference/Submission5988/Authors" ], [ "ICLR.cc/2025/Conference/Submission5988/Area_Chair_a9vB" ], [ "ICLR.cc/2025/Conference/Submission5988/Authors" ], [ "ICLR.cc/2025/Conference/Submission5988/Authors" ], [ "ICLR.cc/2025/Conference/Submission5988/Reviewer_yM3c" ], [ "ICLR.cc/2025/Conference/Submission5988/Authors" ], [ "ICLR.cc/2025/Conference/Submission5988/Authors" ], [ "ICLR.cc/2025/Conference/Submission5988/Authors" ], [ "ICLR.cc/2025/Conference/Submission5988/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5988/Reviewer_bW7e" ], [ "ICLR.cc/2025/Conference/Submission5988/Authors" ], [ "ICLR.cc/2025/Conference/Submission5988/Reviewer_yM3c" ], [ "ICLR.cc/2025/Conference/Submission5988/Reviewer_4piz" ], [ "ICLR.cc/2025/Conference/Submission5988/Authors" ], [ "ICLR.cc/2025/Conference/Submission5988/Reviewer_Q1wc" ], [ "ICLR.cc/2025/Conference/Submission5988/Authors" ] ], "structured_content_str": [ "{\"comment\": 
\"Thanks for your response. I raise my rating to 6.\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"Thanks for your valuable time to respond to our feedback. We are very happy to see that your concerns have been fully addressed.\"}", "{\"metareview\": \"This paper introduces FedGLCL, a language-driven federated learning (FL) framework that integrates global language and local image features via contrastive learning to address challenges posed by non-IID data. By harmonizing local image features with a pre-trained text encoder, FedGLCL ensures uniform feature learning, reduces variance in local representations, and mitigates overfitting to majority classes. Experiments demonstrate its superior performance over state-of-the-art FL algorithms in diverse non-IID settings.\\n\\nThis is a borderline paper. The reviewers have brought up a few issues with this paper, though I think the authors have addressed many of the issues. In my opinion, it would be good to see this paper in ICLR. I would encourage the authors to go through the reviews carefully and address it in the next version.\", \"additional_comments_on_reviewer_discussion\": \"This is a borderline paper. The reviewers have brought up a few issues with this paper, though I think the authors have addressed many of the issues. In my opinion, it would be good to see this paper in ICLR. I would encourage the authors to go through the reviews carefully and address it in the next version.\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"As the discussion period draws to a close soon, we extend our sincere gratitude to you for the valuable time and insightful comments.\\n\\nWe understand that you mentioned, \\\"*I am not familiar with the FL literature*\\\", which may have raised some concerns. 
We sincerely hope our responses have effectively addressed them.\\n\\nIf you have any remaining questions or require further clarification, please do not hesitate to let us know, and we would be glad to provide further explanations.\\n\\nThank you again for your efforts in reviewing our work.\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"We thank the reviewer for his/her constructive comments and provide our point-wise replies as follows.\\n\\n> **Q1:** Which pretrained vision language models are used in the experiments?\\n\\nIn our default setting (Lines 363-372), we used a language model, Bert-base, as the text encoder rather than a vision-language model. Besides, the image backbone is trained from scratch without loading any pre-trained models. \\n\\n\\n> **Q2:** Comparison on different text encoders.\\n\\n|Text Encoder | CIFAR-100 | Office-Caltech-10 | DomainNet |\\n| :-----|:----: |:----: |:----:|\\n| Best baseline | 71.82 | 71.36 | 72.94|\\n|CLIP/ResNet-50| 73.30 | 77.26 | 76.41|\\n|CLIP/ViTB-16| 72.88 | 75.99| 76.09|\\n|Bert-base (default)| 73.52 | 77.35| 76.54 | \\n\\nThanks for your comments. We have already presented the results of this experiment in Appendix D.4 (Table 13). As shown, our method still outperformed the best baseline after using different text encoders. This demonstrates the robustness of FedGLCL to various text encoders.\\n\\n> **Q3:** Are the datasets used in the experiments part of the pre-training of VLMs? \\n\\nThe datasets used in the experiments are **NOT** used for pre-training. The used text encoder, Bert-base, is trained on a pure language corpus and does **NOT** contain any knowledge from the four used datasets (image datasets). Besides, the image backbone is trained from scratch without loading any pre-trained models. 
This indicates that the improvement in our method stems not from the pre-training stage, but from our novel learning mechanism.\"}", "{\"title\": \"Reviewer Response\", \"comment\": \"I think the authors have addressed my concerns. Therefore, I am raising my score from 5 to 6.\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"We thank the reviewer for his/her constructive comments and provide our point-wise replies as follows.\\n\\n> **Q1:** Contribution to the FL community.\\n\\nWe clarify the contribution of our work as follows.\\n\\n(1) **First attempt in FL:** To the best of our knowledge, FedGLCL is the first attempt to introduce language-driven representation learning into FL. Although language-driven representation learning is used in centralized settings [1, 2], we address an entirely different problem i.e., non-IID data in the FL setting.\\n\\n(2) **Label-driven vs. language-driven:** Existing FL methods typically adopt the label-driven training paradigm, leading to significant drift in local models when handling non-IID data. In this work, we found that using a global text embedding for supervision can effectively mitigate feature differences among local models and prevent overfitting, thus alleviating the non-IID issues in FL. This provides more insights and opens up a new prospect for addressing this challenge.\\n\\n(3) **Theoretical and empirical analyses:** Furthermore, we offer theoretical and empirical analyses to gain a deeper understanding of our approach. Extensive experiments show that our method consistently outperforms various state-of-the-art approaches across two different non-IID scenarios, demonstrating its effectiveness.\\n\\n\\n\\n> **Q2:** Add larger-scale datasets.\\n\\nThe four datasets used in the submitted paper are popular benchmarks in heterogeneous FL, which ensures a fair comparison with previous methods. 
To the best of our knowledge, in heterogeneous FL, no results have been reported on ImageNet, making it difficult to compare with competing methods. Following [3], we conduct experiments using another larger-scale dataset, Tiny-ImageNet [4], which has a similar data distribution to ImageNet. Detailed information about Tiny-ImageNet is provided below, which has been included in Table 5.\\n\\n|Property |Tiny-ImageNet|\\n|:-----|:----:|\\n|# of train samples| 100000 |\\n|# of test samples| 10000 |\\n|# of classes| 200 |\\n|Image size|(64, 64, 3) |\\n\\nWe use the same experimental setup as for CIFAR-100, and $\\\\beta$ is set to 0.5. The results are presented below. As shown, FedGLCL consistently achieves the best performance compared to all competing methods, further demonstrating the effectiveness of our method. The results have been included in the revision (Appendix D.7 Table 16).\\n\\n|Method |FedAvg|FedProx |FedNova|Scaffold| MOON| FedProto| FedLC |FedConcat |FedGLCL |\\n|:----:|:----:|:-----:|:----:|:-----:|:----:|:-----:|:----:|:-----:|:----:|\\n|Tiny-ImageNet| 47.54 |46.61 | 47.11 | 48.09 | 48.97 | 48.16 | 45.97 |49.24|51.05 | \\n\\n\\n\\n\\n> **Q3:** Emphasize the type of FedGLCL.\\n\\nOur work is not a type of pre-training. Instead, it leverages textual information to conduct supervised learning on specific visual recognition tasks. Therefore, it is a type of image-text learning. To distinguish it from text-only or image-only, we emphasize it in the abstract accordingly:\", \"line_017_018\": \"\\\"a novel language-driven FL framework that\\\" to \\\"a novel language-driven FL framework for image-text learning that\\\"\\n\\n\\n\\n> **Q4:** Difference with Lu et al., 2023, Guo et al., 2023, Shi et al., 2023. \\n\\nFedGLCL is fundamentally different from them, we explain the differences as follows.\\n\\n(1) They directly apply the well-trained CLIP model (including both the image and text encoders) as the core components for their tasks. 
Therefore, their methods are **highly dependent on** the CLIP model. Instead, FedGLCL does not require any modules from CLIP as it can use arbitrary language models as the global text encoder.\\n\\n(2) They use labels for supervision, which remains the traditional **label-driven** FL approach. FedGLCL is a **language-driven** FL method that aligns the local image feature space with the global text feature space through global text supervision.\\n\\n(3) Since they use image encoders of CLIP, they easily incur domain shift in visual tasks. In contrast, FedGLCL is robust to domain shift as it learns the visual encoder from scratch. For instance, the word \\\"cat\\\" holds the same semantics in both natural images and cartoon image tasks. However, for image backbones, the natural images and the cartoon images represent two entirely different data distributions.\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"Dear reviewers and meta-reviewers,\\n\\nWe appreciate all reviewers for their valuable comments and suggestions. 
We've revised our manuscript based on reviewers' comments as follows:\\n\\n(1) For Reviewer yM3c, we have added the results for Tiny-ImageNet in Table 16, with detailed information about the dataset provided in Table 5.\\n\\n(2) For Reviewer yM3c, we have revised the abstract (Line 017-018) to clarify the type of FedGLCL.\\n\\n(3) For Reviewer yM3c, we have revised our writing in Line 160.\\n\\n(4) For Reviewer bW7e, we have discussed the impact of language supervision in Appendix D.2 (Line 1069-1079).\\n\\n(5) For Reviewer bW7e, we have reported the standard errors for results in Table 17.\\n\\n(6) For Reviewer bW7e and Q1wc, we have added the results of the experiment using pre-trained image models and a new baseline, FedPCL, in Table 18.\\n\\n(7) For Reviewer Q1wc, we have added the results on medical image segmentation in Table 19.\\n\\n(8) For Reviewer Q1wc, we have added the results of our method with GloVe in Table 13.\\n\\nThe changes have been highlighted in **blue** in the revised paper. Please see below for our responses to each reviewer. If you have any further questions or suggestions, please feel free to share them on OpenReview.\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"We thank the reviewer for his/her constructive comments and provide our point-wise replies as follows.\\n\\n> **Q1:** Discussion about the impact of language supervision. \\n\\nSorry for the confusion. We clarify it as follows and make it clear in the revision (Line 1069-1079).\\n\\n(1) **Impact of language supervision:** Non-IID data leads to significant drift in local models. The key idea of FedGLCL is to align the local image feature space with the global text feature space, which benefits learning a consistent feature representation and prevents overfitting to the majority classes across different local models. \\n\\n(2) **Explanation of experimental results:** In FedGLCL, local image encoders are trained from scratch without any prior knowledge. 
Therefore, the critical factor is the consistency of text embeddings between training and testing. This ensures that the encoded text embeddings can effectively guide the learning process, regardless of whether the class names perfectly correspond to the images. While the proxy class names are not the real classes of images, they are still in the domain covered by the pre-trained text encoder. Consequently, the global text embeddings still provide meaningful semantic supervision for training. \\n\\n(3) **Success of linguistic knowledge:** This result highlights that the success of language supervision is not dependent on the precise semantic relationships between class names and images but rather on the overall structure and knowledge encoded in the global text embeddings. This not only offers effective supervision for training local image encoders but, more importantly, ensures the alignment of all local image feature spaces with the global text feature space for mitigating drift in local models. \\n\\n\\n\\n> **Q2:** Measuring the impact of FedGLCL-style training in centralized setting. \\n\\n(1) We present the results of FedAvg and FedGLCL on Office-Caltech-10 under both centralized and federated settings. The results show that FedGLCL achieves significantly greater performance improvements over FedAvg in the federated setting compared to the centralized setting. This demonstrates that the improvements of FedGLCL primarily arise from its ability to address the non-IID problem rather than solely from the contrastive learning approach.\\n\\n|Method| FedAvg (Centralized) | FedGLCL (Centralized)| FedAvg (Federated) |FedGLCL (Federated) |\\n| :-----|:----: |:-----:|:----: |:----: |\\n|Office-Caltech-10|83.32|85.55 | 60.25| 77.35 |\\n\\n(2) This paper investigates the issue of non-IID data across different clients in FL, which leads to significant drift in local models. 
In contrast, reference [1] explores the distributional differences between pre-training data and downstream task data. They are two entirely different problems. Therefore, our results cannot be directly compared or discussed in relation to the results of reference [1]. \\n\\n\\n\\n> **Q3:** Standard errors for results. \\n\\nThanks for recognizing the **large** improvement of our method. We conduct two additional trials using different random seeds and report the mean and standard deviation (std) across all three trials (mean $\\\\pm$ std). Furthermore, to evaluate the statistical significance of the performance improvements, we perform a paired t-test between the baseline and our method, reporting the corresponding p-value. The results of FedAvg, FedGLCL, and the two best baselines, i.e., FedConcat and FedFA, are presented below. It can be observed that the p-values for all baselines are **less than 0.05**, indicating the statistical significance of the performance improvements achieved by our method. These results have been included in the revision (Appendix D.8 Table 17).\\n\\n\\n|Method | CIFAR-100| p-value | Office-Caltech-10 | p-value| DomainNet | p-value |\\n| :-----|:----: |:----: |:----:|:----:|:----:|:----:|\\n|FedAvg| 66.64 $\\\\pm$ 0.14| 0.0016 | 60.57 $\\\\pm$ 0.85 | 0.0003 | 66.58 $\\\\pm$ 0.70 | 0.0007|\\n|FedConcat| 71.09 $\\\\pm$ 0.51 | 0.0138| - | - | -| - |\\n|FedFA| -| - |71.50 $\\\\pm$ 1.03 | 0.0026 | 72.27 $\\\\pm$ 0.72 | 0.0031 |\\n|FedGLCL | 74.10 $\\\\pm$ 0.41 |-| 77.84 $\\\\pm$ 0.96 | -| 77.13 $\\\\pm$ 0.53| - |\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"> **Q5:** what is \\\"the philosophy of language-driven representation learning\\\"?\\n\\nSorry for the confusion. It refers to the conceptual approach or theoretical framework that emphasizes the use of language as supervision in learning meaningful, transferable representations of data. 
The phrase \\\"the philosophy of\\\" may cause confusion, so we have removed it in the revision (Line 160).\\n\\n**References**\\n\\n[1] Jia, Chao, et al. \\\"Scaling up visual and vision-language representation learning with noisy text supervision.\\\" International conference on machine learning. PMLR, 2021.\\n\\n[2] Radford, Alec, et al. \\\"Learning transferable visual models from natural language supervision.\\\" International conference on machine learning. PMLR, 2021.\\n\\n[3] Li, Qinbin, Bingsheng He, and Dawn Song. \\\"Model-contrastive federated learning.\\\" Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2021.\\n\\n[4] https://www.kaggle.com/c/tiny-imagenet/overview.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"This paper proposes a novel approach, FedGLCL, to learn image models in non-IID federated settings. Their approach leverages a fixed frozen set of class embeddings (from a language model) and locally updates image representations in a CLIP-like contrastive manner. When tested on multiple variations of non-IID federated learning benchmarks (e.g. significant label shift/imbalance), this approach is shown to notably outperform other training/aggregation-based interventions without introducing additional computation/communication costs. Furthermore, the authors provide several additional ablations and theoretical analyses to deepen understanding of their results.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"**Originality and Significance.** Federated learning with non-IID client data is a well-motivated/challenging problem and the authors' approach, while simple, is applied in a novel and effective manner in this setting. 
As mentioned in Sec 5.4, this approach also importantly does not come with any decreases in efficiency.\\n\\n**Quality.** Overall, the results for their method are quite strong compared to a variety of baselines and on several variations of the evaluations. The authors also did a great job ablating aspects of their method (particularly in the Appendix), addressing several of the questions that came up while I was reading the paper. In particular, I appreciated the ablations on the text embeddings, which show that it is important to both (1) keep the embeddings fixed; (2) use a pre-trained language model instead of custom class embeddings. These help establish that contrastive-style training itself is not the main explainer for good performance. \\n\\n**Clarity.** Generally, I found the paper to be well-written and easy to follow. Particularly, the authors did a good job contextualizing their methods/results with previous works.\", \"weaknesses\": \"**More discussion about the impact of language supervision.** One result I found particularly curious (and which I believe should be highlighted more in the main paper) was that using embeddings for randomly mismatched class names results in comparable performance to using embeddings for the correct class names. Before seeing this result, I would have thought that the semantic information and relationships between classes were being usefully leveraged (e.g., similar classes being closer together in text embedding space). But based on this result, I'm not sure then to what extent the \\\"broad linguistic knowledge\\\" emphasized in the abstract is important in explaining FedGLCL's success? 
Perhaps the authors can expand upon their discussion of this in the Appendix and comment more.\\n\\n**Measuring the impact of FedGLCL-style training in centralized/IID-Federated settings** It would be interesting to see how well the contrastive-style training from FedGLCL performs in the non-federated setting (and perhaps also the IID federated setting). This would shed light on whether the gains in the non-IID federated setting might also be explained by the contrastive learning approach in FedGLCL being more effective generally on these tasks vs. if it is uniquely helpful for overcoming challenges in non-IID federated settings. Relatedly, it would also be nice to discuss your results in relation to Fang et al., 2022 (https://arxiv.org/abs/2205.01397), which included results assessing the impact of language supervision on CLIP pre-training. In their case, they find that it does not have a major impact on the final robustness of CLIP models.\\n\\n**Standard errors for results.** It would be nice if the empirical results came with standard errors. That being said, I can acknowledge that the current gaps between the proposed method and previous baselines seem to be large/consistent across evaluation settings and that repeated runs might be costly.\", \"questions\": \"Most questions I had were covered in the Weaknesses section. The only other question I had was if you considered any settings involving pretrained image models. On the surface, this might better reflect practical deployments for image-based tasks?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"We thank the reviewer for his/her constructive comments and provide our point-wise replies as follows.\\n\\n> **Q1:** Comparison with prototype FL.\\n\\nWe have compared with FedProto [2] in our experiments, and the results are presented in Tables 1 and 2. 
FedPCL [1] uses the pre-trained image backbone. For a fair comparison, we conducted additional experiments using the pre-trained image backbone (AlexNet) on the Office-Caltech-10. Specifically, we initialized the image backbone with pre-trained weights from ImageNet and kept all other settings unchanged. As we can see, FedPCL performs significantly worse than our method, demonstrating the superiority of FedGLCL over prototype-based FL methods. The results have been included in the revision (Appendix D.9 Table 18)\\n\\n\\n|Method |FedAvg|FedProx |FedNova|Scaffold| MOON| FedProto| FedBN |FedFA| FedPCL |FedGLCL |\\n|:-----:|:----:|:-----:|:----:|:-----:|:----:|:-----:|:----:|:-----:|:----:|:----:|\\n|Office-Caltech-10| 64.99 | 62.46 | 65.50 | 61.33 | 66.78 | 69.37 | 71.87 |74.91| 72.46 | 81.81 |\\n\\n\\n> **Q2:** More tasks.\\n\\nAs stated in Line 052, this work primarily uses visual recognition as an example, but it can be extended to more visual understanding tasks, such as image segmentation. Inspired by [3], we can extend the image-text alignment to pixel-text alignment for image segmentation. We conducted experiments on ProstateMRI [4], a widely used federated medical image segmentation benchmark. This benchmark has six clients including BIDMC, HK, I2CVB, BMC, RUNMC, and UCL, each from a different domain. We used U-Net as the segmentation network and aligned the output of the final layer with the text features. Since this benchmark involves a binary segmentation task, we use \\\"foreground\\\" and \\\"background\\\" as the class names. The communication rounds are set to 200, with local rounds set to 1. The results are presented below. It can be observed that FedGLCL achieves certain improvements compared to FedAvg, indicating that our method has the potential to be applied to other tasks to address non-IID data issues. 
\\n\\n|Method|BIDMC|HK|I2CVB|BMC|RUNMC|UCL|Avg.|\\n| :-----|:----: |:----: |:----:| :-----|:----: |:----: |:----:|\\n|FedAvg| 87.66|94.48|96.00|90.46|93.21|87.47|91.55|\\n|FedGLCL|91.81|94.89|95.95|92.08|93.34|90.68|93.12|\\n\\n\\n> **Q3:** Curious about the performance of GloVe.\\n\\nThanks for your suggestion. We conducted additional experiments using GloVe to encode global text embeddings. The results, presented below, show that GloVe outperforms FedAvg, but achieves lower performance than Bert and CLIP. This is because Bert and CLIP employ deep neural networks to learn data representations through supervised learning on large-scale datasets, allowing them to capture semantic information more effectively. The results have been included in the revision (Appendix D.4 Table 13).\\n\\n|Text Encoder | CIFAR-100 | Office-Caltech-10 | DomainNet |\\n| :-----|:----: |:----: |:----:|\\n|FedAvg| 66.67| 60.25 |66.28 |\\n| GloVe | 68.95 | 71.51 | 70.48|\\n|CLIP/ResNet-50| 73.30 | 77.26 | 76.41|\\n|CLIP/ViTB-16| 72.88 | 75.99| 76.09|\\n|Bert-base (default)| 73.52 | 77.35| 76.54 | \\n\\n**References**\\n\\n[1] Tan, Yue, et al. \\\"Federated learning from pre-trained models: A contrastive learning approach.\\\" Advances in neural information processing systems 35 (2022): 19332-19344.\\n\\n[2] Tan, Yue, et al. \\\"Fedproto: Federated prototype learning across heterogeneous clients.\\\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 36. No. 8. 2022.\\n\\n[3] Li, Boyi, et al. \\\"Language-driven Semantic Segmentation.\\\" International Conference on Learning Representations. 2022.\\n\\n[4] Liu, Quande, et al. \\\"MS-Net: multi-site network for improving prostate segmentation with heterogeneous MRI data.\\\" IEEE Transactions on Medical Imaging 39.9 (2020): 2713-2724.\"}"
The authors leverage language-driven representation learning (e.g. CLIP, as opposed to label-driven), incorporating a pre-trained text encoder to stabilize feature representations. This approach enables the alignment of local image features with a global text feature space, reducing feature disparity across clients. Experiments demonstrate that FedGLCL outperforms traditional label-driven FL methods in various non-IID scenarios, including label and feature distribution skew.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The proposed method is novel - mitigating the non-IID data issue in FL on image classification pre-training with text encoders.\\n\\n2. The experimental results are good: the authors experimented on many datasets, comparing against many baselines, highlighting the generalizability of their method. \\n\\n3. The authors provided theoretical guarantees for the generalizability of the proposed method.\", \"weaknesses\": \"1. The use of image-text pre-training is prevalent, and the non-IID data issue is not limited to FL settings, so adding text encoders to mitigate the non-IID issue shouldn't be new. Honestly I am not familiar with the FL literature, so I don't know how much this would add to the FL community.\\n\\n2. The experiments are somewhat limited to small-scale datasets. Performing experiments on larger-scale datasets (e.g. ImageNet) would make the claims stronger.\", \"questions\": \"In this paper, federated learning is applied to a specific type of image-text pre-training. I suggest making this clear in the abstract and throughout the paper - as there are also other types of federated learning as well (e.g. text only).\", \"158___161\": \"Can you explain more how your work differs from other works that use CLIP in FL training (e.g. Lu et al., 2023, Guo et al., 2023, Shi et al., 2023)? 
In particular - what is \\\"the philosophy of language-driven representation learning\\\"?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents FedGLCL, a novel federated learning (FL) framework designed to address the challenges posed by non-Independent and Identically Distributed (Non-IID) data. The framework leverages a pretrained VLM, CLIP, specifically through contrastive learning that integrates global language and local image features. The proposed method aims to harmonize feature learning across different clients, reducing variance in local representations and mitigating overfitting to majority classes. The paper includes extensive theoretical and empirical analyses, demonstrating the superiority of FedGLCL over state-of-the-art FL algorithms in various non-IID scenarios.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The utilization of a large pretrained CLIP model in FL is a unique and innovative direction. By leveraging contrastive learning to align local image features with global textual data, the paper proposes a novel solution to tackle the Non-IID problem.\\n2. The paper provides a solid theoretical foundation for FedGLCL, explaining how the framework can mitigate issues related to Non-IID data. This theoretical backing strengthens the credibility of the proposed method.\\n3. The authors conduct a comprehensive set of experiments, comparing FedGLCL with multiple state-of-the-art FL algorithms across different non-IID scenarios. The results demonstrate significant improvements, validating the effectiveness of the proposed framework.\", \"weaknesses\": \"1. Clarity and presentation issue: It is not clear which pretrained vision-language models are used in the experiments. CLIP? ALIGN? BLIP?\\n\\n2. 
It is better to compare the performance of different text encoders from various pretrained VLMs, because the proposed method heavily relies on the pretrained text encoder.\", \"questions\": \"Are the datasets used in the experiments part of the pretraining of VLMs? If this is the case, then your framework's better performance is due to the pretraining stage.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Official Comment by Authors\", \"comment\": \"> **Q4:** Add experiments with pre-trained image models.\\n\\nThanks for your suggestion. We conducted additional experiments using pre-trained image models on the Office-Caltech-10. Specifically, we initialized the image backbone (AlexNet) with pre-trained weights from ImageNet and kept all other settings unchanged. The results show that incorporating the pre-trained image backbone improved the performance of all methods, and our approach consistently achieved the best results. This confirms that under fair comparisons (where all methods either utilize the pre-trained image backbone or do not), FedGLCL significantly outperforms other methods, further demonstrating its effectiveness and superiority. The results have been included in the revision (Appendix D.9 Table 18).\\n\\n|Method |FedAvg|FedProx |FedNova|Scaffold| MOON| FedProto| FedBN |FedFA |FedGLCL |\\n|:-----:|:----:|:-----:|:----:|:-----:|:----:|:-----:|:----:|:-----:|:----:|\\n|Office-Caltech-10| 64.99 | 62.46 | 65.50 | 61.33 | 66.78 | 69.37 | 71.87 |74.91| 81.81 |\\n\\n**References**\\n\\n[1] Fang, Alex, et al. \\\"Data determines distributional robustness in contrastive language image pre-training (CLIP).\\\" International Conference on Machine Learning. 
PMLR, 2022.\"}", "{\"summary\": \"This paper addresses heterogeneous federated learning (in the image classification field) via a pre-trained and fixed text encoder (BERT) on the server and learned image encoders on the clients. The classification is done by CLIP-style image-text embedding similarity comparison. The authors report superior performance over traditional heterogeneous federated learning methods (FedAvg, etc.), and provide scalability analysis, efficiency analysis, and an ablation study.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The proposed method shows superior performance.\\n2. Comprehensive analysis is provided.\\n3. The usage of BERT embeddings as a prototype vector for FL image classification is novel.\", \"weaknesses\": \"1. There are few comparisons with prototype FL (like [1] and [2]).\\n2. The method proposed has some limitations. It is restricted to image classification and requires a pre-trained model that can give good embeddings of the classification labels. This seems to exclude many FL scenarios. (I am not saying that using Office-Caltech-10 and DomainNet is not enough. I just wonder if it may not be as versatile as FL methods like FedAvg. Yet I still appreciate this paper as a good method for image classification.)\\n\\n\\n[1] Federated Learning from Pre-Trained Models: A Contrastive Learning Approach https://proceedings.neurips.cc/paper_files/paper/2022/file/7aa320d2b4b8f6400b18f6f77b6c1535-Paper-Conference.pdf\\n[2] FedProto: Federated Prototype Learning across Heterogeneous Clients https://arxiv.org/pdf/2105.00243\", \"questions\": \"1. Comparison/Detailed discussion about prototype FL\\n2. Can it be adjusted for tasks besides image classification?\\n3. Can traditional word embedding such as GloVe also achieve a performance comparable to BERT's? 
(just curious about this)\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for your valuable time to respond to our feedback. We are very happy to see that your concerns have been fully addressed :)\"}" ] }
7p8CcxP1Xc
Proximal Mapping Loss: Understanding Loss Functions in Crowd Counting & Localization
[ "Wei Lin", "Jia Wan", "Antoni B. Chan" ]
Crowd counting and localization involve extracting the number and distribution of crowds from images or videos using computer vision techniques. Most counting methods are based on density regression and rely on an ``intersection'' hypothesis, *i.e.*, one pixel is influenced by multiple points in the ground truth, which is inconsistent with reality since one pixel would not contain two objects. This paper proposes Proximal Mapping Loss (PML), a density regression method that eliminates this hypothesis. {PML} divides the predicted density map into multiple point-neighbor cases through the nearest neighbor, and then dynamically constructs a learning target for each sub-case via proximal mapping, leading to more robust and accurate training. {Furthermore}, PML is theoretically linked to various existing loss functions, such as Gaussian-blurred L2 loss, Bayesian loss, and the training schemes in P2PNet and DMC, demonstrating its versatility and adaptability. Experimentally, PML significantly improves the performance of crowd counting and localization, and illustrates the robustness against annotation noise. The code is available at [https://github.com/Elin24/pml](https://github.com/Elin24/pml).
[ "crowd counting" ]
Accept (Poster)
https://openreview.net/pdf?id=7p8CcxP1Xc
https://openreview.net/forum?id=7p8CcxP1Xc
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xpsxBdHcr2", "x6ODewFrZN", "uWqqTgNyI8", "u7ZoKOG9GN", "oAs6FsqnuE", "khkyNO4MN2", "dUjly4C9Ag", "X63hm0P9NK", "TpWD5Qcvtu", "TY2i9HGCYf", "R28gcHfiDQ", "Q31uPm4DM6", "PnHhFb9Bem", "NwCUMWcPxQ", "NcvrNbkoYo", "KMOHi4keG6", "JdXxy0q5iU", "EnoHnZTV7Z", "AksaJc3YBy", "Agz0Ps0gDT", "6R1yyDbnQF", "2fzJEy9h9y", "2LAzxU9r6e" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_review" ], "note_created": [ 1732292845760, 1732291212326, 1732291947498, 1729993459502, 1730681873452, 1732291701430, 1730453950115, 1732290390094, 1732239494834, 1732846419398, 1732289595454, 1732293264238, 1732531912655, 1732621268601, 1737523502138, 1732519346814, 1732294001291, 1732500270630, 1732617277379, 1734239735952, 1732616734292, 1732540359359, 1730519736312 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2412/Authors" ], [ "ICLR.cc/2025/Conference/Submission2412/Authors" ], [ "ICLR.cc/2025/Conference/Submission2412/Authors" ], [ "ICLR.cc/2025/Conference/Submission2412/Reviewer_fWss" ], [ "ICLR.cc/2025/Conference/Submission2412/Reviewer_JtJ9" ], [ "ICLR.cc/2025/Conference/Submission2412/Authors" ], [ "ICLR.cc/2025/Conference/Submission2412/Reviewer_opsr" ], [ "ICLR.cc/2025/Conference/Submission2412/Authors" ], [ "ICLR.cc/2025/Conference/Submission2412/Reviewer_fWss" ], [ "ICLR.cc/2025/Conference/Submission2412/Authors" ], [ "ICLR.cc/2025/Conference/Submission2412/Authors" ], [ "ICLR.cc/2025/Conference/Submission2412/Authors" ], [ "ICLR.cc/2025/Conference/Submission2412/Reviewer_JtJ9" ], [ "ICLR.cc/2025/Conference/Submission2412/Reviewer_fWss" ], [ 
"ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2412/Reviewer_opsr" ], [ "ICLR.cc/2025/Conference/Submission2412/Authors" ], [ "ICLR.cc/2025/Conference/Submission2412/Reviewer_cGtL" ], [ "ICLR.cc/2025/Conference/Submission2412/Authors" ], [ "ICLR.cc/2025/Conference/Submission2412/Area_Chair_c1c2" ], [ "ICLR.cc/2025/Conference/Submission2412/Authors" ], [ "ICLR.cc/2025/Conference/Submission2412/Reviewer_fWss" ], [ "ICLR.cc/2025/Conference/Submission2412/Reviewer_cGtL" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer opsr\", \"comment\": \"**Q1: a quantitative comparison of training efficiency**\\n\\n**A1:** The efficiency is compared in the last paragraph of Sec. G in the Appendix, from line 944 to line 949. Overall, the computation ratio in practice is 3.46 on a 3090 Ti GPU. Besides, when training HRNet on the UCF-QNRF dataset, PML converges faster and achieves an MAE smaller than 75 in 1100 epochs, while GL requires more than 2100 epochs to achieve an MAE smaller than 80. For one epoch, GL needs around 90 seconds, but PML only requires around 40 seconds.\\n\\n---\\n\\n**Q2: analyze the characteristics of the proposed loss under different crowd densities.**\\n\\n**A2:** Spatial resolution affects not only PML but also other methods. Fig. 11 presents \\\"the estimation errors\\\" vs. \\\"the resolution\\\" for all loss functions. It shows that larger resolution results in better performance. To show the performance of PML under different crowd densities, we also added the detailed performance on the NWPU counting and localization benchmarks in Sec. J and Tab. 9 in the revised appendix. PML achieves the best performance in S3, where the count ranges from 500 to 5000. In the localization benchmark, PML achieves the best or second-best performance on all levels.\\n\\n---\\n\\n**Q3: Which scene PML works the best?** \\n\\n**A3:** As displayed in Tab. 
9 in the Appendix, PML achieves the best counting performance in S3, where the counting range is from 500 to 5000 in the NWPU counting benchmark. In the localization benchmark, PML achieves the best or second-best performance on all levels.\\n\\nBesides, we added a new section (Sec. C) and a new figure (Fig. 7) to show how PML processes dense and sparse crowds. As displayed in Fig. 7(a), these lines demonstrate the \\\"average count in each point-neighbor case\\\" vs. the \\\"NN distance of each point.\\\" PML's average count is closer to 1 when the distance between the concerned point and its nearest point ranges from 20 to 28.\\n\\n---\\n\\n**Q4: Is the output resolution the same when training different models in Tab. 4?**\\n\\n**A4:** The output resolutions are the same for MCNN, CSRNet, and HRNet, as we use the same structure as that in the original papers, and HRNet's structure is not changed. For VGG19, we borrow the results from the original papers of other loss function studies (resolution \\\"X1/8\\\" in Fig. 11). The initial version of PML used the resolution of \\\"X1/2\\\" in Tab. 4. \\n\\nThanks for the kind reminder. We have revised the results and now present the \\\"X1/8\\\" results in Tab. 4 in the new version (*line-437*). However, the conclusion with the \\\"X1/8\\\" map is consistent with the previously presented results: PML surpasses other loss functions.\\n\\n---\\n\\n**Q5: Does PML converge faster than existing methods?** \\n\\n**A5:** PML has a fast convergence speed. 
In the following table, we present after how many epochs the MAE of HRNet on the validation set is smaller than a specific value:\\n\\n|MAE | < 100 | < 90 | < 85 | < 80 | < 75 |\\n| :---: | :-----: | :----: | :----: | :----: | :----: |\\n| L2 | 1610 | - | - | - | - |\\n| BL | 220 | 640 | 1880 | - | - |\\n| DMC | 200 | 420 | 1120 | - | - |\\n| GL | 130 | 280 | 970 | 2170 | - |\\n| PML | 40 | 80 | 200 | 410 | 1070 |\\n\\nComparing PML to GL, GL achieves an MAE smaller than 85 after 970 epochs, while PML only requires 200 epochs to reach it and achieves an MAE smaller than 75 after 1070 epochs. Besides, GL needs around 90 seconds for each epoch, but PML only requires around 40 seconds. These pieces of empirical evidence demonstrate that PML converges fast.\"}", "{\"title\": \"Response to Reviewer JtJ9 -- part 3\", \"comment\": \"**Q8: How does the PML\\u2019s nearest neighbor assignment compare to (3) CLTR?**\\n\\n**A8:** CLTR proposes the KMO-based matching strategy, in which the KNN distance of each concerned point in its set (predicted point set or GT point set) is computed to compare the context similarity in loss. Since the KMO is also designed for point-based counters, we apply it to P2PNet and the improved P2PNet+ derived from PML in Sec. D of the Appendix. The comparison is listed here:\\n\\n| Method | MAE | MSE |\\n| :------------------: | :--: | :--: |\\n| P2PNet | 52.74 | 85.06 |\\n| P2PNet+ | 52.49 | 83.02 |\\n| KMO + P2PNet | 55.75 | 91.70 |\\n| KMO + P2PNet+ | 52.47 | **82.92** |\\n| PML | **52.25** | 83.93 |\\n\\nThe performance of KMO + P2PNet is worse than even P2PNet, but \\\"KMO + P2PNet+\\\" can improve the performance and achieve the lowest MSE. However, PML works the best on MAE, because PML separates the prediction into irregular patches by assigning each pixel to its nearest GT points. This ensures that each region naturally captures the spatial relationship similar to KMO. To better understand the proposed PML, a new figure (Fig. 
6) and section (Sec. B) are added to the revised version to demonstrate the process of the divide and conquer stage in PML.\\n\\n---\\n\\n**Q9: How does PML handle extremely dense scenes or partially occluded crowds?**\\n\\n**A9:** Although NN is applied in the divide stage, the model still follows the scheme of density map regression: the count loss $||M^\\\\top {a} - 1||_1$ forces the model to predict the count in each point-neighbor case close to 1. For extremely dense crowds or crowds full of occlusions, the boundary between pedestrians may not be clear, but the count is forced to be close to 1 in each point-neighbor case. \\n\\nFrom *line-750* to *line-785*, we add a new section (Sec. C in the appendix) and a new figure (Fig. 7) to describe how PML performs in different density levels. An overall statistic in Fig. 7(a) demonstrates that the predicted count in each point-neighbor case is close to 1, and an example in Fig. 7(b)-(e) visualizes how the model trained with PML performs in both dense and sparse crowds. \\n\\n---\\n\\n**Q10: Detailed results from the NWPU evaluation.**\\n\\n**A10:** Thanks for your comments. We have added the detailed comparison of both the NWPU counting and localization benchmarks in Tab. 9 and added the corresponding description in Sec. J. On the counting benchmark, PML achieves similar performance to STEERER, and PML excels in S3, where the crowd size variance is the largest, ranging from 500 to 5000. On the localization benchmark, PML achieves outstanding performance at all levels and is much better than STEERER.\\n\\n---\\n\\n**Q11: Missing citation for RAZNet in line 135.**\\n\\n**A11:** Thanks for the kind reminder; we have revised it in the new version, see *line-134*.\"}
However, the role of the intersection hypothesis is **to define the learning objective, not to model the image properties.** The learning objective is a highly abstract representation of instances and has different forms in different learning frameworks, such as the mixed Gaussian kernel in Fig. 1(b), the point in Fig. 1(c), and the box mask in Fig. 1(d). The intersection hypothesis assumes that a foreground pixel in the learning objective is shared by multiple instances(Fig. 1(b)), while in Fig. 1(c-e) without the intersection hypothesis, each foreground pixel is only occupied by one instance. \\n\\nSorry for any misunderstanding, and we have revised it from *line-72* to *line-73* in the new version.\\n\\n---\\n\\n**Q8: Visualizing how PML really addresses intersection hypothesis.**\\n\\n**A8:** The intersection hypothesis is not a problem, but a design principle. The elimination of the intersection hypothesis is implemented in the divide stage, where PML assigns each pixel to only one element in the GT point set. To make this clear, we plot the procedure of PML (the divide stage and conquer stage) in Fig. 6 and explain it in Sec. B of the revised appendix, to demonstrate how the intersection hypothesis is removed in PML.\\n\\n---\\n\\n**Q9: PML on recent SOTA models.**\\n\\n**A9:** See cGtL-Q2.\\n\\n---\\n\\n**Q10: quantitatively showcase how much PML can actually eliminate the intersection hypothesis**\\n\\n**A10:** The intersection degree can be quantitatively measured by localization performance, as previous studies have shown that training without the intersection hypothesis can improve localization performance. To showcase this quantitatively, we have updated the paper and presented the detailed performance on the NWPU localization benchmark in Sec. J of the Appendix and Tab. 9 (bottom). PML achieves better F1-measure than all previous methods, including those with and without the intersection hypothesis. 
In different head size levels, PML also achieves the best recalls from A2 to A4 (head area ranges from 1e2 to 1e5) and achieves the second-best recalls at other levels.\"}", "{\"summary\": \"This paper argues that the intersection hypothesis in existing density-map based counting methods is inconsistent with reality, where one pixel is influenced by multiple ground-truth points. To eliminate such a hypothesis, the authors propose a proximal mapping loss, which follows the philosophy of divide-and-conquer. Specifically, the predicted density map is first divided into non-overlapping irregular patches. Then, proximal mapping loss (PML) is computed within each point-neighbor case. Experimental results on several crowd counting datasets demonstrate the effectiveness of the proposed method. Additionally, this paper also shows that the proposed PML encompasses many existing loss functions.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. This paper presents an interesting argument about the intersection hypothesis in existing density-regression counting methods.\\n2. The paper proposes a divide-and-conquer strategy to eliminate the intersection hypothesis, featuring a proximal-mapping loss.\\n3. Experiments show that the proposed method achieves state-of-the-art counting and localization results.\", \"weaknesses\": \"1. The main concern is the training efficiency. The proposed method not only needs to compute the nearest neighbor, but also constructs learning targets for each sub-problem presented in Eq. 1. This could be computationally expensive when dealing with congested scenes.\\n2. The robustness to noisy annotations needs further clarifications, e.g., performance comparisons with existing methods would be sufficient.\", \"questions\": \"1. Does the proposed method suffer from slow convergence speed? As shown in Fig. 4, the training epoch reaches 2000.\\n2. Did the authors use different epsilon for different datasets? Fig. 
8 shows that the choice of epsilon does affect the performance.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces Proximal Mapping Loss (PML) to improve crowd counting and localization and challenges the basis of prevailing loss functions used in density regression. PML divides the predicted density map into smaller, non-overlapping point neighbors, each of which is processed independently to construct a learning target. The paper also links various existing loss functions used in crowd counting to PML, showcasing the generalizability across existing methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper demonstrates limited originality through its approach, which addresses a key limitation, the intersection hypothesis, in existing methods. The method offers a fresh perspective on how crowd density can be modeled. The paper provides good theoretical foundations for the proposed method and empirical results compared to existing methods. The paper is well-structured and progresses coherently to deliver the proposed method.\", \"weaknesses\": \"While the paper demonstrates that PML outperforms several existing loss functions, it does not explain why PML works better for density-based crowd-counting methods. The method\u2019s reliance on nearest-neighbor assignment for dividing the predicted density map into point-neighbor cases could be problematic, particularly in dense crowd structures. NN algorithms may not capture the correct relationships between nearby points, leading to inaccurate density estimations in such edge cases. The paper does not address how this issue is handled or provide evidence that PML performs well in extreme cases.\", \"questions\": \"Major Comments\\n\\n1. 
There are existing density-based crowd-counting methods that already don\u2019t rely on the intersection hypothesis, such as STEERER and CrowdDiff. STEERER achieves non-overlapping density through scaling, and CrowdDiff uses narrow kernels to prevent overlapping. Both methods use Gaussian kernels and avoid overlapping in density kernels. In that case, why would the non-overlapping nature of PML work better than non-overlapping Gaussian kernels?\\n2. The paper lacks a comparison against CrowdDiff, the state-of-the-art method on the NWPU crowd-counting benchmark.\\n3. How does eliminating the intersection hypothesis improve crowd counting and localization performance?\\n4. Fig. 3 shows the density maps obtained with or without applying PML. If it is the former, how is localization achieved from the density maps? Also, in some cases, more than one density kernel is predicted on a single face. How are these false positives filtered for localization and counting?\\n5. How does the nearest neighbor approach in the divide and conquer stage impact performance in dense versus sparse crowd scenes?\\n6. The counting loss was tested using the nearest neighbors in CLTR. How does the PML\u2019s nearest neighbor assignment compare to CLTR?\\n\\nMinor comments\\n\\n7. How does PML handle edge cases where crowd members are extremely close together or partially occluded? Especially with partial occlusions, even though the pixel values are not influenced by neighboring objects, the density kernels could have some overlapping as this is a distribution predicted over image coordinates conditioned on pixel values.\\n8. The detailed results from the NWPU evaluation could help address these edge cases for the crowd-counting community.\\n9. 
Missing citation for RAZNet in line 135.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer cGtL -- part 1\", \"comment\": \"**Q1: Miss discussion with D2CNet[a]: Decoupled two-stage crowd counting and beyond, IEEE Transactions on Image Processing, 2021.**\\n\\n**A1:** Thanks for the reminder. We have added and discussed D2CNet in the Introduction (*line 86*) and Related Work (*line 153*). Additionally, FIDTM and iKNN are also discussed, as they were initially overlooked (*lines 77 and 167*).\\n\\n---\\n\\n**Q2: Demonstrate its effectiveness on SOTA models instead of constructing a new one (the HRNet-based model)**\\n\\n**A2:** HRNet is indeed the backbone of the current SOTA -- STEERER. STEERER uses HRNet as the backbone but incorporates a feature selection and inheritance module and a specific training scheme called masked selection and inheritance loss. These elaborate designs aim to fuse density maps with multiple resolutions.\\n\\n| Model | MAE | MSE |\\n| :---------------: | :--: | :--: |\\n| STEERER (#1) | 58.4 | 92.0 |\\n| STEERER (#2) | 56.5 | 92.8 |\\n| STEERER (#4) | 54.6 | 86.9 |\\n| PML (#1) | **52.3** | **84.7** |\\n\\nIn the table, \\\"(#n)\\\" means that there are n density maps from different resolutions fused in STEERER via feature selection. \\\"(#1)\\\" means that feature selection is not applied.\\n\\nThe comparison shows that PML, which outputs just one density map without any complicated feature selection module, achieves better performance than STEERER, which fuses 4 density maps with various resolutions using elaborate neural network structures.\\n\\n---\\n\\n**Q3: vgg16bn results.**\\n\\n**A3:** Thanks for the comment. 
We have added it to Tab.2 in the revised version.\\n\\n---\\n\\n**Q4: This paper aims to solve the intersection hypothesis problem, but the paper lacks experiments to prove it.**\\n\\n**A4:** The intersection hypothesis is not a problem, but a design principle. In the Introduction, we claimed that most crowd counting methods are trained following the framework of density map regression, whose learning objective is also a density map where each foreground pixel in it is shared by multiple points. We summarized this as the intersection hypothesis. In the Related Work section, we introduce previous methods that are learned with (line 131) and without (line 157) the intersection hypothesis. Our PML follows the density map regression framework but is trained without the intersection hypothesis. To demonstrate how the intersection hypothesis is removed in the divide stage of PML, we added a new figure (Fig. 6) to present how the density map is split into many irregular patches.\\n\\n---\\n\\n**Q5: PML is robust to label noise.**\\n\\n**A5:** The robustness of PML with L1-norm is claimed when compared with L2-norm. See *line-318* to *line-328*, where we explain why L1-norm performs better than L2-norm when used as the count loss. In the revised version, we add uniform noise to human annotations to test the robustness of the L1-norm when compared with the L2-norm. As shown in the updated Fig.4(a)-(b), L1-norm achieves lower estimation errors than L2-norm in all noise degrees.\\n\\nIn our study, we claim that the reason for this phenomenon is that the L1-norm is more robust to label noise than the L2-norm because the former sets the learning objective according to the predicted count, which is consistent with the final evaluation metric. Fig. 5 provides a detailed visualization of the learning objective when PML uses L1-norm and L2-norm. Specifically, the comparison between Fig. 5(d) and Fig. 5(g) illustrates this point. 
When the count is close to 1 (well-estimated), the learning objective of L1-norm is close to the prediction (Fig. 5(g)), resulting in a small loss, which indicates that the training is good enough. However, the learning objective of L2-norm always forces the prediction to be close to the GT point's location, leading to a large loss even though the prediction is close to 1 (Fig. 5(d)). This latter case is harmful to the counting task because it forces the prediction to move to an inappropriate position even when the count is well-estimated.\\n\\n--- \\n\\n**Q6: CSRNet uses vgg16, not vgg16-bn.**\\n\\n**A6:** Thanks for the kind reminder, we have revised it in the new version, see *line-380*.\\n\\n---\"}
While the authors mention that the loss computation of PML is more efficient than GL, it is unclear whether the convergence speed of PML is faster than existing loss functions.\", \"It appears that retaining the spatial resolution is important to make PML work as occluded objects could degenerate into a single point when downsampling. In this case, it is beneficial to analyze the characteristics of the proposed loss under different crowd densities.\", \"Following the previous comments, it is suggested to discuss in what scenarios PML is better than previous methods.\"], \"questions\": [\"Did the authors use the same output resolution to train different models in Table 4? As shown in the paper, spatial resolution could affect the performance.\", \"Does PML converge faster than existing methods? This is important when applying the proposed method to large-scale datasets.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"n/a\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer JtJ9 -- part 2\", \"comment\": \"**Q4: Comparison against CrowdDiff.**\\n\\n**A4:** Crowddiff addresses crowd counting through a coarse-to-fine procedure using diffusion models, which requires a denoising process and takes much time to generate the final density map:\\n\\n| UCF-QNRF | MAE | MSE | Time(s) |\\n| :-------------: | :--: | :---: | :-----: |\\n| CrowdDiff (T=1) | 74.6 | 134.8 | 0.21 |\\n| CrowdDiff (T=2) | 71.9 | 130.5 | 0.43 |\\n| PML(ours) | 73.2 | 127.5 | **0.08** |\\n| CrowdDiff(T=4) | **69.0** | **125.7** | 1.36 |\\n\\nPML can achieve similar performance when the refine time is less than 4. However, ours is significantly faster than CrowdDiff. 
Although CrowdDiff can achieve better performance than PML by increasing the refine times (T=4 in the paper), PML outperforms it in other metrics:\\n\\n- **Inference Efficiency:** As shown in the table above, PML is much faster than CrowdDiff.\\n- **Flexibility:** PML's novelty lies in the *loss function* and is independent of the network structure. As shown in Table 4, PML can even improve the performance of MCNN, a very old and small model. From MCNN to HRNet, the best performance can be obtained by simply changing the backbone. This flexibility is not possible for a diffusion model like CrowdDiff.\\n- **Theoretical Contribution:** We establish the connection between PML and previous loss functions in crowd counting, as shown in Tab.1, which demonstrates that previous loss functions are special cases of PML within the point-neighbor case.\\n\\nCombining CrowdDiff and PML could be a future work. However, reproducing CrowdDiff is challenging since there are a lot of issues in reproducing CrowdDiff's results, as listed in [Issues \\u00b7 dylran/crowddiff](https://github.com/dylran/crowddiff/issues).\\n\\n---\\n\\n**Q5: How does eliminating the intersection hypothesis improve crowd counting and localization performance?**\\n\\n**A5:** Eliminating the intersection hypothesis can improve localization performance, which has been demonstrated in many works, *i.e.,* FIDTM, IIM, and P2P. Their results on the NWPU localization benchmark are listed in Tab. 9 (bottom) of the revised version. However, the counting performances of these methods are not as good as those methods that are based on density regression and follow the intersection hypothesis. 
In our paper, the main contribution to the improvement of counting **and** localization is attributed to PML, which minimizes the difference between prediction and GT via proximal mapping.\\n\\n---\\n\\n**Q6: How is localization achieved from the density maps in Fig.3?**\\n\\n**A6:** In *line-463*, we apply **OT-M** to transform the density map into localization information. If the sum of a local region in a density map is greater than 1, two or more pedestrians may be predicted. However, this issue is not related to PML but rather to OT-M. PML aims to ensure that the sum of each point-neighbor case is close to 1, *i.e.,* the term $||P^\\\\top {a} - 1||_1$ in equation (24).\\n\\n---\\n\\n**Q7: How does the nearest neighbor approach in the divide and conquer stage impact performance in dense versus sparse crowd scenes?**\\n\\n**A7:** We add a new section (Sec. C) and figure (Fig. 7) in the Appendix to describe the difference. In a sparse crowd, PML can predict a density map similar to a point map. In dense regions, the density map is close to a Gaussian-blurred density map. Note that the nearest neighbor is the KNN with K=1. If K > 1, there will be intersections between adjacent point-neighbor cases, which deviates from the original purpose of eliminating the intersection hypothesis.\\n\\n---\"}", "{\"comment\": \"After reading the comments from fellow reviewers, I agree that additional comparisons and discussions are necessary to support the effectiveness of the proposed approach. Given that the authors did not provide a rebuttal, these concerns remain unaddressed. Therefore, I am inclined to recommend rejecting this paper.\"}", "{\"title\": \"Looking forward to your further comments!\", \"comment\": \"Dear reviewers and AC:\\n\\nThanks again for all of your constructive suggestions, which have helped us improve the quality and clarity of the paper! \\nAlthough we cannot upload a revised PDF, we can post replies on the forum before December 2nd. 
Please don't hesitate to let us know if we can offer any additional clarifications or address any minor issues, as we would love to convince you of the paper's merits. We appreciate your suggestions. Thanks!\\n\\n-the authors\"}", "{\"title\": \"Response to Reviewer JtJ9 -- part 1\", \"comment\": \"**Q1: Why would the non-overlapping nature of PML work better than non-overlapping Gaussian kernels?**\\n\\n**A1**: *\\\"Sec 4.1: From Dynamic L2 Loss to Gaussian-Blurred L2 Loss\\\"* theoretically analyzes the differences and connections between PML and the Gaussian-blurred L2 loss.\\n\\nOn the one hand, from *line-245* to *line-250*, a learning objective ${p^*}$ for L2 loss is constructed dynamically according to the current prediction. On the other hand, from *line-257* to *line-266*, we demonstrate how to add conditions to (7) to facilitate the transition from a **dynamic ${p^\\\\*}$** to a **fixed Gaussian kernel ${p^\\\\*}'$**.\\n\\nCompared with non-overlapping fixed Gaussian kernels, a dynamic learning objective ${p^*}$ can achieve better performance because it finds a customized and more learnable target for the current model, as explored in numerous studies:\\n- [a] introduces *self-correction supervision*, estimating a dynamic learning objective ${p^*}$ based on a Gaussian Mixture Model derived from the prediction ${a}$.\\n- [b] proposes ADMG, a *refiner module* that fuses different Gaussian kernels to construct the learning objective ${p^*}$, with the *refiner module* being supervised by the current prediction ${a}$.\\n- [c] further lifts the restrictions in [b] and introduces KDMG, which estimates a kernel at each pixel to construct ${p^*}$ according to the ground truth (GT) point map. 
Similar to [b], the learnable constructor in KDMG is also supervised by the current prediction ${a}$.\\n- The loss function in [d] also includes an L2 term (pixel loss), where the learning objective ${p^*}$ is constructed dynamically by aggregating the transport plan between GT and prediction ${a}$, estimated through unbalanced optimal transport.\\n- The learning objective of P2PNet, as presented in [e], is also constructed based on GT points and the current estimate ${a}$, which is involved in the formation of the cost matrix (18).\\n\\n```\\n[a] Adaptive Dilated Network with Self-Correction Supervision for Counting\\n[b] Adaptive Density Map Generation for Crowd Counting\\n[c] Kernel-based Density Map Generation for Dense Object Counting\\n[d] A Generalized Loss Function for Crowd Counting and Localization\\n[e] Rethinking Counting and Localization in Crowds: A Purely Point-Based Framework\\n```\\n\\nThese studies have shown that adapting the learning target (e.g., the intermediate representation of the density map) can lead to better counting performance by better fitting the properties of the data. We add related content to line-267 in the revised version.\\n\\n---\\n\\n**Q2: Dividing the predicted density map into point-neighbor cases could be problematic, particularly in dense crowd structures.**\\n\\n**A2:** As described from *line-073* to *line-079*, several studies have explored the possibility of dividing the prediction into non-overlapping patches, *i.e.*, LSC-DNN, TopCount, and IIM. Additionally, ikNN and FIDTM also demonstrate that it is reasonable to divide the prediction into irregular patches based on the distance to GT points, of which the description has been added to the introduction (*line-077*) and related works (*line-167*) of the new version.\\n\\n---\\n\\n**Q3: NN may not capture the correct relationships between nearby points. 
Provide evidence that PML performs well in extremely nearby points.**\\n\\n**A3:** In the revised version, we add Sec. C -- \\\"Sparse *vs.* Dense Crowd in PML\\\" and Fig 7 to demonstrate it. According to Fig. 7, PML can predict a count close to 1 in each point-neighbor case even if it is very dense (NN < 4: the distance between the concerned point and its nearest neighbor in GT is smaller than 4).\\n\\n---\"}", "{\"title\": \"Response to Reviewer fWss\", \"comment\": \"**Q1: training efficiency demonstration, especially the nearest neighbor and learning objective for each sub-problem.**\\n\\n**A1:** In (24), we present how to compute the PML via matrix operations, which is executed fast on GPU. The nearest neighbor can be easily computed by getting the column minimum of the cost matrix, while all sub-problems can be solved in parallel. All operations involved in PML have stable and fast implementation using GPU.\\n\\nIn the last paragraph of Sec. G in the Appendix (line-945 to line-949), we compare the efficiency of PML to GL, and show that the computation ratio in practice is 3.46 on a 3090 Ti GPU. Besides, when training HRNet on the UCF-QNRF dataset, PML converges faster and achieves an MAE smaller than 75 in 1100 epochs, while GL requires more than 2170 epochs to achieve an MAE smaller than 80. For each epoch, GL needs around 90 seconds, but PML only requires around 40 seconds.\\n\\n---\\n\\n**Q2: The robustness to noisy annotations.**\\n\\n**A2:** The robustness of PML with L1-norm is claimed when compared with L2-norm. See *line-318* to *line-328*, where we explain why L1-norm performs better than L2-norm when used as the count loss. In the revised version, we add uniform noise to human annotations to test the robustness of the L1-norm when compared with the L2-norm. 
As shown in the updated Fig.4(a)-(b), L1-norm achieves lower estimation errors than L2-norm in all noise degrees.\\n\\nIn our study, we claim that the reason for this phenomenon is that the L1-norm is more robust to label noise than the L2-norm because the former sets the learning objective according to the predicted count, which is consistent with the final evaluation metric. Fig. 5 provides a detailed visualization of the learning objective when PML uses L1-norm and L2-norm. Specifically, the comparison between Fig. 5(d) and Fig. 5(g) illustrates this point. When the count is close to 1 (well-estimated), the learning objective of L1-norm is close to the prediction (Fig. 5(g)), resulting in a small loss, which indicates that the training is good enough. However, the learning objective of L2-norm always forces the prediction to be close to the GT point's location, leading to a large loss even though the prediction is close to 1 (Fig. 5(d)). This latter case is harmful to the counting task because it forces the prediction to move to an inappropriate position even when the count is well-estimated.\\n\\n---\\n\\n**Q3: Does PML suffer from slow convergence speed?**\\n\\n**A3:** PML has a fast convergence speed. In the following table, we present after how many epochs the MAE of HRNet on the validation set is smaller than a specific value:\\n\\n|MAE | < 100 | < 90 | < 85 | < 80 | < 75 |\\n| :---: | :-----: | :----: | :----: | :----: | :----: |\\n| L2 | 1610 | - | - | - | - |\\n| BL | 220 | 640 | 1880 | - | - |\\n| DMC | 200 | 420 | 1120 | - | - |\\n| GL | 130 | 280 | 970 | 2170 | - |\\n| PML | 40 | 80 | 200 | 410 | 1070 |\\n\\nComparing PML to GL, GL achieves an MAE smaller than 85 after 970 epochs, while PML only requires 200 epochs to reach it and achieves an MAE smaller than 75 after 1070 epochs. Besides, GL needs around 90 seconds for each epoch, but PML only requires around 40 seconds. 
These pieces of empirical evidence demonstrate that PML converges fast.\\n\\n---\\n\\n**Q4: Did the authors use different $\\\\epsilon$ for different datasets?**\\n\\n**A4:** No, we use the same $\\\\epsilon = 2$ for all experiments since the ablation study (Fig. 10(a)) shows that the best performance is achieved when it is set to 2 on the UCF-QNRF dataset.\"}", "{\"comment\": \"The authors have responded to the concerns I raised. I have no more additional questions. I will consider these during my final evaluation.\"}", "{\"comment\": \"Thanks for the response. Given that the rebuttal has addressed the concerns, I will retain my initial score.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"Thanks for the rebuttal. My concerns have been well addressed. I would like to keep my initial rating.\"}", "{\"title\": \"Looking forward to your responses or further suggestions/comments!\", \"comment\": \"Dear Reviewer fWss,\\n\\nThanks for all of your suggestions and sorry for the late reply. We just posted [our responses to your question](https://openreview.net/forum?id=7p8CcxP1Xc&noteId=Q31uPm4DM6). Please don\\u2019t hesitate to let us know if there are any additional clarifications or evidence we can offer, as we would love to convince you of the merits of the paper. Thanks!\\n\\n-the authors\"}", "{\"comment\": \"Thanks the authors for the responses. My most concerns have been treated carefully. I maintain my initial rating.\\n\\nOnly one remaining comment on the use of PML on SOTA models. In the rebuttal, the authors show that the PML loss outperforms the STEERER model with multi-res density output, which is good. Yet, since PML is considered as a generic loss, it should work compatible with most existing models. 
What I really want to see is whether PML can be INCORPORATED into SOTA models to further improve the performance and set a new SOTA.\"}", "{\"title\": \"Response to Reviewer cGtL -- part 3\", \"comment\": \"Thanks for your comments:\\n\\n**Q11: incorporate PML into SOTA models to further improve the performance**\\n\\n**A11:** We tried to incorporate PML into STEERER and conducted experiments on ShTech A/B; the results are as follows:\\n\\n| ShTech A | MAE | MSE |\\n| :-: | :-: | :--: |\\n| STEERER | 54.5 | 86.9 |\\n| STEERER + PML | **52.6** | **85.5** |\\n\\n| ShTech B | MAE | MSE |\\n| :-: | :--: | :--: |\\n| STEERER | 5.8 | 8.5 |\\n| STEERER + PML | **5.3** | **8.4** |\\n\\nThe performance of STEERER with PML is better than the one without PML, but we observed that the training process is unstable and converges slowly: STEERER + PML achieves an MAE smaller than 53 on ShTech A after 1150 training epochs, while the simplified version in our approach achieves a similar result in just 300 epochs.\\n\\nThis may be caused by the inherent loss in STEERER, which is specifically designed for a pixel-wise L2 loss between the prediction and stable training targets (a Gaussian kernel with fixed sigma (15 in STEERER)). However, PML is a pixel-to-point loss as shown in (24) and (14). The pixel-wise learning objective presented in (7) derived from PML is not fixed but dynamic, changing according to the current prediction. During density map selection in STEERER, this dynamic property makes it difficult to identify which density map resolution is closest to the ground truth, causing the selective inheritance learning in STEERER to fail. An effective and stable way to incorporate pixel-to-point losses like BL, GL, and our PML into the training framework to select the best density map from various resolutions may be future work to explore.\\n\\n\\nBesides, our ongoing work is to incorporate PML into the other SOTA, CrowdDiff. However, currently reproducing CrowdDiff is challenging. 
No one has yet been able to reproduce CrowdDiff's results, as listed in the issues at [Issues \\u00b7 dylran/crowddiff](https://github.com/dylran/crowddiff/issues).\"}", "{\"metareview\": \"This paper proposed a novel density regression loss for crowd counting and localization. The reviewers recognized the method's contributions and comprehensive analysis. Experimental results further validate the effectiveness of the proposed approach. The paper is well-structured. Moreover, it provided good theoretical foundations and achieved SOTA performance. The weakness of this paper is that the proposed loss is incorporated into only a few existing models. All reviewers rated the paper as marginally above the acceptance threshold, so I recommended it for acceptance.\", \"additional_comments_on_reviewer_discussion\": \"The concerns raised by four reviewers include missing references, insufficient ablation studies, lack of comparison with some SOTA methods, efficiency analysis, and robustness analysis to noisy annotations, etc. The authors provided comprehensive rebuttals, addressing most concerns of Reviewer cGtL and all the concerns of the other three reviewers. Finally, one reviewer raised his rating and the other three reviewers maintained their positive ratings.\"}", "{\"title\": \"Response to Reviewer fWss -- part 2\", \"comment\": \"Thanks for your suggestion. We revise the paper again and answer your newest comments here:\\n\\n**Q5: a quantitative comparison with existing methods (e.g., Fig. 4 in NoiseCC)**\\n\\n**A5:** The way to add noise in the last revised version follows that in BL, so we compare it with BL in the newest version (Fig. 4(a)). Besides, in the newest revised version, we replace Fig.4(b) with the comparison in which noise is added following NCC. NCC performs better than PML with the L2-norm, but PML with the L1-norm achieves lower estimation errors.\"}", "{\"comment\": \"Thanks for the response. 
The rebuttal has addressed most of the concerns. Regarding the robustness to noisy annotations, a quantitative comparison with existing methods (e.g., Fig. 4 in NoiseCC) is preferred.\"}", "{\"summary\": \"This paper proposes the Proximal Mapping Loss (PML), a novel density regression loss for crowd counting and localization. PML studies the so-called intersection hypothesis of classic density map and addresses the intersection issue by dividing the predicted density map into multiple patches through nearest neighbor analysis. It then constructs a dynamic learning target for each case, leading to more tractable and robust training. In addition, the paper also shows the theoretical connection between the PML loss and other closely related loss functions. Experiments were conducted to demonstrate the efficacy of PML in both crowd counting and localization.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"**Good motivation**. The paper targets an appropriate problem in crowd localization, featured by the intersection hypothesis. The problem sounds.\", \"**A novel loss**. To evade the sum property of density map, a new loss function is proposed to constrain the regression target by dividing the training sample into many point-neighbor cases and then computing sub-loss by comparing the difference between the prediction and a dynamic learning target defined via proximal mapping. According to Fig. 2, it does advance the prior Bayesian Loss and Generalized Loss in discriminating close individuals.\", \"**Theoretically sound**. The derivations in Sec. 3.2 show how the loss function works in a relatively intuitive way during training, and its connections with several loss functions and training schemes have been proven, which is theoretically sound.\", \"**SOTA performance**. 
Through evaluations on several benchmark datasets, PML is shown to outperform current state-of-the-art methods in general.\"], \"weaknesses\": [\"**Some closely related work is missing**. For example, the same problem of the intersection hypothesis has been discussed in [a]. While the solution differs, the way of addressing such an issue in [a] could be discussed and compared in Fig.1 as well.\", \"[a] Decoupled two-stage crowd counting and beyond, IEEE Transactions on Image Processing, 2021.\", \"**Inappropriate baselines**. Since the main focus of the work is the loss function, it should demonstrate its effectiveness on SOTA models instead of constructing a new one (the HRNet-based model). It would be difficult to interpret where the real improvement comes from.\", \"Some important experiments are missing, such that some results are not directly comparable. For example, the proposed approach should report the results of vgg-16bn as well.\", \"**A proof-of-concept experiment is missing**. The initial claim of the paper is to solve the intersection hypothesis problem, but it turns out to focus on how to better predict points. Comparative experiments to prove the soundness of the initial claim are lacking.\", \"The paper claims that PML is robust to label noise. Though visualization results are given to demonstrate its robustness, evidence to support this is lacking.\", \"Minor issues:\", \"Some cited results may be wrong. E.g., CSRNet in Table 2 uses vgg-16 rather than vgg-16bn.\"], \"questions\": [\"Please consider rephrasing the intersection hypothesis. It does not make sense to me that in reality one pixel would not contain two objects. If two objects occlude, one pixel can certainly contain part of the two objects.\", \"Further ablation experiments should be conducted to justify the effect of noise on the model and the effectiveness of PML in coping with it.\", \"The paper has demonstrated its work on removing the intersection hypothesis. 
The experiments, however, have shown nothing related to the hypothesis and mainly focus on the efficacy of PML. Could you explain through experiments or visualizations whether PML really addresses such an issue?\", \"Table 4 only shows plug-and-play results for MCNN and CSRNet; they are somewhat outdated. It is better to report the use of PML on recent SOTA models.\", \"It would be good to showcase how much PML can actually eliminate the intersection hypothesis. Can it be shown in a quantitative way?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N.A.\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
7orD38wzdi
Ego-centric Learning of Communicative World Models for Autonomous Driving
[ "Hang Wang", "Dechen Gao", "Qiaoyi Fang", "Junshan Zhang" ]
We study multi-agent reinforcement learning (MARL) for tasks in complex high-dimensional environments, such as autonomous driving. MARL is known to suffer from the *partial observability* and *non-stationarity* issues. To tackle these challenges, information sharing is often employed, which however faces major hurdles in practice, including overwhelming communication overhead and scalability concerns. Based on the key observation that world model encodes high-dimensional inputs to low-dimensional latent representation with a small memory footprint, we develop *CALL*, {C}ommunic{a}tive Wor{l}d Mode{l}, for ego-centric MARL, where 1) each agent first learns its world model that encodes its state and intention into low-dimensional latent representation which can be shared with other agents of interest via lightweight communication; and 2) each agent carries out ego-centric learning while exploiting lightweight information sharing to enrich her world model learning and improve prediction for better planning. We characterize the gain on the prediction accuracy from the information sharing and its impact on performance gap. Extensive experiments are carried out on the challenging local trajectory planning tasks in the CARLA platform to demonstrate the performance gains of using *CALL*.
[ "World Model", "Reinforcement Learning", "Autonomous Driving", "Distributed Learning" ]
Reject
https://openreview.net/pdf?id=7orD38wzdi
https://openreview.net/forum?id=7orD38wzdi
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zobhb9M0QY", "w0jTZYb8la", "tbkHfHjD0Z", "qKvyEWIIBy", "konHNuqCb9", "klLZNmqZ1R", "ip3h6B7xzS", "hprxuULl3i", "cMzCmgIIIg", "baJhY427PR", "WWFZJHUfU8", "VkBSoqWdUh", "TPvSq39jtD", "TLdXpnuw4c", "S5Rolx7j4k", "PeyPSnL05H", "NmLm14bbWd", "NT0RRnIKwo", "MwzIjCuw8G", "Lk3dpPzS5v", "H6gmlFR4Gd", "FitFbxOxmW", "9Xew1SMn9p", "8d5Qf5s7bx", "5qh7oz6vWX", "3EPVPuMvXf" ], "note_type": [ "official_comment", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment" ], "note_created": [ 1732518971166, 1730721670141, 1732060391040, 1737524176583, 1733093478414, 1732519497136, 1732059675172, 1734738023811, 1732733284701, 1730950416541, 1732059712107, 1732521009152, 1732701364256, 1733093323645, 1732060777501, 1732060731157, 1732524450369, 1732059909435, 1732734066611, 1732060435774, 1732060343941, 1730694931699, 1732520745466, 1732512052411, 1730045968007, 1732520326138 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12264/Reviewer_nMor" ], [ "ICLR.cc/2025/Conference/Submission12264/Reviewer_j41G" ], [ "ICLR.cc/2025/Conference/Submission12264/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12264/Authors" ], [ "ICLR.cc/2025/Conference/Submission12264/Reviewer_nMor" ], [ "ICLR.cc/2025/Conference/Submission12264/Authors" ], [ "ICLR.cc/2025/Conference/Submission12264/Area_Chair_tGJJ" ], [ "ICLR.cc/2025/Conference/Submission12264/Authors" ], [ "ICLR.cc/2025/Conference/Submission12264/Reviewer_BEZm" ], [ "ICLR.cc/2025/Conference/Submission12264/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission12264/Reviewer_nMor" ], [ "ICLR.cc/2025/Conference/Submission12264/Reviewer_j41G" ], [ "ICLR.cc/2025/Conference/Submission12264/Authors" ], [ "ICLR.cc/2025/Conference/Submission12264/Authors" ], [ "ICLR.cc/2025/Conference/Submission12264/Authors" ], [ "ICLR.cc/2025/Conference/Submission12264/Reviewer_nMor" ], [ "ICLR.cc/2025/Conference/Submission12264/Authors" ], [ "ICLR.cc/2025/Conference/Submission12264/Authors" ], [ "ICLR.cc/2025/Conference/Submission12264/Authors" ], [ "ICLR.cc/2025/Conference/Submission12264/Authors" ], [ "ICLR.cc/2025/Conference/Submission12264/Reviewer_DLBj" ], [ "ICLR.cc/2025/Conference/Submission12264/Reviewer_nMor" ], [ "ICLR.cc/2025/Conference/Submission12264/Authors" ], [ "ICLR.cc/2025/Conference/Submission12264/Reviewer_nMor" ], [ "ICLR.cc/2025/Conference/Submission12264/Reviewer_nMor" ] ], "structured_content_str": [ "{\"comment\": \"My question includes four aspects: observable scalability, testing adaptability, training transferability, and total quantity scalability. We know that there are three classic paradigms: centralized, decentralized, and shared. The centralized approach will inevitably have the problem of curse of dimensionality, while the decentralized approach of **establishing local models for each agent will inevitably lead to an increase in the number of model parameters as the number of agents increases.** Although shared manner can solve the above problems, it will face the disadvantage of homogenization and poor representation ability, which can easily lead to local optima. Moreover, in our opinions, the methods presented in the article did not bring any new insights, namely a very simplistic approach.\"}", "{\"summary\": \"The paper tries to tackle the issues of partial observability and non-stationarity in multi-agent reinforcement learning (MARL) tasks in complex environments by developing a decentralized and adaptive communication pipeline. 
Verbatim from the paper, the authors presented an approach to answer: \\u201cHow to synergize world model\\u2019s generalization capability with light-weight information sharing for enhancing ego-centric MARL in high-dimensional, non-stationary environments?\\u201d\\n\\nThe authors work on top of the Dreamer V3 architecture to learn a world model (WM) for compressing the sensor information and for learning the transition dynamics.\\nFor light-weight communication, the authors propose to use the compressed latent representations to predict the next transitions and also to share the other agent\\u2019s latent state and latent intentions (waypoints).\\n\\nThey call this framework CALL (Communicative World Model), where every agent shares its latent state and intention (encoded using a learned WM) and tries to improve its WM by adaptive and effective sharing of information. This adaptive sharing of relevant latent information helps minimize communication overhead and improves decision-making.\\nThe authors presented some theorems and propositions to manifest how the WM can improve the prediction performance and reduce the sub-optimality gap.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper tried to explain how prediction error can control the sub-optimality gap and presented an upper bound on both these entities.\", \"The paper did a good ablation study (both in terms of experiments and theoretically) to investigate the impact of insufficient information sharing, locally sufficient information sharing, latent information sharing and full observation sharing.\", \"The authors presented the numbers of how much the bandwidth can be improved using the latent representations for information sharing.\", \"The paper has experiments to show that WM generalization combined with information sharing is an effective component to improve prediction in distributed RL in high-dimensional environments.\"], \"weaknesses\": [\"The 
definitions, formulations, introduction of notations can be made a bit more coherent.\", \"Uncertainty related to information sharing was not discussed and taken care of in the paper and was mentioned as one of the problems in the introduction section as well.\", \"The authors also mention privacy-preserving techniques can also be integrated during information sharing to prevent any sensitive information being leaked.\"], \"questions\": [\"Fig 1: Where is a_{1, t} and a_{2, t} coming from? I assume we can also have the notation for the policy from which these actions are getting sampled from.\", \"Fig1: Can we have the symbol for encoder be replaced by the conventional trapezoid? The current symbol gives an impression of a combined enc-dec model.\", \"Line 314: Can we please define PPAD?\", \"Line 318: Is that extra closing bracket here T_{i,t}) a typo?\", \"Line 377: \\u2026.3D environment\\u2026..: missing space after environment.\", \"Algorithm 1: Line 384: How is the communication range updated? Is there any update formula for that?\", \"Nit: Line 1322: \\u201cLarge\\u201d could be made to \\u201clarge\\u201d?\", \"Can we call this similar/analogous to the unification of trajectory prediction and planning task where the next best action during planning is conditioned on the predicted trajectories of other agents; here, the latent intention (encoded waypoints) are provided as information that is used to predict the state dynamics?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply to Reviewer DLBj (2/3)\", \"comment\": \"**A5. Communication mechanism.** The communication mechanism operates with a carefully selected threshold c, which we determined through validation. Our implementation dynamically adjusts communication range based on prediction accuracy. 
When the k-step prediction error exceeds the threshold c, the range incrementally expands to include the next nearest neighbor until either the prediction improves or the maximum range is reached. The range contracts when predictions remain accurate, ensuring minimal but sufficient communication. We show that this approach achieves significant efficiency, requiring only 0.106MB average bandwidth compared to 5.417MB for full observation. In our revision, we add detailed quantitative results in Appendix E demonstrating how different thresholds affect performance and communication overhead.\\n\\n\\n **A6. Bounds in Theorem 1.** The upper bounds in Theorem 1 are tight for the case with two distributions for approximation errors. While deriving lower bounds remains challenging due to the non-trivial nature of RNN generalization bounds, this doesn't impact the practical utility of our results. The finite action space assumption follows standard practice in theoretical MARL analysis, and our implementation handles continuous actions through discretization, as demonstrated successfully in DreamerV3. We have clarified the impact of approximation errors on performance in Section 3.2.\\n\\n\\n **A7. Communication Complexity.** While deriving theoretically optimal communication strategies is highly non-trivial due to the complexity of multi-agent interactions, we provide a comprehensive empirical analysis of the accuracy-performance trade-offs. Our results demonstrate that CALL requires about 50 times less bandwidth than the centralized case while maintaining high performance. The revision includes a detailed quantitative analysis in **Appendix G** showing how different accuracy thresholds affect both system performance and communication overhead in autonomous driving scenarios.\\n\\n### Questions\\n\\n **A1. Prediction accuracy threshold.** The threshold c is a static hyperparameter that we determined through empirical validation. 
In our revision, we include comprehensive quantitative results in Appendix F demonstrating how different threshold values affect the trade-off between system performance and communication overhead. This analysis provides clear guidance for parameter selection in practical deployments.\\n\\n **A2. Communication delays.** While our current implementation assumes reliable communication, CALL's framework naturally accommodates extensions for handling delays and packet losses. The world model's prediction capability allows agents to continue operating even with temporary communication disruptions by relying on their learned dynamics models. In future work, we plan to incorporate explicit mechanisms for handling delays through prediction-based compensation and implement robust protocols for packet loss recovery.\\n\\n\\n **A3. Maximum number of agents.** CALL's communication protocol is inherently scalable because it's ego-centric and distributed. Each agent's observation space remains constant regardless of the total number of agents in the system, as it only depends on the agent's local sensing range. When an agent encounters others within its observation range, it dynamically decides whether to request information sharing based on prediction uncertainty. This approach doesn't require changes to model architecture or observation dimensions as the system scales, making it theoretically capable of handling any number of agents within bandwidth constraints.\\n\\n **A4. Bounds in Theorem 1.** The upper bound in Theorem 1 is tight, as it is achieved when equality holds in Assumptions 1 (Action and policy bound), 2 (weight matrices bound) and $\\\\mathbf{E}\\_{\\\\pi}[{D}\\_{\\\\operatorname{TV}}(P || \\\\hat{P})] = \\\\mathcal{E}\\_P$ (line 274). As can be seen in Eqn (10), once those equalities hold, the resulting upper bound derived in Theorem 1 will be achieved. 
While lower bounds can provide theoretical completeness, the primary focus in generalization error analysis literature, e.g., [R1,R2,R3,R4] is on establishing upper bounds, as these directly inform algorithm design and practical implementation.\\n\\n- [R1] Wu et al., \\\"Statistical machine learning in model predictive control of nonlinear processes\\\", Mathematics, 2021\\n\\n- [R2] Lim et al., \\\"Noisy recurrent neural networks\\\", NeurIPS 2021\\n\\n\\n- [R3] Tu et al. \\\"Understanding generalization in recurrent neural networks.\\\" ICLR, 2020.\\n\\n- [R4 ] Chen et al., \\\"On generalization bounds of a family of recurrent neural networks.\\\", 2019\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"A Gentle Reminder\", \"comment\": \"Dear Reviewer,\\n\\nAs the rebuttal period is nearing its conclusion, we would like to kindly follow up to see if there are any additional questions or areas that need further clarification. Thank you once again for your time and valuable feedback.\\n\\nAuthors\"}", "{\"comment\": \"In terms of comparative experiments, in fact, some papers are also required to supplement experiments on multi-agent navigation in non autonomous driving fields, which is even more unreasonable. In my opinion, experiments that only supplement relevant methods are reasonable and necessary. Although many benchmark algorithms are not developed in specific fields, they have undergone extensive testing in multi-agent environments, and appropriate and representative benchmark algorithms should be selected for comparison.\"}", "{\"title\": \"Reply to Reviewer BEZm (1/2)\", \"comment\": \"### Weakness\\nWe thank reviewer BEZm for your careful reading and feedback. We would like to clarify our experiments design in details as follows.\\n\\n**A1. Baseline Comparison.** We appreciate the reviewer\\u2019s suggestion regarding additional comparisons. 
Our choice of baselines was guided by several important considerations:\\n\\nFirst, we focused on world model-based approaches specifically designed for autonomous driving tasks, given the unique challenges of the high-dimensional CARLA environment. Many conventional RL approaches struggle with the curse of dimensionality in such settings without substantial modifications (as noted in Line 42). We chose the SOTA work published in 2024 [R1] on autonomous driving planning, which is based on DreamerV3, as our primary baseline (denoted as 'Local Obs.' in Figure 3(a)). Additionally, we included a variant without waypoint sharing (LSI) for ablation studies of the impact of our communication mechanism. To the best of our knowledge, CALL (this study) is the first multi-agent world model-based approach specifically designed for autonomous driving tasks.\\n\\nThe works in Table 1 either do not use a world model (and hence cannot effectively deal with high-dimensional inputs in CARLA), lack intention sharing (which is essential for planning), or require sharing all information among agents (and hence are impractical for large multi-agent systems as considered in our work). While the works by [R2, R3] are world model-based methods, they were developed for fundamentally different environments, i.e., the DeepMind Control Suite and the SMAC benchmark, respectively. Adapting these methods to CARLA's autonomous driving setting would require significant architectural modifications that could compromise their original design principles. 
For instance, both [R2,R3] do not have a dedicated module for intention processing, which is critical for autonomous driving to understand the potential actions of other agents in the environment.\\n\\nTo ensure a fair comparison, we believe it is more appropriate to compare against methods specifically designed for similar autonomous driving scenarios, and in this case, the Think2Drive (DreamerV3-based) approach is the SOTA for planning in the CARLA benchmark. To our knowledge, CALL is the first multi-agent world-model-based approach specifically designed for autonomous driving tasks. In our revision, we further articulate these benchmark selection criteria (ref. Section 4). \\n\\n- [R1] Li et al., \\\"Think2drive: Efficient reinforcement learning by thinking in latent world model for quasi-realistic autonomous driving (in carla-v2).\\\", 2024\\n\\n- [R2] Pan et al., \\\"Iso-dream: Isolating and leveraging noncontrollable visual dynamics in world models.\\\", NeurIPS 2022\\n\\n- [R3] Liu et al., \\\"Efficient multi-agent reinforcement learning by planning\\\", ICLR 2024\\n\\n**A2. Method Superiority.** Our empirical results demonstrate CALL's effectiveness through several key metrics: \\n- (1) **Lightweight Information Sharing**. CALL uses well-crafted information sharing, which directly contributes to significantly reduced communication overhead compared with the centralized settings in [R2,R3] (about 50 times less bandwidth in 230-agent scenarios, ref. Figure 5). \\n- (2) **Improved Generalization**. CALL leverages the received information (i.e., latent state and latent intention) and the world model's generalization capability to achieve improved prediction accuracy when compared with the SOTA method Think2Drive [R1] (ref. Figure 3 and Figure 13). Furthermore, in Figures 2 and 11, we show that CALL achieves better planning performance (in terms of the average return) thanks to the improved prediction accuracy. \\n- (3) **Scalability**. 
CALL is an ego-centric distributed learning algorithm, where each agent trains its own world model for planning. We show that such an approach can achieve stable performance scaling from 150 to 230 vehicles while maintaining per-agent efficiency in Figures 3 and 13.\"}", "{\"metareview\": \"This paper proposed an ego-centric MARL method for autonomous driving, CALL. In this method, agents share their latent states and intentions encoded using a learned world model. They then improve their world models by sharing information. This method addresses the important problem of partial observability of agents in multi-agent systems. While the method is interesting, all reviewers raised concerns about the lack of evaluation:\\n\\n1) The experiments were only done on CARLA.\\n2) There is no comparison against SOTA MARL methods.\\n\\nThe AC thinks that 1) is fine for a paper that only focuses on autonomous driving. However, the explanation provided by the authors about why there are no other SOTA MARL baselines is not sufficiently convincing. Even though they are not originally designed for autonomous driving, it doesn't mean that comparison is not possible or meaningful. The paper can benefit from further evaluation.\", \"additional_comments_on_reviewer_discussion\": \"Two reviewers provided additional justification for their final recommendations of reject during the reviewer discussion. The concern about evaluation remains after the rebuttal and the AC-reviewer discussion.\"}", "{\"title\": \"Reply to Reviewer nMor's Follow-up Questions (1/2)\", \"comment\": \"We thank the reviewer for engaging in the discussion. Below are our detailed responses to each concern:\\n\\n**A1. (Ego-centric MARL)**\\n\\nWe first clarify that our work focuses on **ego-centric MARL** (title and line 38) [R1,R2], where each agent chooses actions to maximize her *own interest*. 
Grounded in our key results in Theorem 1 and Proposition 1, our adaptive communication mechanism guides agents to receive critical information when needed in order to minimize the sub-optimality gap. Moreover, theoretically, shared parameters in neural networks used for MARL can converge to globally optimal solutions [R3,R4] under certain conditions, and such schemes have been widely used in practice. In CALL, each agent has its own local world model, which processes agent-specific observations and intentions, ensuring heterogeneous behavior despite shared parameters. \\n\\n**(New Insights from CALL)** CALL provides both a **theoretical understanding** of MARL and a **practical solution** for autonomous driving applications. In particular, \\n\\n- **World Model based MARL**. We propose CALL to demonstrate the significant benefits of integrating world models into MARL for autonomous driving applications. As stated in lines 42-53, the generalization capability of the WM is especially desired for applications like autonomous driving, where the ego vehicle needs to interact within intricate dynamics. Meanwhile, as stated in lines 57-71, the usage of the WM can also overcome the notorious challenges of dimensionality and communication overhead induced in previous works. \\n\\n- **Prediction-guided lightweight information sharing**. This adaptive approach optimizes the critical trade-off between communication overhead and performance, ensuring robust coordination while minimizing bandwidth usage. Furthermore, CALL implements efficient communication by directly sharing critical information such as intentions and latent states, eliminating the need for complex communication modules. This lightweight approach maintains low computational overhead, making it particularly valuable for resource-constrained autonomous systems.\\n\\n- **Challenging evaluation domains**. 
We validate CALL in CARLA, a high-fidelity autonomous driving simulator, demonstrating its effectiveness under realistic conditions including complex traffic dynamics, uncertain agent behaviors, and real-world driving rules and constraints. Our experimental results show significant improvements over existing methods while maintaining practical deployability. The validation in CARLA is particularly meaningful as it subjects our method to strict safety and efficiency requirements that mirror real-world autonomous driving challenges. Through these comprehensive evaluations, we demonstrate that CALL not only advances the theoretical understanding of MARL but also provides a practical solution for autonomous driving applications.\\n\\n- [R1] Asuman et al., \\\"Independent learning in stochastic games.\\\", arXiv preprint arXiv:2111.11743, 2021\\n\\n- [R2] Zhang et al., \\\"Coordinating multi-agent reinforcement learning with limited communication.\\\", AAMAS 2013\\n\\n- [R3] Agazzi et al., \\\"Global Optimality of Softmax Policy Gradient With Single Hidden Layer Neural Networks in the Mean-Field Regime.\\\", ICLR 2021\\n\\n- [R4] Hu et al., \\\"Scalable multi-agent reinforcement learning for dynamic coordinated multipoint clustering.\\\", ToC 2022\\n\\n**A2. Experiments.**\\n\\n(Baseline Choice) We clarify that our work specifically targets **autonomous driving**, where the complexity of road rules, physical constraints, and safety requirements creates unique challenges distinct from general multi-agent environments. We deliberately chose Think2Drive as our primary baseline as it represents the current state-of-the-art in autonomous driving planning and has been extensively validated in the CARLA platform. The CARLA simulator is the standard testing environment in autonomous driving research, offering realistic physics, complex traffic scenarios, and standardized metrics. 
While there are many multi-agent algorithms in other domains, they typically lack the specialized components needed for handling structured road environments, traffic rules, and vehicle dynamics. This makes direct comparisons less meaningful for our specific use case. \\n\\nMeanwhile, we would be very interested in learning about specific benchmarks or model-based methods that the reviewer has in mind that have demonstrated successful validation in CARLA while maintaining comparable functionality to our method.\\n\\n**A3. Learning curve.**\\n\\nWe provide a curve visualization of how the communication range changes when the prediction error exceeds a threshold in Appendix I of our revision. We provide the comparison among three communication settings (CALL, local observation only, and full observation) and two scales (150 agents and 250 agents).\"}", "{\"summary\": \"This paper proposes incorporating the world model into the multi-agent communication protocol to address the partial observability and non-stationarity issues in multi-agent reinforcement learning (MARL). In particular, this paper provides theoretical justification for how partial observability and non-stationarity can affect the world model performance. It derives the prediction-accuracy-driven information-sharing strategy for better communication. The paper shows the performance of its method in the autonomous driving setting.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is clear and well-written. It is very easy for me to follow.\", \"I like the theoretical analysis and how it can inform a better communication strategy.\", \"It provides the experimental results on how the method can help to solve the two issues in MARL.\"], \"weaknesses\": [\"This paper needs a comparison with other baselines (see Table 2) in your experiment result. Why is your proposed method superior to others? 
It is insufficient to show that you have specific components that others don't. The empirical results are essential to show that the method is effective.\", \"The experiment setting is limited. How would this method perform in other experiment settings besides CARLA?\"], \"questions\": \"I suggest having a deeper analysis of your information-sharing mechanism in your experiment. I am curious: if your information-sharing mechanism is replaced by another mechanism, how would the performance change?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply to Reviewer BEZm (2/2)\", \"comment\": \"**A3. Experiments beyond CARLA.** We thank the reviewer for the suggestion on broader experimental settings, and we would like to clarify our choice of CARLA as our primary testing environment.\\n\\nCARLA presents substantially more challenging scenarios compared to traditional multi-agent benchmarks like the DeepMind Control Suite and SMAC, particularly due to its realistic vehicle dynamics and multi-agent interactions that follow traffic rules and safety protocols. Meanwhile, planning in CARLA generally requires longer prediction horizons (3-5 seconds ahead) than other benchmarks. \\n\\nWhile CALL's core principles of distributed learning, prediction-driven communication, and ego-centric world models are indeed applicable to other multi-agent scenarios, we chose autonomous driving as our primary test case due to its compelling combination of real-world significance and rigorous requirements for safety, efficiency, and scalability. The successful demonstration of CALL in this challenging environment provides strong evidence for its potential effectiveness in other multi-agent settings.\\n\\n### Questions\\n**A1. 
(information-sharing mechanism)** We thank the reviewer for the suggestion on emphasizing the impact of the information-sharing mechanism. In our experiments, we conducted comprehensive comparisons between different communication strategies to demonstrate the effectiveness of our prediction-accuracy-driven approach: \\n\\n**Case I.** No Communication: Agents rely solely on their local observations, which corresponds to the case where the communication range is zero (or the accuracy threshold is large enough). \\n\\n**Case II.** Full Communication: Agents share information with all neighbors within range, which corresponds to the case with a large communication range (or a prediction accuracy threshold of zero). \\n\\n**Case III.** Our Proposed Method: Selective communication based on prediction accuracy (with a finite prediction accuracy threshold). \\n\\nAs can be seen in Figure 2(c) and Figure 9(b), our proposed method can (1) achieve better prediction and planning compared with sharing no information and (2) achieve the same prediction performance more efficiently than the full information sharing case, with far lower bandwidth requirements. \\n\\nFurthermore, our detailed analysis reveals a critical trade-off between prediction accuracy requirements and communication efficiency. Setting higher prediction accuracy requirements (a lower error threshold) improves learning performance through more precise state estimation, but increases communication overhead through a larger communication range. Conversely, lower accuracy requirements reduce the communication burden but can significantly degrade performance due to incomplete state information, particularly in complex scenarios requiring precise coordination. Through extensive experiments, we identified an operating point that balances these competing factors, achieving robust performance while maintaining efficient communication. 
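To make the mechanism concrete, the three cases above reduce to a single threshold rule on each agent's recent prediction error. A minimal illustrative sketch follows (the 5-meter step and threshold $c$ follow Section 3.3; the function name, range limits, and symmetric contraction step are our illustrative assumptions, not the exact implementation):

```python
def update_comm_range(comm_range, pred_errors, c, step=5.0, r_min=0.0, r_max=50.0):
    """Adjust an agent's communication range from its recent K prediction errors.

    comm_range  -- current communication range in meters
    pred_errors -- prediction errors over the past K time-steps
    c           -- prediction-accuracy threshold
    """
    avg_error = sum(pred_errors) / len(pred_errors)
    if avg_error > c:
        # Predictions too inaccurate: expand the range to pull in more neighbors.
        return min(comm_range + step, r_max)
    # Predictions accurate: contract the range to save bandwidth.
    return max(comm_range - step, r_min)


# Case I  (no communication):   a very large c keeps the range at zero.
# Case II (full communication): c = 0 drives the range up to r_max.
# Case III (ours):              a finite c adapts the range to prediction accuracy.
```

Choosing `c` thus interpolates between Cases I and II, which is exactly the accuracy/bandwidth trade-off quantified above.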
In our revision, the quantitative results in **Appendix G** demonstrate this trade-off, showing how different accuracy thresholds affect both system performance and communication overhead in autonomous driving scenarios.\\n\\nWe hope that we have clarified these very helpful comments. We would be glad to engage in discussion if there are any other questions or parts that need further clarification.\"}", "{\"comment\": \"And how does it perform compared to other shared and decentralized methods?\"}", "{\"comment\": \"Thanks to the authors for addressing the comments.\\n\\nI went through the comments and discussions from other reviewers as well, and it seems that the authors have tried to carefully address their concerns too. \\n\\nThe authors were able to explain their views regarding some common concerns related to the choice of benchmarks, communication range, and scalability of the multi-agent framework.\\n\\nIt would be nice if the addressed comments could be mentioned in the final version of the paper. \\nI appreciate the authors' efforts in this direction of combining world models and adaptive communication to tackle the problems with MARL. I would like to maintain my score considering the limitations pointed out by some other reviewers. \\nBest wishes! :)\"}", "{\"title\": \"Gentle Reminder\", \"comment\": \"Dear Reviewer,\\n\\nAs the rebuttal period is nearing its conclusion, we would like to kindly follow up to see if there are any additional questions or areas that need further clarification. Thank you once again for your time and valuable feedback.\\n\\nAuthors\"}", "{\"title\": \"Reply to Reviewer nMor (2/2)\", \"comment\": \"**A3. Latent Intention**\\n\\nIntention in autonomous driving refers to the planned trajectory or waypoints that the vehicle aims to follow in the near future [R5,R6]. In our framework, latent intentions are derived from waypoint planning, as mentioned in line 410. 
Specifically, we leverage CARLA's API for waypoint planning, following established practices in the autonomous driving literature. While equations 11-13 appear to omit explicit references to intentions and information sharing, these elements are actually encoded within the latent state $x$ for notational simplicity. We will revise the manuscript to explicitly articulate this relationship and provide a more comprehensive description of how latent intentions are learned and utilized.\\n\\n- [R5] Casas et al., \\\"Intentnet: Learning to predict intention from raw sensor data.\\\", Conference on Robot Learning (PMLR), 2018\\n\\n- [R6] Claussmann et al., \\\"A review of motion planning for highway autonomous driving.\\\", IEEE Transactions on Intelligent Transportation Systems, 2019\\n\\n**A4. Comparative Experiments**\\n\\nWe appreciate the reviewer's suggestion regarding additional comparisons. Our choice of baselines was guided by several important considerations:\\n\\nFirst, we focused on world-model-based approaches specifically designed for autonomous driving tasks, given the unique challenges of the high-dimensional CARLA environment. Many conventional RL approaches struggle with the curse of dimensionality in such settings without substantial modifications (as noted in Line 42). We used DreamerV3 as our primary baseline (denoted as 'Local Obs.' in Figure 3(a)) as it represents the state-of-the-art in world-model-based RL. Additionally, we included a variant without waypoint sharing (LSI) to isolate the impact of our communication mechanism.\\n\\nWhile the works by [R1, R2] are world-model-based methods, they were developed for fundamentally different environments, i.e., the DeepMind Control Suite and the SMAC benchmark, respectively. Adapting these methods to CARLA's autonomous driving setting would require significant architectural modifications that could compromise their original design principles. 
For instance, both [R1,R2] do not have a dedicated module for intention processing, which is critical for autonomous driving to understand the potential actions of other agents in the environment (and to mitigate the non-stationarity).\\n\\nTo ensure a fair comparison, we believe it is more appropriate to compare against methods specifically designed for similar autonomous driving scenarios, and in this case, the Think2Drive (DreamerV3-based) approach is the SOTA for planning in the CARLA benchmark. To our knowledge, CALL represents the first multi-agent world-model-based approach specifically designed for autonomous driving tasks. We will revise the manuscript to better articulate these benchmark selection criteria.\\n\\n**A5. Experimental Verification of Accumulative Error in Multi-step Prediction**\\n\\nWe thank the reviewer for this constructive suggestion regarding multi-step prediction analysis. While our current results focus on demonstrating the impact of information sharing on prediction error, we indeed provide multi-step prediction visualizations in Figures 3, 13, 14, 15, and 16 in Appendix E.4. In particular, we show the relationship between prediction accuracy and information (e.g., local information only) in Figure 13. We further provide the quantification results in our revision (**Appendix E.4**).\\n\\n**A6. Prediction Accuracy-Driven Information Sharing**\\n\\nWe appreciate the opportunity to clarify our adaptive information sharing mechanism. As detailed in Section 3.3, CALL implements a dynamic communication scheme based on prediction accuracy:\\n\\n- Step 1: Each agent continuously monitors its prediction performance by comparing predicted latent states and intentions against actual observations over the past $K$ time-steps.\\n\\n- Step 2: When prediction errors exceed a threshold $c$, the agent automatically increases its communication range by 5 meters and initiates selective information exchange with relevant neighboring agents. 
Otherwise, the agent will maintain its current communication range for information exchange.\\n\\nThis adaptive approach ensures that communication remains both minimal and targeted: agents only share information when, and with whom, it is most crucial for accurate prediction. We enhanced Section 3.3 with additional details and concrete examples to better illustrate this mechanism in our revision.\\n\\n---\\n\\nWe hope that we have clarified these very helpful comments. We would be glad to engage in discussion if there are any other questions or parts that need further clarification.\"}", "{\"title\": \"Reply to Reviewer nMor (1/2)\", \"comment\": \"We thank reviewer nMor for your review and feedback. We address the concerns raised as follows.\\n\\n### Weakness\\n**A1. Description of Literature**\\n\\nWe appreciate the reviewer's careful attention to this point, which allows us to clarify the key distinctions of our work:\\n\\n1) Our statement in the original submission aimed to highlight differences in *communication architectures* between the existing work and CALL (this paper), rather than in latent representations. While [R1,R2] indeed operate in latent space (as is the case in CALL), they employ centralized or static communication schemes where information is shared among *all agents*. In contrast, CALL introduces a dynamic, prediction-driven communication mechanism that selectively activates information sharing for distributed (ego-centric) learning.\\n\\n2) The comparison is conducted among world-model-based approaches (as in our statement \\\"recent works on WM-based RL...\\\"); the use of latent representations is a key technique shared across recent world-model-based methods, including the cited works and CALL. However, the communication overhead of the approaches in the cited works scales significantly with network size due to their communication architecture. 
For instance, in our experimental evaluation with 230 agents (Appendix E), gathering all agents' latent information for centralized training requires nearly 50 times more bandwidth than CALL's selective communication approach (as demonstrated in Figure 5). This substantial reduction in communication overhead, while maintaining performance, is a key contribution of our work.\\n\\nWe have revised the manuscript to better articulate this distinction between latent representation and communication architecture, ensuring our contribution is more precisely positioned within the literature.\\n\\n- [R1] Pan et al., \\\"Iso-dream: Isolating and leveraging noncontrollable visual dynamics in world models.\\\", NeurIPS 2022\\n\\n- [R2] Liu et al., \\\"Efficient multi-agent reinforcement learning by planning\\\", ICLR 2024\\n\\n**A2. Scalability and Adaptability** We thank the reviewer for your careful consideration of scalability and adaptability concerns, which gives us an opportunity to better articulate the distributed nature of our approach. In fact, scalability and adaptability are two distinct advantages of CALL (this work) that distinguish the proposed algorithm from existing centralized or static mechanisms.\\n\\nWe clarify CALL's fundamental design principles as follows, which directly address the concerns.\\n\\nWe first clarify that CALL is inherently a distributed learning framework where each agent trains *independently with its own local world model*, which means there is *no need* to determine the total number of agents beforehand. In particular, as is standard [R3,R4], during the training stage, agents obtain a world model by leveraging privileged information such as BEV. As stated in lines 183 and 207, during testing, each agent leverages its own world model and the shared information for planning, which again does not need any predefined parameters on the number of agents. 
\\n\\nSecond, we clarify that, unlike [R1,R2], changes in the number of agents will NOT result in a scalability issue. Specifically, CALL is an ego-centric distributed learning method based on a learned world model of the local environment, which depends only on the agents in proximity and hence does not have a scalability issue. In particular, in our implementation, the shared information is first fused into a unified BEV (with the same mask) for planning, and hence the agent does not need any change to its model architecture to cope with an increase in the number of agents in the environment.\\n\\nThird, regarding computational complexity, our distributed approach actually offers superior scalability compared to centralized methods, as expected. Since each agent trains and operates independently, the computational load is naturally distributed across agents, which is fundamentally different from centralized approaches [R1, R2]. Moreover, each agent only processes observations in its proximity, making the per-agent computation efficient.\\n\\n- [R3] Li et al., \\\"Think2drive: Efficient reinforcement learning by thinking in latent world model for quasi-realistic autonomous driving (in carla-v2).\\\", 2024\\n\\n- [R4] Chen et al., \\\"Learning to drive from a world on rails.\\\", ICCV 2021\"}", "{\"comment\": \"The accumulative prediction error is best reflected intuitively through the curves.\"}", "{\"title\": \"Reply to Reviewer j41G\", \"comment\": \"### Weakness\\nWe appreciate reviewer j41G for these thoughtful comments that help improve our manuscript's clarity and completeness. We address your concerns in our revision, and our clarifications are outlined as follows.\\n\\n**A1. Notation.** In our revision, we introduce a consolidated notation section that clearly defines all variables and their relationships for clarity in Appendix B, Table 3. 
Specifically, we use consistent notation across sections, particularly in the transition from the world model formulation to the information sharing mechanism, making the mathematical flow more intuitive and easier to follow.\\n\\n**A2. Uncertainty.** We thank the reviewer for highlighting this important aspect of uncertainty during communication. We first clarify that in the introduction (line 044), the uncertainty refers to the agent's prediction error on other agents' states and action intentions, and such uncertainty can be mitigated through information sharing among agents. To this end, our current framework handles uncertainty through prediction-accuracy-driven communication (where agents share information when the prediction uncertainty exceeds a threshold). To avoid confusion, we formally give the definition of uncertainty in the context of prediction error (Section 3.1).\\n\\n**A3. Privacy.** We agree that privacy preservation could be a very important future direction. Our method's selective communication mechanism naturally provides a foundation for privacy preservation, as agents only share compact latent representations rather than raw sensory data. In the revision, we elaborate on how specific privacy-preserving techniques (such as differential privacy or secure multi-party computation) could be integrated into our framework without compromising its core functionality (Section 5). We believe a full implementation of these techniques warrants dedicated future work to properly address the complex trade-offs between privacy, performance, and computational efficiency.\\n\\n### Questions\\n\\n**A1. Fig 1 notation $a$.** The action is sampled using the agent's current policy (ref. line 179).\\n\\n**A2. Fig 1 encoder.** We thank the reviewer for pointing this out, and we updated it in our revision.\\n\\n**A3. 
PPAD.** PPAD (Polynomial Parity Arguments on Directed graphs) is formally defined through the 'End of the Line' problem on directed graphs, where each node has at most one predecessor and one successor. PPAD is believed to be harder than P but easier than NP-complete. We include the definition in our revision.\\n\\n**A4. line 318.** Yes, we corrected the typo in our revision.\\n\\n**A5. line 377** Thank you for pointing it out, and we corrected it in our revision.\\n\\n**A6. Algorithm 1** The communication range is dynamically adjusted based on the agent's prediction accuracy. When the k-step prediction error exceeds threshold $c$, the range incrementally expands to include the next nearest neighbor until either the prediction improves or the maximum range is reached. The range contracts when predictions remain accurate, ensuring minimal but sufficient communication for reliable performance. In our implementation, the distances between vehicles are known, and the expansion of the communication range is realized by including the next nearest vehicle for information inquiry.\\n\\n**A7. line 1322** Thank you for pointing it out, and we corrected it in our revision.\\n\\n**A8. Analogy to trajectory prediction** Yes, the analogy does make sense. The condition is indeed on the information shared by other agents.\\n\\nWe hope that we have clarified these very helpful comments. We would be glad to engage in discussion if there are any other questions or parts that need further clarification.\"}", "{\"title\": \"Reply to Reviewer nMor's Follow-up Questions (2/2)\", \"comment\": \"**A4. Where intention comes from?**\\n\\n(Intention) The intentions (waypoints) are generated through CARLA's built-in waypoint system, which provides a structured representation of the road network. 
The initial intention is extracted from the current state and goal using an MLP encoder. Continuous updates are based on:\\n\\n- Current observation (location)\\n\\n- Local planning objectives (e.g., destination)\\n\\nThe intention is updated every few timesteps to reflect changing dynamics and goals. The specific code is open-sourced and available at [R3].\\n\\n- [R3] https://carla.readthedocs.io/en/latest/core_map/#waypoints\\n\\n**A5. Comparison with other methods**\\n\\nThank you for the question regarding comparative evaluation. Our work focuses specifically on applying ego-centric MARL to autonomous driving, a domain with *well-defined technical requirements and established testing environments*.\\n\\nOur comparative analysis operates on two levels. \\n\\n- First, as shown in Table 2, we examine existing shared and decentralized methods in the literature. While these methods typically focus on general control or gaming tasks, adapting them to autonomous driving scenarios would require *substantial modifications* to handle high-dimensional visual inputs and address the non-stationary behavior inherent in multi-agent driving scenarios. The key distinction is that our method incorporates essential components specifically designed for autonomous driving, notably the *latent world model* for processing high-dimensional sensory inputs and *intention communication* for handling non-stationary agent behaviors.\\n\\n- Second, we conduct extensive empirical evaluation within the CARLA simulator, which provides an established testing environment with well-defined technical requirements for autonomous driving. We benchmark against Think2Drive (based on DreamerV3), a state-of-the-art autonomous driving algorithm, across three different information sharing mechanisms: local observation only, shared state, and shared state with intention. 
\\n\\nWe believe our domain-specific evaluation addresses the core technical challenges of autonomous driving while validating our method's effectiveness through comprehensive comparisons against state-of-the-art baselines in this domain and different information sharing mechanisms in standardized autonomous driving environments.\\n\\n**A6. Accumulative prediction error.**\\n\\nWe added the prediction error curves in *Appendix H* in the revision, where we plot both the accumulated error and the single-step prediction error for three communication settings: CALL, local observation only, and full observation.\\n\\nWe hope our responses have sufficiently addressed the issues raised. If there are any remaining concerns requiring further clarification, we would be happy to provide additional explanations or make further adjustments as needed. Thank you again for your time and effort.\"}", "{\"title\": \"Reply to Reviewer DLBj (3/3)\", \"comment\": \"**A5. Finite action space.** In practice, continuous action spaces can be effectively handled through discretization, following the successful approach demonstrated in DreamerV3. Our implementation discretizes the steering and acceleration actions while maintaining sufficient granularity for smooth control. This approach has proven effective in our CARLA experiments, achieving precise vehicle control without compromising performance. The theoretical results extend naturally to the discretized setting, making the finite action space assumption practical rather than limiting.\\n\\n**A6. Heterogeneous agents** Yes, our theoretical framework can be extended to heterogeneous agents with additional assumptions. The key requirement would be that all agents' latent representations can be projected into a common latent space, allowing for meaningful information sharing across different agent types. 
This extension would require additional theoretical machinery to handle the mapping between different latent spaces, but the core principles of our analysis remain applicable. \\n\\n\\n **A7. Standard MARL benchmark** Please refer to our response in Weakness A1.\\n\\n\\n\\n **A8. Baselines** Traditional MARL methods like MADDPG and QMIX face fundamental limitations in the CARLA environment. MADDPG struggles with high-dimensional visual inputs and doesn't scale well to large numbers of agents. QMIX, while effective for value function factorization, lacks mechanisms for handling intention sharing and requires centralized training. Adapting these methods to CARLA would require significant modifications that would compromise their original design principles.\\n\\n **A9. Scalability limit** The 250-vehicle scenario is not a fundamental limit but rather a practical demonstration point. CALL's ego-centric distributed learning approach offers superior scalability compared to centralized methods. Since each agent trains and operates independently, the computational load scales linearly with the number of agents, unlike centralized approaches where complexity often scales exponentially. Each agent only processes its local observations and selective communications, making per-agent computation efficient and bounded regardless of the total system size.\\n\\n **A10. Computational requirements** Each agent requires approximately 12 hours of training on an A100 GPU, comparable to single-agent training requirements. This efficiency stems from our distributed approach, where each agent trains independently, and the computational load is naturally distributed. While the total number of parameters scales linearly with the number of agents, this is fundamentally different from centralized approaches where complexity often scales exponentially. 
The per-agent computation remains constant and bounded, making the approach practical for large-scale deployments.\\n\\n\\n\\nWe hope that we clarified these very helpful comments. We would be glad to engage in discussion if there are any other questions or parts that need further clarifications.\"}", "{\"title\": \"Reply to Reviewer DLBj (1/3)\", \"comment\": \"We thank the reviewer BEZm for your careful reading and constructive feedback. We address the concerns raised as follows.\\n\\n### Weakness\\n **A1. Testing Environment.** Our work mainly focus on the planning tasks of autonomous driving in the multi-agent environment. In this regard, our choice of CARLA as the primary testing environment is justified by its wide adoption in the research community and real-world relevance. CARLA presents substantially more challenging scenarios compared to traditional MARL benchmarks, featuring realistic vehicle dynamics, complex multi-agent interactions, and strict traffic safety protocols. Most importantly, CARLA requires longer-horizon planning (3-5 seconds ahead) compared to shorter planning horizons in other benchmarks. While CALL's core principles are indeed applicable to other multi-agent scenarios, the success in this challenging environment provides strong evidence for its effectiveness in other simpler settings.\\n\\n **A2. Baselines** We appreciate the reviewer\\u2019s suggestion regarding additional comparisons. Our choice of baselines was guided by several important considerations:\\n\\nFirst, we focused on world model-based approaches specifically designed for autonomous driving tasks, given the unique challenges of the high-dimensional CARLA environment. Many conventional RL approaches struggle with the curse of dimensionality in such settings without substantial modifications (as noted in Line 42). We choose the SOTA work [R1] on autonomous driving planning, which is based on DreamerV3, as our primary baseline (denoted as 'Local Obs.' in Figure 3(a)). 
Additionally, we included a variant without waypoint sharing (LSI) to isolate the impact of our communication mechanism.\\n\\nWorks such as QMIX, MADDPG, or other communication-based approaches either do not use a world model (hence cannot effectively deal with high-dimensional inputs in CARLA), lack intention sharing (which is essential for planning), or require sharing all information among agents (hence are impractical for the large multi-agent systems considered in our work). While the works by [R2, R3] are world-model-based methods, they were developed for fundamentally different environments, i.e., the DeepMind Control Suite and SMAC benchmark respectively. Adapting these methods to CARLA's autonomous driving setting would require significant architectural modifications that could compromise their original design principles. For instance, both [R1, R2] do not have a dedicated module for intention processing, which is critical for autonomous driving to understand the potential actions of other agents in the environment.\\n\\nTo ensure fair comparison, we believe it is more appropriate to compare against methods specifically designed for similar autonomous driving scenarios, and in this case, the Think2Drive (DreamerV3-based) approach is the SOTA for planning on the CARLA benchmark. To our knowledge, CALL represents the first multi-agent world-model-based approach specifically designed for autonomous driving tasks. In our revision, we further articulate these benchmark selection criteria (ref. Section 4).\\n\\n\\n- [R1] Li et al. \\\"Think2drive: Efficient reinforcement learning by thinking in latent world model for quasi-realistic autonomous driving (in carla-v2).\\\", 2024\\n\\n- [R2] Pan et al., \\\"Iso-dream: Isolating and leveraging noncontrollable visual dynamics in world models.\\\", NeurIPS 2022\\n\\n- [R3] Liu et al., \\\"Efficient multi-agent reinforcement learning by planning\\\", ICLR 2024\\n\\n **A3. 
Ablation studies.** Our ablation studies were specifically designed to validate our key technical contributions (comparing with baseline SOTA method) and theoretical insights. They systematically evaluate the impact of state sharing on partial observability and intention sharing on non-stationarity, directly addressing our main technical claims. The studies demonstrate the effectiveness of our lightweight information sharing approach and validate our theoretical predictions about prediction accuracy and communication efficiency.\\n\\n **A4. World model training.** In Appendix E, we provide comprehensive information about the training process, including the curriculum design, handling of multi-modal sensory inputs (camera, LiDAR) through BEV representation, and network architecture specifications. We also include analysis of failure cases and their implications for real-world deployment.\"}", "{\"summary\": \"This paper presents CALL (Communicative World Model), a framework for addressing partial observability and non-stationarity challenges in multi-agent reinforcement learning (MARL) in high-dimensional environments. The key innovation lies in synergizing world models' generalization capabilities with lightweight information sharing. Each agent encodes its state and intentions into low-dimensional latent representations and selectively shares them with other agents based on prediction accuracy. The approach is theoretically analyzed and validated on the CARLA autonomous driving platform.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper makes several significant contributions to multi-agent reinforcement learning. The core innovation of synergizing world models with lightweight communication is both novel and practical, addressing two fundamental challenges in MARL: partial observability and non-stationarity. 
The technical approach is well-grounded in theory, with Theorem 1 providing a detailed analysis of prediction error structure and Proposition 1 establishing bounds on sub-optimality gaps. The implementation is particularly impressive, demonstrating substantial performance improvements in the CARLA autonomous driving platform - reducing communication bandwidth from 5MB to 0.11MB while increasing success rate from 52% to 87% (Table 5). The ablation studies thoroughly validate each component's contribution, and the method shows good generalization to unseen environments. The practical relevance of the work is clear, with the framework successfully handling realistic scenarios involving up to 250 vehicles while maintaining reasonable computational requirements.\", \"weaknesses\": \"1. The experimental validation is confined to autonomous driving scenarios, raising questions about generalizability\\n2. Lack of comparisons with state-of-the-art MARL methods is a significant omission. Specifically: No comparison with recent methods like QMIX, MADDPG, or other communication-based approaches; No evaluation on standard MARL benchmarks; Missing comparison with other world model-based methods\\n3. The ablation studies, while thorough for the proposed components, don't explore alternative design choices\\n4. The world model training process is not fully described in the main text: 1) Missing details about the training curriculum; 2) Unclear how the model handles different types of sensory inputs; 3) Limited discussion of failure cases and their analysis\\n5. The prediction-accuracy-driven communication mechanism needs more elaboration: 1) The threshold selection process is not well-explained; 2) The adaptation mechanism for the communication range isn't fully specified; 3) Missing analysis of communication overhead in different scenarios\\n6. 
The bounds in Theorem 1 might be loose, and there's no discussion of their tightness: 1) No lower bounds are provided for comparison; 2) The analysis assumes finite action spaces, limiting generality; 3) The impact of approximation errors isn't fully analyzed\\n7. Missing formal analysis of communication complexity: 1) No theoretical guarantees on the optimality of the information sharing strategy; 2) Limited analysis of the trade-off between communication cost and performance\", \"questions\": \"1. How is the prediction accuracy threshold c determined? Is it static or dynamically adjusted?\\n2. How does the system handle communication delays and packet losses in practice?\\n3. What's the maximum number of agents the communication protocol can efficiently handle?\\n4. Are the bounds in Theorem 1 tight? Can a lower bound be provided?\\n5. How does the finite action space assumption affect application to continuous action spaces?\\n6. Can the theoretical analysis be extended to handle heterogeneous agents?\\n7. Why weren't standard MARL benchmarks included in the evaluation?\\n8. Can performance comparisons with methods like MADDPG or QMIX be provided?\\n9. Is 250 vehicles the scalability limit? What's the performance in larger-scale scenarios?\\n10. What are the computational requirements for training the world model?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I want to know where intention w comes from? How to generate and update them?\"}", "{\"title\": \"Reply to Reviewer nMor\", \"comment\": \"We sincerely appreciate Reviewer nMor's original feedback and the time you've invested in reviewing our manuscript.\\n\\nWe hope that our responses have adequately addressed your comments and concerns. 
\\n\\nAs the rebuttal period is drawing to a close, we would like to check if there are any additional questions or points that would benefit from further clarification. \\n\\nWe would be very happy to continue the discussion and provide any necessary additional information.\"}", "{\"summary\": \"This paper developed CALL, Communicative World Model, for ego-centric MARL. The CALL allows agents to adaptively adjust their information sharing based on real-time evaluation of their prediction accuracy, and reduce unnecessary information transmission and improve system efficiency.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"**Pros:**\\n\\n1. Theoretical Support\\n\\nThe article provides theoretical analysis proving the impact of prediction errors on sub-optimality gaps and demonstrates the effectiveness of the CALL method through experiments.\\n\\n2. Experimental Validation\\n\\nExtensive experiments were conducted on the CARLA platform for trajectory planning tasks, showing that the use of CALL can significantly improve performance, particularly in lightweight communication scenarios.\", \"weaknesses\": \"**Cons:**\\n\\n1. Inaccurate Description of Literature\", \"the_article_states\": \"\\u201cRecent works on WM-based reinforcement learning still face significant limitations, and often rely on rigid, static information-sharing mechanisms, such as sharing information with all agents (Pretorius et al., 2020), using centralized frameworks (Krupnik et al., 2020; Pan et al., 2022; Liu et al., 2024).\\u201d In fact, Pan et al. (2022) and Liu et al. (2024), among others, learned and planned in the latent space. Therefore, lightweight information sharing is not unique to this paper, as most related works do this.\\n\\n2. Questions about Scalability and Adaptability\\n\\n The method described in the article establishes a world model for each agent. 
Since it sets the parameters and number of models beforehand, the method cannot adapt to changes in the number of agents during training or testing, especially during testing. Additionally, changes in the number of agents also change the observation dimensions for each agent, which the method cannot adapt to dynamically (dynamic changes in relevant information). When scaled up to large systems, the increase in the number of models and parameters with the number of agents leads to increased computational complexity and resource burden, thus limiting scalability. Although an experiment with 250 vehicles was conducted in Section E.3, there is no comparative analysis with the results from the experiment involving 150 vehicles, making it difficult to prove its good scalability.\\n\\n3. Latent Intention\\n\\n There is no description of the definition and learning method of latent intention throughout the paper. Moreover, equations 11-13 also do not have terms related to informations T and latent intentions w. Therefore, how to infer and learn these latent intentions?\\n\\n 4. Comparative Experiments\\n\\n There is a lack of comparative experiments with related methods, such as those by Pretorius et al. (2020), Krupnik et al. (2020), Pan et al. (2022), and Liu et al. (2024).\\n\\n 5. Experimental Verification of Accumulative Error in Multi-step Prediction\\n\\n Although the method theoretically proves its effectiveness, there is a lack of direct experimental verification. Specifically, there are no curves showing the relationship between the number of prediction steps and cumulative error, comparison curves of cumulative error with other methods, or performance change curves at different prediction step numbers.\\n\\n6. Prediction Accuracy-Driven Information Sharing\\n\\nThe article claims that CALL allows agents to adaptively adjust their information sharing based on real-time evaluation of their prediction accuracy. 
However, the specific mechanism of how the method adjusts information sharing according to prediction accuracy has not been experimentally verified, i.e., there are no graphs showing the relationship between prediction accuracy and information sharing (T).\", \"questions\": \"See the weakness for Questions.\\n\\n**Suggestions for Improvement:**\\n\\n1. Improve the accuracy of literature description \\n\\n2. Define intention clearly and explain learning mechanism\\n \\n3. Add comparative experiments.\\n\\n4. Supply experiments concerning accumulative errors in multi-step prediction and relationship between prediction accuracy and information (T)\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Prediction Accuracy-Driven Information Sharing\", \"comment\": \"You claim that this method can increase its communication range by 5 meters when prediction errors exceed a threshold , then can you validate the mechanism of your method through curve visualization?\"}" ] }
7ohlQUbTpp
Collab: Controlled Decoding using Mixture of Agents for LLM Alignment
[ "Souradip Chakraborty", "Sujay Bhatt", "Udari Madhushani Sehwag", "Soumya Suvra Ghosal", "Jiahao Qiu", "Mengdi Wang", "Dinesh Manocha", "Furong Huang", "Alec Koppel", "Sumitra Ganesh" ]
Alignment of Large Language models (LLMs) is crucial for safe and trustworthy deployment in applications. Reinforcement learning from human feedback (RLHF) has emerged as an effective technique to align LLMs to human preferences, and broader utilities, but it requires updating billions of model parameters which is computationally expensive. Controlled Decoding, by contrast, provides a mechanism for aligning a model at inference time without retraining. However, single-agent decoding approaches often struggle to adapt to diverse tasks due to the complexity and variability inherent in these tasks. To strengthen the test-time performance w.r.t the target task, we propose a mixture of agents-based decoding strategies leveraging the existing off-the-shelf aligned LLM policies. Treating each prior policy as an agent in the spirit of mixture of agent collaboration, we develop a decoding method that allows for inference-time alignment through a token-level selection strategy among multiple agents. For each token, the most suitable LLM is dynamically chosen from a pool of models based on a long-term utility metric. This policy-switching mechanism ensures optimal model selection at each step, enabling efficient collaboration and alignment among LLMs during decoding. Theoretical analysis of our proposed algorithm establishes optimal performance with respect to the target task represented via a target reward, for the given off-the-shelf models. We conduct comprehensive empirical evaluations with open-source aligned models on diverse tasks and preferences, which demonstrates the merits of this approach over single-agent decoding baselines. Notably, COLLAB surpasses the current SoTA decoding strategy, achieving an improvement of {up to 1.56x} in average reward and $71.89\%$ in GPT-4 based win-tie rate.
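The abstract describes a token-level policy-switching scheme: at each decoding step, every off-the-shelf agent proposes candidate next tokens, and the token whose continuation scores highest under a long-term utility metric is kept. A minimal illustrative sketch of that loop is below; it is not the authors' implementation, and the names `agents` (callables returning top-p candidate tokens) and `q_value` (a stand-in for the long-term utility/implicit Q estimate under the target reward) are hypothetical abstractions.

```python
def collab_decode(prompt, agents, q_value, max_new_tokens=8, eos="<eos>"):
    """Mixture-of-agents decoding sketch: at every step, pool candidate
    tokens from all agents and keep the token (and hence the agent) whose
    continuation scores highest under the long-term utility q_value."""
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        best_token, best_score = None, float("-inf")
        for agent in agents:
            for tok in agent(tokens):            # agent's top-p candidates
                score = q_value(tokens + [tok])  # long-term utility estimate
                if score > best_score:
                    best_token, best_score = tok, score
        tokens.append(best_token)
        if best_token == eos:
            break
    return tokens
```

Note that because the winning token at each step may come from a different agent, every agent subsequently conditions on a mixed history that no single agent would have produced on its own.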
[ "Alignment", "Decoding", "RLHF", "Transfer Decoding", "LLM" ]
Accept (Poster)
https://openreview.net/pdf?id=7ohlQUbTpp
https://openreview.net/forum?id=7ohlQUbTpp
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yMIroNv9pV", "xrqYJEsqVB", "s7FAC4sher", "pz35m6Lf1c", "oowGOFGSf5", "nihmk6iQhD", "mUFeclsHuJ", "m2E33FSSay", "lSgbjA08xP", "jruTzMGZS7", "hpCykmix9M", "fgpdRyUofI", "fgMoArmS1j", "cKbipNSVZm", "ZbRLTy9yNr", "YV2oW7aCyw", "Y3XeSKvRn1", "WG12w9G04S", "VzkbpeSpV2", "VqzD7CNiFs", "VdlNa0defv", "SWv9rkOLtK", "PT4JAhHxRE", "InRqoEqqac", "HXfL3Nhwc6", "GsVRzIzP03", "Ga1X13He9A", "FZhVg8VZYH", "Add4saZEHj", "8NhEaLZO5Q", "0sQUYvnb3x" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "decision", "official_comment" ], "note_created": [ 1732216170148, 1732797983149, 1732631405844, 1732874046753, 1732527432354, 1732213330729, 1732527476825, 1732215637479, 1732324875972, 1733132177589, 1732407751387, 1732417757482, 1732210009440, 1732533554353, 1732533105465, 1732211359791, 1734665301729, 1730694513922, 1732549407813, 1730763459256, 1732396738402, 1733290824998, 1732872440432, 1732208156969, 1733288664589, 1732322182258, 1732209286377, 1730307555550, 1730719486525, 1737524219818, 1732212905320 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12855/Authors" ], [ "ICLR.cc/2025/Conference/Submission12855/Area_Chair_raYG" ], [ "ICLR.cc/2025/Conference/Submission12855/Reviewer_4fXt" ], [ "ICLR.cc/2025/Conference/Submission12855/Authors" ], [ "ICLR.cc/2025/Conference/Submission12855/Authors" ], [ "ICLR.cc/2025/Conference/Submission12855/Authors" ], [ "ICLR.cc/2025/Conference/Submission12855/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission12855/Authors" ], [ "ICLR.cc/2025/Conference/Submission12855/Authors" ], [ "ICLR.cc/2025/Conference/Submission12855/Authors" ], [ "ICLR.cc/2025/Conference/Submission12855/Authors" ], [ "ICLR.cc/2025/Conference/Submission12855/Reviewer_d8Dt" ], [ "ICLR.cc/2025/Conference/Submission12855/Authors" ], [ "ICLR.cc/2025/Conference/Submission12855/Reviewer_4LCR" ], [ "ICLR.cc/2025/Conference/Submission12855/Reviewer_4LCR" ], [ "ICLR.cc/2025/Conference/Submission12855/Authors" ], [ "ICLR.cc/2025/Conference/Submission12855/Area_Chair_raYG" ], [ "ICLR.cc/2025/Conference/Submission12855/Reviewer_4LCR" ], [ "ICLR.cc/2025/Conference/Submission12855/Authors" ], [ "ICLR.cc/2025/Conference/Submission12855/Reviewer_d8Dt" ], [ "ICLR.cc/2025/Conference/Submission12855/Reviewer_Q9zn" ], [ "ICLR.cc/2025/Conference/Submission12855/Authors" ], [ "ICLR.cc/2025/Conference/Submission12855/Authors" ], [ "ICLR.cc/2025/Conference/Submission12855/Authors" ], [ "ICLR.cc/2025/Conference/Submission12855/Area_Chair_raYG" ], [ "ICLR.cc/2025/Conference/Submission12855/Reviewer_Q9zn" ], [ "ICLR.cc/2025/Conference/Submission12855/Authors" ], [ "ICLR.cc/2025/Conference/Submission12855/Reviewer_Q9zn" ], [ "ICLR.cc/2025/Conference/Submission12855/Reviewer_4fXt" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12855/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer 4LCR - Part 2\", \"comment\": \">Question 2 : Which algorithm does the SoTA decoding strategy refer to? Is it Transfer Q* (Souradip Chakraborty et a l. 2024)? Or is it controlled decoding (Sidharth Mudgal et al. 2024), which is called SoTA in Chakraborty et al.? A paper should use a term that uniquely identifies the subject rather than a term that changes over time.\\n\\n**Response to Question 2** We refer to Transfer Q* [C] as the SoTA for single-agent decoding (as mentioned in Line 503). 
We will make it explicit in the updated version to avoid confusion, thanks for the point.\\n\\n>Question 3 : (p.7 Example) Why can COLLAB correctly answer the question when both Agent-1 and 2 fail to answer the question? Is there a reason COLLAB decoding can acquire an ability neither of the Agents have?\\n\\n**Response to Question 3** This is an excellent question, and we thank the reviewer for identifying this subtle point. We agree and would like to begin by emphasizing that our theoretical analysis (Theorem 1) establishes that Collab will not perform worse than the best model in the policy set, which is consistently observed across all our experiments. However, in this specific example, Collab successfully produces the correct response even when both individual agents fail. This observation does not contradict our theoretical results; rather, it complements them by showcasing the additional strengths of Collab in leveraging the collective potential of multiple agents to achieve outcomes beyond the capabilities of individual agents. \\n\\nPractically, we believe that the mixture-of-agents token selection initiates a new trajectory after a few tokens, forming a unique trajectory that neither individual agent would produce on its own, ultimately leading to the correct answer. 
More specifically, if $x_1, y_1, z_1$ are the tokens generated by Agent-1, Agent-2, and Agent-3 up to a certain point (i.e., $x_1$ is the token chosen at $t=1$ from Agent-1, $y_1$ the token chosen at $t=2$ from Agent-2, and $z_1$ the token chosen at $t=3$ from Agent-3), the next token generated by Agent-i will be conditioned on this mixed history of states, i.e., $\\\\pi_i(\\\\cdot \\\\mid x_1, y_1, z_1)$, which has not occurred in individual decoding, thereby creating a new trajectory that ultimately leads to a novel and potentially correct response.\\n\\nAs this work is among the first to explore a principled mixture-of-agents approach, we plan to further investigate these empirical benefits and provide a more comprehensive theoretical justification in future studies. We will add these detailed discussions to our final draft, which will improve the clarity of the proposed method, and we sincerely thank the reviewer for highlighting these critical points.\\n\\n[A] Mudgal, S., Lee, J., Ganapathy, H., Li, Y., Wang, T., Huang, Y., Chen, Z., Cheng, H.T., Collins, M., Strohman, T. and Chen, J., 2023. Controlled decoding from language models. arXiv preprint arXiv:2310.17022.\\n\\n[B] Khanov, M., Burapacheep, J. and Li, Y., 2024. ARGS: Alignment as reward-guided search. arXiv preprint arXiv:2402.01694.\\n\\n[C] Chakraborty, S., Ghosal, S.S., Yin, M., Manocha, D., Wang, M., Bedi, A.S. and Huang, F., 2024. Transfer Q Star: Principled Decoding for LLM Alignment. 
arXiv preprint arXiv:2405.20495.\"}", "{\"title\": \"How do rewards for baselines scale as we increase their inference time?\", \"comment\": \"Ideally, an apples-to-apples comparison with BoN and CD/TQ* so that they reach 68s for would be great to see how the proposed method scales with inference compute.\"}", "{\"comment\": \"The authors have addressed some of my doubts, so I have increased my score.\"}", "{\"title\": \"Response to Reviewer 4LCR\", \"comment\": \"We thank the reviewer for acknowledging our rebuttal and we are glad that it helped in improving the clarity of our proposed approach.\\n\\n> additional evaluations on the HH-RLHF dataset\\n\\nWe acknowledge that the specific experiment on HH-RLHF dataset helped in explicitly highlighting the superiority of our mixture of agents-based approach over baselines. \\n\\n> I think it is totally fine to be slower than the others, but it should be reported in the paper (maybe in the Appendix).\\n\\nWe absolutely agree on this point with the reviewer and will add the wall time and computational details in the updated draft. \\n\\n> Why can COLLAB correctly answer the question when both Agent-1 and 2 fail to answer the question?\\n\\nAs the reviewer correctly highlighted this is a very critical point and we will add a detailed description and limitations in the discussion section of our updated draft.\\n\\n**Remark**: We would like to specifically highlight that the key points raised by the reviewer are extremely insightful and have been very helpful in improving the overall presentation of our work. We thank the reviewer for understanding and appreciating the key contributions of our proposed approach.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nAs the deadline approaches, we wanted to humbly reach out to inquire if there are any remaining concerns or questions. 
We are more than happy to engage in further discussions and provide clarifications as needed.\\n\\nRegards,\\nAuthors\"}", "{\"title\": \"Response to Reviewer Q9zn - Part 3\", \"comment\": \">Weakness : Line 428 hint that you are doing top-K but Algorithm 1 (line 330) talks about top-p, which one is true?\\n\\n**Response to Weakness** Thanks for pointing this typo, it will be top-p only. Corrected in the updated draft.\\n\\n\\n> Question 3 : I find some of the metrics used in the experiment section are not related to the motivation behind the algorithm. The idea, as presented in the paper, is to use models trained on different reward functions to perform decoding that will maximize a new one. In the experiment section, however, you evaluate non-reward metrics like win rate, diversity, and coherence. Let\\u2019s take win rate, for example. Is your claim that the new reward function correlates with the win rate better than the ones that the original models were trained for? And therefore, a higher win rate is equivalent to a higher reward? If so, where is the evidence?\\n\\n**Response to Question 3** Thank you for raising this point. We would like to emphasize that we have thoroughly evaluated and compared our algorithm against the baselines in terms of average target reward, as illustrated in Figure 2 (Evaluations 1\\u20138) and Figure 4. These results demonstrate the superior performance of our algorithm over the baselines. [We believe the reviewer might have missed it due to its positioning along with Table-1 Win-rate evaluations].\\n\\nIn addition, we conducted GPT-4 win-rate-based comparisons with the baselines, which is a widely accepted standard in alignment research [1, 2, 3]. 
This approach is motivated by the fact that reward models are often biased toward spurious patterns (such as response length), whereas GPT-4, when appropriately prompted, offers a more unbiased, efficient, and fair evaluation framework.\\nFurthermore, we assessed our method on coherence and diversity metrics, as outlined in ARGS [3] and Transfer Q* [2]. Since our approach involves token switching, ensuring coherence is critical to maintaining the logical flow of responses. Diversity metrics, on the other hand, evaluate the breadth and novelty of the generated outputs, reflecting the robustness of our method.\\nIn summary, we conducted comprehensive evaluations using both reward-based and non-reward-based metrics. Our results consistently show that our algorithm outperforms the baselines across all metrics, reaffirming its effectiveness. We have also performed additional experiments on more complex benchmarks like Alpaca-farm, and HH (rebuttal) and shown the improvements of our algorithm in average reward values as well.\\n\\n\\n[1] Mudgal, S., Lee, J., Ganapathy, H., Li, Y., Wang, T., Huang, Y., Chen, Z., Cheng, H.T., Collins, M., Strohman, T. and Chen, J., 2023. Controlled decoding from language models. arXiv preprint arXiv:2310.17022.\\n\\n[2] Chakraborty, S., Ghosal, S.S., Yin, M., Manocha, D., Wang, M., Bedi, A.S. and Huang, F., 2024. Transfer Q Star: Principled Decoding for LLM Alignment. arXiv preprint arXiv:2405.20495.\\n\\n[3] Khanov, M., Burapacheep, J. and Li, Y., 2024. ARGS: Alignment as reward-guided search. arXiv preprint arXiv:2402.01694.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nAs the deadline approaches, we wanted to humbly reach out to inquire if there are any remaining concerns or questions. 
We are more than happy to engage in further discussions and provide clarifications as needed.\\n\\nRegards, \\nAuthors\"}", "{\"title\": \"Response to Reviewer 4LCR - Part 1\", \"comment\": \"**General response** We thank the reviewer for appreciating the novel and critical contributions of our work and for acknowledging the simplicity and intuitiveness of our approach.\\n\\n>Weakness 1 : I failed to understand the computational cost of the method. My understanding is that it requires the inference of the whole sequence for each poli.......It would be nice to have a walltime of the algorithm compared with BoN sampling and Agent-1 and 2.\\n\\n>Question 1 : What is the computational complexity and the walltime of the algorithm in the experiment? I understand that the walltime depends on the hardware and the system but would be a good reference to understand the effectiveness.\\n\\n**Response to Weakness 1:** Thank you for your question. We report the inference time required to generate a response for a single prompt in the table. To account for variability in prompt lengths, the inference-time has been averaged over 100 prompts. We observe that, Collab (2 agents) takes 68 seconds to generate a response for a prompt, compared to 38 seconds for TQ* and 8 seconds for naive decoding. However, this slight increase in inference latency is justified by the 2x improvement in average reward by Collab. \\n\\n| Algorithm | Inference Time | Avg Reward |\\n|------------------------|----------------|------------|\\n| Naive Decoding | 8s | 0.23 |\\n| BoN Sampling | 12s | 0.12 |\\n| CD/TQ* (Single Agent) | 38s | 0.45 |\\n| Collab (Multiagent) | 68s | 1.0 |\\n\\nWe agree with the reviewer and acknowledge that as the number of agents increases, the proposed multi-agent decoding approach does introduce additional computational overhead at inference time. 
However, we emphasize that the primary focus and contribution of this work is to provide a principled approach for achieving multi-agent alignment through decoding, supported by theoretical guarantees and empirical evaluations. That said, for efficient implementation, similar to the CD-FUDGE method in [A], one can train a small Q-function (implicit Q in our case) adaptor offline, which would allow for faster inference. This significantly reduces time complexity, similar to ARGS [B], which only introduces a constant factor of p (top-p tokens) over classical decoding methods.\\n\\n\\n>Weakness 2 : Although the strength of the method is claimed to be able to adapt to a diverse set of tasks, the experiments are not designed to.....e model performs in the Harmlessness and Helpfulness subsets in HH-RLHF datasets? ... helpful if we have a post-hoc analysis of what makes the proposed method better than a single-LLM method in the empirical scenario.\\n\\n| Method | Normalized Avg Reward |\\n|------------------------|------------|\\n| BoN | 0.41 |\\n| Agent-1 (Helpfulness) | 0.3 |\\n| Agent-2 (Harmlessness) | 0.19 |\\n| Collab (Ours) | 1.0 |\\n\\n**Response to Weakness 2**: Thank you for your suggestion. As recommended by the reviewer, we conducted additional evaluations on the HH-RLHF dataset. To explicitly highlight the importance of mixture-of-agents decoding, we considered two distinct LLM agents: one trained exclusively on the helpfulness subset of HH-RLHF (ChenmieNLP/Zephyr-7B-Beta-Helpful) and the other on the harmlessness subset of HH-RLHF (ChenmieNLP/Zephyr-7B-Beta-Harmless). The objective was to generate text that is both helpful and harmless. To achieve this, we utilized a reward model (Ray2333/reward-model-Mistral-7B-instruct-Unified-Feedback) trained on both subsets of the HH-RLHF dataset. 
The normalized average rewards (normalized as described in Appendix E.1) for responses generated from 300 prompts in the HH-RLHF dataset, across various decoding strategies, are reported in the table above.\\nThe result clearly demonstrates that our proposed decoding strategy significantly boosts the quality of the generated text, as indicated by higher rewards, compared to single-agent decoding. This also underscores the importance of token-level switching for tasks that involve balancing two distinct/diverse preferences, such as helpfulness and harmlessness, simultaneously, which is difficult for individual agents.\\n\\n**Post-hoc analysis** - Furthermore, as suggested by the reviewer, we have included a post-hoc analysis in Appendix G (Examples 3, 4, 5), providing a qualitative comparison of the text generated by single agents and Collab for various tasks in the updated draft. Agent-I, trained on the helpful subset, risks generating potentially harmful responses by providing information that could encourage illegal activity. Conversely, Agent-II, focused on harmlessness, avoids addressing the user's query altogether, offering only general suggestions for alternative activities. In contrast, our model (Collab) integrates both perspectives effectively, ensuring the response is both helpful in clarifying the legal status and harmless by discouraging illegal behavior.\"}", "{\"title\": \"Response to Reviewer Q9zn\", \"comment\": \"We thank the reviewer for appreciating our response and providing us with an opportunity for further clarification.\\n\\n> Question: However, papers 1 (CD) and 2 (TQ*) perform this estimation in very different ways. CD trains a value function using either FUDGE-style loss or Q learning loss, while TQ* uses an aligned LLM (and its reward model) to obtain an estimation. Which one of the two are you doing?.....\\n\\n**Response to Question 1**: Thanks for this question. 
In our algorithm, we estimate each policy's Q-function (implicit) with stochastic samples of the trajectory from the corresponding policy (see Eq. 6). Specifically, for each token, we estimate the implicit Q-function of each individual policy and then select the token (from the agent) with the maximum Q-value, as shown in Equation 7, where there is a max over the agents (j) and the tokens (z). The Q-function for a specific policy/agent (j) and token (z) is estimated by sampling the trajectory and then evaluating the trajectory with reward $r\\\\_{\\\\text{target}}$. This design choice allows us to achieve tight sub-optimality bounds w.r.t. the true/optimal Q* for the reward function $r\\\\_{\\\\text{target}}$ as shown in Theorem 1. Hope this clears the confusion.\", \"note\": \"We want to highlight that although the practical estimation approach of the Q-function differs between CD and TQ^*, as the reviewer correctly mentioned, the basic definition and notion of the Q-value are similar in both papers (for example, Eq. 1, the value function, in CD and Eq. 2 in TQ^*) and can be estimated with stochastic unbiased sampling of the trajectory and evaluation with the reward.\\n\\nWe fully agree that estimating the Q-function on the fly adds computational overhead to inference time, which increases with the number of agents. However, in this work, we primarily focused on developing a principled framework for multiagent alignment via decoding with theoretical guarantees and empirical evaluations. That said, it is indeed possible to train an offline Q-adapter (implicit-Q in our case) in a lightweight manner, similar to CD-Fudge, to enable faster inference and reduce the time complexity.\\n\\nWe hope this addresses the reviewer's confusion, and we are happy to provide any additional clarifications if needed.\"}", "{\"title\": \"Response to Area Chair raYG\", \"comment\": \"Dear AC,\\n\\nThanks a lot for the important question and for your interest in the proposed approach. 
We provide a response to your point below.\\n\\n**Experimental Evidence**: We conducted additional evaluations to match the inference time of TQ* with that of Collab via an improved estimation of Q-star. Specifically, we increased the trajectory length for estimating Q-star such that the inference latency scaled to approximately 70 seconds for decoding with both agents. For this evaluation, we sampled 500 prompts from the Berkeley Nectar dataset and used Evaluation-4 (see Table 3 in the Appendix): Dolphin-2.6-Mistral-7B-DPO as LLM-1, Starling-7B-$\\\\alpha$ as LLM-2, and Mistral-7B-$\\\\alpha$-IT as the reward model.\\n\\n\\n| Method | Avg. Normalized Reward | Inference-Time |\\n|------------------------|------------------------|---------|\\n| TQ* (Agent-I) | 0.45 | 70 sec |\\n| TQ* (Agent-II) | 0.18 | 70 sec |\\n| Collab | 1.0 | 68 sec |\\n\\nFurther, in the same setup, we observed that Collab achieved a win rate of **58.19%** and **64.58%** against Agent-I and Agent-II, respectively. The results highlight that even when scaling inference time, Collab outperforms single-agent decoding methods, which our theory also suggests (as shown in Theorem 1). \\n\\nWe will include additional results for other benchmarks in our updated draft by increasing the inference latency for baselines.\\n\\n\\nRegards, \\nAuthors
We meant sampling the top-K tokens (not nucleus sampling) and used $p$ instead of $K$ just as a variable to avoid overlap with the number of agents (denoted as $K$), which inadvertently caused this misunderstanding. To clarify, we will update the draft to use $M$ for the number of agents and correctly refer to top-K sampling. However, we emphasize that this was a minor confusion in terminology and *not a technical issue or concern of our work*.\\n\\n\\n> Question: Computational Feasibility: The computational cost of the proposed algorithm is not just high\\u2014it is extraordinarily demanding. At each decoding step, the algorithm generates an entire trajectory from each policy for every top-p token. For example, in the experiments presented in the paper, 10 tokens per policy are used. This means that 20 full trajectories must be generated just to decode a single token. This raises serious concerns about the practicality of the method and questions its comparative performance against other approaches, such as BoN, when operating under equivalent computational or inference time constraints.\\n\\n**Response**: Thank you for this question and for providing an opportunity for detailed clarifications. However, we believe there is a slight confusion regarding the key contributions of our work.\\n\\n**Key Contribution of the work**: First, we begin by highlighting that the key contribution of our work is to develop a principled approach for combining multiple off-the-shelf LLMs optimally to generate responses that maximize the target reward function, which was missing from the existing literature. Our work thus not only formulated the problem in a principled manner but also developed the first algorithm with theoretical guarantees showing an optimal way of combining multiple aligned LLMs, and the experiments were provided to serve as a proof of concept to demonstrate the practical optimality of our algorithm. 
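To make the per-token cost arithmetic in the question above explicit: with $M$ agents, $K$ candidate tokens per agent, and one sampled trajectory per candidate, each decoded token costs $M \times K$ full rollouts ($2 \times 10 = 20$ in the quoted example). A tiny illustrative helper (a sketch for exposition, not code from the paper):

```python
def rollouts_per_token(n_agents, top_k, samples_per_candidate=1):
    """Full trajectories needed to decode one token: every agent proposes
    top_k candidate tokens, and each candidate's implicit Q-value is
    estimated from samples_per_candidate sampled rollouts."""
    return n_agents * top_k * samples_per_candidate
```

For a length-N response this yields N times as many rollouts in total, which is exactly the overhead that an offline Q-adapter would amortize.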
Providing a computationally tractable mixture-of-agents algorithm is important but not the key focus or contribution of our work.\\n\\n**Empirical Performance & Time Comparison**: We agree with the reviewer that estimating the Q-function on the fly adds computational overhead to inference time, which scales with the number of agents. However, for Collab with 2 agents, we report an average inference time of 68 seconds per prompt (with efficient caching inspired by TQ* and ARGS) compared to 38 seconds for TQ* and 12 seconds for BoN sampling. This slight increase in latency is justified by the 2x improvement in average reward achieved by Collab (Figures 2, 4), which is consistent across all our experimental results. \\n\\n| Algorithm | Inference Time | Avg Reward |\\n|------------------------|----------------|------------|\\n| BoN Sampling | 12s | 0.12 |\\n| CD/TQ* (Single Agent) | 38s | 0.45 |\\n| Collab (Multiagent) | 68s | 1.0 |\\n\\nTo further improve computational tractability, one can train an offline Q-adapter (function approximation) in a lightweight manner, similar to CD-Fudge, to enable faster inference (8s per prompt) and reduce the time complexity, so Q-function estimation **is not a bottleneck**.\\n\\n**Summary**: We want to remark that the key contribution of our work lies in providing a principled method for combining multiple LLMs with provable guarantees in an optimal way, supported by empirical demonstrations to validate the optimality\\u2014one of the first works in this direction. Therefore, we feel it is not entirely fair to evaluate the contribution of our work based solely on the computational tractability of Q-function estimation, as it is not the primary focus of our research.
Even after reading it again after you mentioned it, it is not super clear.\\n\\nI saw the compute requirements and I do believe, in its current state, the latency is too much to make this work usable. It's 2x (compared to the other best SOTA) and 5x (compared to BoN sampling -- which generally works very well in practice) with just 2 agents. I do feel that this idea can be helpful, maybe in some other way (not necessarily each token?). The main usability of this work is to merge skills of multiple specialized agents, which might become prohibitively expensive if you consider 2+ agents. Other reviewers also point out this weakness.\\n\\nRegarding the other results, I'm not sure what the average reward is that you add for Alpaca Farm? How did you select the 300 prompts? You say that you will add more diverse agents and harder tasks in the final version, but can you provide more details? \\n\\nI would like to keep my score the same.\"}", "{\"title\": \"Response to Reviewer 4fXt - Part2\", \"comment\": \">Weakness 3 : How were the agents initialized in the experiments? Were these agents explicitly trained for alignment?\\n\\n**Response to Weakness 3** Thank you for raising this point. We want to emphasize that our approach leverages fully open-source aligned LLMs already available on Hugging Face, fine-tuned for a variety of tasks on open-source datasets. Specifically, we utilized a diverse set of open-source LLMs, including Zephyr - creative writing and question answering; Starling - open chat and general-purpose dialogue; DolphinQwen - math word puzzles, cognitive and logical reasoning; TalinboyQwen - creative writing; DolphinMistral - coding instructions and reasoning. \\n\\nThrough our experiments, we aimed to demonstrate that our multi-agent decoding approach is a purely inference-time algorithm capable of leveraging any existing off-the-shelf aligned LLMs. 
To underscore the generality of our approach, facilitate easy replication of results, and eliminate potential training biases, we exclusively used fully open-source, off-the-shelf models and datasets. Our experimental results demonstrate the benefits of our method, even when using off-the-shelf LLMs. We have also performed additional experiments on challenging tasks (Alpaca Farm and HH), which show the efficacy of our approach.\\nAs suggested by the reviewer, we plan to include additional diverse agents and tackle more challenging tasks in the final version to further strengthen our findings.\\n\\n>Weakness 4: The experimental section lacks a comparison with alignment algorithms based on DPO and PPO.\\n\\nThank you for pointing this out. We have added a comparison with single-agent DPO for Agent-1 and Agent-2 for Evaluation 1 (Table 3 in the paper) and will include additional comparisons (DPO, PPO) for other evaluations in the final version as suggested by the reviewer. \\n\\n| Algorithms | Avg Reward (Normalized) |\\n|------------------------|------------|\\n| BoN | 0.09 |\\n| Agent-1 (Decoding) | 0.73 |\\n| Agent-2 (Decoding) | 0.52 |\\n| Agent-1 (DPO) | 0.69 |\\n| Agent-2 (DPO) | 0.41 |\\n| Collab (Ours) | 1.0 |\\n\\nIt is evident that DPO's performance is slightly lower (especially for Agent-2) than that of single-agent decoding approaches, which has also been observed in several recent works [1,2,3,4]. Additionally, we would like to highlight that it has already been shown, both theoretically and empirically, that optimal inference-time decoding can perform as well as (or even better than) training-based alignment methods in terms of the reward function/win rate (refer to Theorem 1 in Transfer-Q [3]), which aligns with our experimental observations in this work.\\n\\n>Weakness 5 : The authors should consider more complex, objective tasks .......to avoid potential biases from the reward model and GPT-4 evaluations.\\n\\n**Response to Weakness 5** Thanks for this suggestion. 
As requested, we performed an ablation on a more complex task - Alpaca Farm - as shown below:\\n\\n| Method | Avg Reward |\\n|------------------------|------------|\\n| BoN | 25.08 |\\n| Agent-1 (Decoding) | 24.68 |\\n| Agent-2 (Decoding) | 24.29 |\\n| Collab (Ours) | 26.21 |\\n\\nIn the table above, we report the average reward obtained on the GPT-4 preference split of Alpaca Farm (tatsu-lab/alpaca_farm) using various decoding strategies on 300 prompts. For this evaluation, we employed Zephyr-7B as Agent-1, Starling-7B as Agent-2, and LLAMA-3-8B as the reward model. The results clearly demonstrate that by utilizing a mixture of agents, Collab achieves a higher average reward compared to other baseline approaches. As per the reviewer's suggestion, we are also conducting evaluations on additional complex tasks such as math reasoning, and will update the final draft with these additional evaluations.\", \"references\": \"[1] Controlled Decoding from Language Models\\n[2] ARGS: Alignment as Reward-Guided Search\\n[3] Transfer Q Star: Principled Decoding for LLM Alignment\\n[4] From r to Q-star: Your Language Model is Secretly a Q-Function\"}
I think the additional experiment supports the claim that the method is also empirically effective.\\n\\n> Why can COLLAB correctly answer the question when both Agent-1 and 2 fail to answer the question?\\n\\nThank you very much for the additional experiments and for sharing your thoughts.\\nNow I see the benefit of the token-level mixture-of-agents approach.\\n\\nIn my stylistic preference, I would like to have your thoughts on the potential of the proposed approach written in the paper (maybe in the Discussion Section), even if you have yet to come up with the theoretical justification.\"}", "{\"title\": \"Response to Reviewer Q9zn - Part 1\", \"comment\": \"**General Response**: We thank the reviewer for highlighting and appreciating the novelty of our proposed approach as well as acknowledging that reward maximization through collaborative decoding is an important and underexplored problem.\\n\\n>Weakness 1: There is no discussion about how the Q functions are trained, although this is a crucial part of the algorithm.\\n\\n**Response to Weakness 1** Thank you for raising this point. We would like to emphasize that our approach is a completely training-free method. We estimate the Q-function based on its definition in Equation 1 (Eq. 6), using stochastic unbiased samples of the Q-estimate (similar to [1, 2]), and select the token with the maximum implicit Q-value, as outlined in Equation 7. \\nThat said, it is indeed possible to train an offline Q-adapter (implicit-Q in our case) in a lightweight manner, similar to CD-Fudge [3], to enable faster inference and reduce the time complexity to a constant factor of $K$ over naive decoding. 
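To make the training-free, sampling-based selection rule described above concrete, here is a minimal toy sketch: for each agent's top-K candidate tokens, the implicit Q-value is estimated by sampling full continuations and scoring them with the target reward, and the (agent, token) pair with the highest estimate is selected. This is an illustration of the idea only, not the paper's implementation; `ToyAgent` and `reward_fn` are placeholder stand-ins for real LLM policies and a trained reward model.

```python
def collab_step(prefix, agents, reward_fn, top_k=2, n_samples=1):
    """Select the next token via Monte-Carlo implicit-Q estimation.

    agents: objects exposing top_tokens(prefix, k) -> candidate tokens and
            rollout(text) -> a sampled full continuation (trajectory).
    reward_fn: the target reward, applied to a complete trajectory.
    Returns (agent_index, token) maximizing the estimated Q-value.
    """
    best_j, best_tok, best_q = None, None, float("-inf")
    for j, agent in enumerate(agents):
        for tok in agent.top_tokens(prefix, top_k):
            # Unbiased Q estimate: average target reward over sampled
            # trajectories that start with prefix + tok.
            q = sum(reward_fn(agent.rollout(prefix + tok))
                    for _ in range(n_samples)) / n_samples
            if q > best_q:
                best_j, best_tok, best_q = j, tok, q
    return best_j, best_tok


class ToyAgent:
    """Deterministic stand-in for an LLM policy (illustration only)."""
    def __init__(self, preferred):
        self.preferred = preferred  # tokens, most-preferred first
    def top_tokens(self, prefix, k):
        return self.preferred[:k]
    def rollout(self, text):
        return text + self.preferred[0] * 3  # greedy toy continuation
```

With a reward that simply counts occurrences of `"b"`, an agent whose policy prefers `"b"` wins the argmax; in the actual algorithm the rollouts come from the LLMs themselves, and the offline Q-adapter mentioned above would replace the inner sampling loop.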
We appreciate you highlighting this point and will include a detailed description in the updated draft.\\n\\nWe also want to highlight that the primary focus of this work is to propose a principled method for achieving multi-agent alignment (as also highlighted by the reviewer) using implicit-Q, supported by theoretical guarantees and extensive evaluations.\\n\\n>Weakness 2: The only description of the experiments (models, rewards, etc.) is in the appendix. It is also unclear why the author chose these specific experiments.\\n\\n**Response to Weakness 2** : Thanks for this point. We provide details and motivation for the choice of the experiments below.\\n\\n**Details of Agents** : Our approach leverages readily available open-source aligned LLMs, fine-tuned for a variety of tasks and datasets. Specifically, we utilize a diverse set of open-source LLMs, including Zephyr: Creative writing and question-answering, Starling: Open-chat and general-purpose dialogue, DolphinQwen: Math word puzzles, cognitive reasoning, and logical reasoning, TalinboyQwen: Creative writing, DolphinMistral: Coding instructions and reasoning. \\n\\n**New Experiment** : To explicitly highlight the impact of our approach over single-agent decoding (in addition to Fig 4 in paper), we perform an additional experiment (Rebuttal) with two distinct LLM agents: one trained exclusively on the helpfulness subset of HH-RLHF (ChenmieNLP/Zephyr-7B-Beta-Helpful) and the other on the harmlessness subset of HH-RLHF (ChenmieNLP/Zephyr-7B-Beta-Harmless) to generate text that is both helpful and harmless. 
The results clearly demonstrate that our proposed decoding strategy provides a significant boost in the quality of the generated text, as indicated by higher rewards, compared to single-agent decoding.\\n\\n| Method | Normalized Avg Reward |\\n|------------------------|------------|\\n| BoN | 0.41 |\\n| Agent-1 (Helpfulness) | 0.3 |\\n| Agent-2 (Harmlessness) | 0.19 |\\n| Collab (Ours) | 1.0 |\\n\\n**Motivation**: To underscore the generality of our approach, ensure the reproducibility of results, and mitigate potential training biases, we exclusively leveraged fully open-source, off-the-shelf models and datasets. We have also added the additional experiments requested by the reviewers and will keep updating the draft with experiments on more diverse agents and tasks. We will also move the experimental details and descriptions to the main body, as suggested by the reviewer, for better clarity.\"}", "{\"metareview\": \"The paper introduces a novel controlled decoding strategy employing a mixture of expert LLMs for improved alignment in language models. This multi-agent approach is claimed to surpass single-agent decoding methods in adapting to diverse tasks and preferences, enhancing test-time performance without retraining. The findings support this claim, demonstrating COLLAB's superior performance over single-agent baselines and achieving notable improvement in average reward and win-tie rate against GPT-4.\", \"strengths\": \"The paper presents a principled approach to controlled decoding by leveraging a mixture of expert agents. It offers a computationally efficient alternative to traditional RLHF methods, allowing for inference-time alignment without retraining. The theoretical analysis and empirical evaluations are comprehensive, demonstrating the effectiveness of COLLAB in various tasks and preferences.\", \"weaknesses\": \"The computational requirements, particularly the inference latency introduced by multi-agent decoding, are not thoroughly analyzed and discussed. 
More thorough apples-to-apples comparisons with baselines (BoN and others) under an equal inference budget are needed, both in terms of inference time and number of LLM calls. The absence of human evaluation and the reliance on GPT-4 for alignment claims are also limitations. Finally, while the paper claims the use of domain expert policies, it remains unclear how the chosen policies demonstrate expertise in the evaluated areas (the authors did run experiments on AlpacaFarm).\\n\\nDespite the identified weaknesses, some reviewers agree that the paper should be accepted, based on the paper's theoretical grounding and empirical support. Indeed, COLLAB offers a promising direction for improving LLM alignment, and I hope the authors will address the weaknesses in the revision.\", \"additional_comments_on_reviewer_discussion\": [\"The rebuttal period focused on clarifying the paper's claims and addressing reviewers' concerns.\", \"Reviewer d8Dt questioned the clarity of the reference policy and the computational cost. The authors clarified that the reference policy is the base model from which other policies are fine-tuned, and they discussed the computational efficiency of their approach compared to RLHF.\", \"Reviewer 4fXt asked about the Q-function, agent initialization, and comparison with other methods. The authors explained that the Q-function is estimated directly from its definition, the agents are initialized from existing LLMs, and they provided comparisons with DPO.\", \"Reviewer 4LCR inquired about computational complexity, experimental design, and the SOTA decoding strategy. The authors reported the inference time, conducted additional evaluations on the HH-RLHF dataset, and clarified the SOTA decoding strategy as Transfer Q*.\", \"Reviewer Q9zn questioned the training of Q-functions, experimental details, and the sampling method. 
The authors clarified that their approach is training-free, provided details about the experiments, and corrected a typo in the sampling method.\", \"I also requested an apples-to-apples comparison with BoN and TQ* by increasing their inference time to match that of Collab. The authors conducted additional evaluations and promised to run BoN experiments. Overall, they partially demonstrated that Collab still outperforms single-agent decoding methods even when the inference time is scaled.\", \"Overall, the authors' responsiveness to the reviewers' concerns and their efforts to improve the paper during the rebuttal period contributed significantly to my decision to accept the paper.\"]}", "{\"summary\": \"The paper proposes a method to combine multiple LLMs at the time of inference to leverage the strengths of the models. For each token generation, the most promising policy to generate the rest of the sequence is selected. The method is evaluated on two generic alignment tasks using multiple generic LLMs, outperforming single-LLM decoding algorithms.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"How to combine multiple LLMs is an important research question. Given that we have little knowledge on how to investigate the strengths and weaknesses of the LLMs (as of now), ensemble methods should be useful.\", \"The method is simple and intuitive.\", \"The theoretical result is nice to have, yet its practical implication seems not immediate to me.\"], \"weaknesses\": [\"> (p.10 l.536) Empirical evaluations demonstrate its superiority over traditional single-agent decoding baselines, providing a robust and computationally efficient method for model alignment in complex, real-world scenarios\", \"I failed to understand the computational cost of the method. My understanding is that it requires the inference of the whole sequence for each policy, per each token. 
So the computational cost of generating a sequence of length N would be O(N^2) queries to LLMs. It would be nice to have a walltime of the algorithm compared with BoN sampling and Agent-1 and 2.\", \"Although the strength of the method is claimed to be able to adapt to a diverse set of tasks, the experiments are not designed to evaluate it in such a scenario. It would be beneficial to evaluate the method for each subtask rather than showing the aggregated result. For example, how does the model perform in the Harmlessness and Helpfulness subsets in the HH-RLHF dataset? What kinds of tasks benefit from the ensemble? It would be helpful if we had a post-hoc analysis of what makes the proposed method better than a single-LLM method in the empirical scenario.\"], \"questions\": [\"What is the computational complexity and the walltime of the algorithm in the experiment? I understand that the walltime depends on the hardware and the system but would be a good reference to understand the effectiveness.\", \"Which algorithm does the SoTA decoding strategy refer to? Is it Transfer Q* (Souradip Chakraborty et al. 2024)? Or is it controlled decoding (Sidharth Mudgal et al. 2024), which is called SoTA in Chakraborty et al.? A paper should use a term that uniquely identifies the subject rather than a term that changes over time.\", \"(p.7 Example) Why can COLLAB correctly answer the question when both Agent-1 and 2 fail to answer the question? 
Is there a reason COLLAB decoding can acquire an ability neither of the Agents have?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer d8Dt\", \"comment\": \"We are happy that our response could clarify the reviewer's concerns, and we deeply appreciate the reviewer's assessment and understanding of the core of our work.\\n\\n> Point 1: I saw the compute requirements and i do believe in it's current state, the latency is too much to make this work usable. It's 2x ... .oken?)..... might become prohibitively expensive if you consider 2+ agents\\n\\n**Response to Point 1**: Thanks for this important point. We agree that performing principled multiagent decoding adds computational overhead to inference time; however, this increase in latency is justified by the 2x improvement in the average reward, which is consistent across all our experimental results, including the additional results on harder tasks. \\n\\nTo further improve computational tractability, one can train an offline Q-adapter (function approximation) in a lightweight manner, similar to CD-Fudge, to enable faster inference for Collab (8s per prompt).\\n\\nHowever, we want to highlight that the **key contribution of this work** is to develop a principled approach for combining multiple off-the-shelf LLMs optimally to generate responses that maximize the target reward function with provable guarantees, which was missing from the existing literature. \\n\\n\\n>Point 2: Regarding the other results, I'm not sure what the average reward is that you add for Alpaca Farm? How did you select the 300 prompts? You say that you will add more diverse agents and harder tasks in the final version, but can you provide more details?\\n\\n**Response to Point 2**: Thanks for this point. 
For the Alpaca Farm experiment, we selected 300 prompts randomly from the \"GPT-4 preference split\" of Alpaca Farm (tatsu-lab/alpaca_farm). We evaluate various decoding strategies using the LLAMA-3-8B reward model and report the average reward obtained over the 300 prompts, which clearly shows Collab outperforms existing baselines. We are currently performing additional experiments on (1) other splits of Alpaca Farm, (2) MT-Bench, and (3) mathematical reasoning on GSM-8K, and will report the results in the updated draft. We would be happy to incorporate any additional tasks or benchmarks the reviewer may suggest.\\n\\n\\n**Remark**: We want to highlight that our discussion with the reviewer has been extremely insightful. Several points raised by the reviewer not only helped sharpen the key ideas but also improved the overall presentation of our work. We will update the final version of our draft with detailed discussions from the rebuttal.\"}", "{\"summary\": \"This work proposes to extend decoding / inference time alignment using a mixture-of-experts based decoding strategy. For this, the authors make use of existing LLMs as policy experts. The decoding method proposed by the authors uses a token level selection strategy where an LLM is selected from the experts / agents for each token. This selection is performed using a utility metric which is based on the implicit Q function. The authors also provide a theoretical justification of their approach and support it using empirical results.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"Don't need to update the billions of parameters that RLHF needs.\", \"Their approach can make use of specialized models which are experts in a subset of capabilities that are desired.\", \"Current solutions to mixing multiple experts rely on either tuning a model, or explicit formulas, and rely on expert demonstrations. 
This work dynamically merges models\", \"This approach is especially beneficial in settings where reward models / policy parameters of models are not available readily thus impacting a lot of industry applications.\", \"The theoretical justification and details are sound.\", \"The paper is generally well written and easy to follow.\", \"The evaluations are diverse and across 7 different setups, however, only 2 datasets: Berkeley Nectar and HH-RLHF.\"], \"weaknesses\": [\"It is unclear what is the reference policy used in the KL terms that is used to obtain the objective of each policy (J).\", \"\\\"The sub-optimality gap will be lower when 1) the best agent\\u2019s reward function is close to the target reward function, and 2) when the regularization terms are properly controlled so that both the reference policy and the optimal policy are close\\\" -> The choice of the reference policy seems to be important.\", \"I would have liked to see analysis on compute requirements.\", \"Minor point: There is no human evaluation done in this work. Evaluating using GPT4 should not be used to claim alignment with humans.\", \"The main usage of this work seems to be in using domain expert policies that are good in different aspects / tasks. It is unclear how the chosen policies are experts in different areas that are evaluated. For this, I would suggest using evaluation on harder tasks maybe like reasoning / coding datasets (alpaca eval, arena hard, mt bench).\"], \"questions\": [\"What is the distribution of the selection of agents.\", \"How much is the compute difference between doing RLHF + single decoding vs mixture of decoding?\", \"Line 97-99 seems repeated.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the explanation. I now fully understand the proposed algorithm. 
However, I still hold the opinion that the paper, in its current state, does not meet the standard for acceptance. My concerns are as follows:\\n1. **Clarity and Reproducibility**: The paper is written in a confusing manner, making it difficult to understand the details and reproduce the results. Furthermore, both the original and current versions contain mistakes. For instance, the authors claim to use top-p decoding in Algorithm 1, but as indicated on line 428, it appears they actually mean top-k decoding (see: [Top-p sampling - Wikipedia](https://en.wikipedia.org/wiki/Top-p_sampling)).\\n2. **Computational Feasibility**: The computational cost of the proposed algorithm is not just high\\u2014it is extraordinarily demanding. At each decoding step, the algorithm requires generating an entire trajectory from each policy for every top-p token. For example, in the experiments presented in the paper, 10 tokens per policy are used. **This means that 20 full trajectories must be generated just to decode a single token**. This raises serious concerns about the practicality of the method and questions its comparative performance against other approaches, such as BoN, when operating under equivalent computational or inference time constraints.\"}", "{\"title\": \"Further Clarification on Scaled Inference time comparison\", \"comment\": \"Dear AC,\\n\\nThank you for your insightful comment and we truly appreciate the depth of your engagement. We take this opportunity to provide a detailed response as follows:\\n\\n**Theoretical Justification:**\\n**Our benefit comes from the proposed optimal collaboration:** We would like to emphasize that our results align with the theoretical insights presented in our work. We note that just increasing inference time does not yield significant improvement because the performance of single-agent algorithms is fundamentally constrained by their reward difference relative to the target reward. 
In contrast, our proposed approach performs better because it optimally combines multiple agents, as evidenced by the $\\\\min_j \\\\delta_{\\\\ast j}$ term in Theorem 1. This highlights the novelty and importance of our contribution: demonstrating a principled approach for combining LLMs to generate optimal responses.\\n\\n> Previously you posted that TQ* gets 0.45 with an inference time of 38s and now it seems increasing inference time to 70s doesn't improve the results. Is that correct?\\n\\n**Response** : Thanks for this point. We would, however, like to offer a minor clarification: the previous results, where TQ* obtained an average reward of 0.45, were based on the setup mentioned in: *Evaluation 1:* Dataset: Berkeley Nectar; Agent-I: Zephyr-7B-$\\\\alpha$; Agent-II: Starling-7B-$\\\\alpha$; Reward Model: Mistral-7B-$\\\\alpha$-IT\\n\\nFor the new results with increased latency, we used the setup in: *Evaluation 4:* Dataset: Berkeley Nectar; Agent-I: Dolphin-2.6-Mistral-7B-DPO; Agent-II: Starling-7B-$\\\\alpha$; Reward Model: Mistral-7B-$\\\\alpha$-IT (as mentioned in the previous response).\\n\\nWe observed that increasing the inference latency from 38 seconds to 70 seconds indeed slightly improves the TQ* average reward for both agents. For clarification, we have posted both the original results and the ones with increased latency.\\n\\n| Method | Avg. Normalized Reward | Inference-Time|\\n|------------------------|------------------------|---------|\\n| TQ* (Agent-I) | 0.38 | 38 sec\\n| TQ* (Agent-II) | 0.09 | 38 sec\\n| Collab | 1.0 | 68 sec \\n\\n| Method | Avg. 
Normalized Reward | Inference-Time|\\n|------------------------|------------------------|---------|\\n| TQ* (Agent-I) | 0.45 | 70 sec\\n| TQ* (Agent-II) | 0.18 | 70 sec\\n| Collab | 1.0 | 68 sec\\n\\n\\n**Remark**: We initially compared our approach with the SoTA transfer decoding method (TQ*) since BoN sampling performance was similar to Agent-1 (Figure 2, Evaluation 4). However, we completely acknowledge and agree that BoN is an extremely crucial baseline and simple to scale. As per your suggestion, we are now incorporating BoN sampling with increased test time compute and will add them in the updated draft.\\nWe thank the AC for this very insightful question and for taking such a deep interest in our work. \\n\\n\\nRegards\\n\\nAuthors\"}", "{\"title\": \"Final Response to Reviewer 4fXt\", \"comment\": \"We thank the reviewer for acknowledging our rebuttal and increasing the score. The discussion with the reviewer has helped to improve the presentation of our proposed approach and we will add the detailed discussion in our updated draft.\\n\\nRegards\\nAuthors\"}", "{\"title\": \"Response to Reviewer d8Dt\", \"comment\": \"**General Response** We thank the reviewer for their thoughtful feedback and for recognizing the central contribution of our work\\u2014a principled and novel approach to token selection using a mixture-of-experts decoding strategy. We also greatly appreciate the acknowledgment of the theoretical justifications and the clarity of our presentation.\\n\\n>Weakness 1 & 2 : It is unclear what is the reference policy used in the KL terms that is used to obtain the objective of each policy (J). The sub-optimality gap will be lower when 1) the best agent\\u2019s reward function is ..... The choice of the reference policy seems to be important.\\n\\n**Response to Weakness 1 & 2** Thank you for highlighting this point. Indeed, the reference policy plays a crucial role in the sub-optimality analysis. 
In our case, the reference policy corresponds to the base policy (pre-trained/reference model) from which the policies $\\\\pi_k, \\\\forall k \\\\in K$, have been fine-tuned or aligned.\\n\\n>Weakness 3 : I would have liked to see analysis on compute requirements. How much is the compute difference between doing RLHF + single decoding vs mixture of decoding?\\n\\nThank you for your question. First, we would like to clarify that RLHF involves fine-tuning the model on the target reward, making it significantly more compute-intensive compared to our proposed mixture-of-decoding approach. Specifically, for the Zephyr-7B model, RLHF training requires 6-7 A6000 GPUs, even with techniques like LoRA and 8-bit quantization. In contrast, Collab requires only 2 A6000 GPUs for decoding (1 A6000 for single agent decoding), making it substantially more efficient in terms of compute requirements. In terms of inference time, Collab takes 68 seconds (2 agents) to generate a response for a prompt, compared to 38 seconds for TQ* and 8 seconds for naive decoding. However, this slight increase in inference latency is justified by the 2x improvement in average reward by Collab (Figure 2, 4) as also highlighted by the reviewer.\\n\\n\\n>Weakness 4: The main usage of this work seems to be in using domain expert policies that are good in different aspects/tasks. It is unclear how the chosen policies are experts in different areas that are evaluated. For this, I would suggest using evaluation on harder tasks maybe like reasoning/coding datasets (alpaca eval, arena hard, mt bench).\\n\\n**Response to Weakness 4** Thank you for your suggestion. 
As requested, we performed the ablation on Alpaca Farm shown below:\\n\\n\\n| Method | Avg Reward |\\n|------------------------|------------|\\n| BoN | 25.08 |\\n| Agent-1 (Decoding) | 24.68 |\\n| Agent-2 (Decoding) | 24.29 |\\n| Collab (Ours) | 26.21 |\\n\\nIn the table above, we report the average reward obtained on GPT-4 preference split of Alpaca Farm (tatsu-lab/alpaca_farm) using various decoding strategies on 300 prompts. For this evaluation, we employed Zephyr-7B as Agent-1, Starling-7B as Agent-2, and LLAMA-3-8B as the reward model. The results clearly demonstrate that by utilizing a mixture of agents, Collab achieves a higher average reward compared to other baseline approaches. As per the reviewer's suggestion, we are also conducting evaluations on additional tasks such as MT-Bench and Arena Hard. We will update the final draft with these additional evaluations.\\n\\n>Question: What is the distribution of the selection of agents\\n\\n**Response to Question** In our current experiments, we consider a diverse set of open-source LLMs, (Huggingface) including *Zephyr* (creative writing and question-answering), *Starling* (open chat and general-purpose dialogue), *DolphinQwen* (math word puzzles, cognitive reasoning, and logical QA), *TalinboyQwen* (creative writing), and *DolphinMistral* (coding instructions and reasoning). To underscore the generality of our approach, and eliminate potential training biases, we exclusively used fully open-source, off-the-shelf models and datasets. As suggested by the reviewer, we plan to incorporate more diverse agents and introduce harder tasks in the final version. \\n\\nWe thank the reviewer for the positive feedback and acknowledging the key contributions of our work.\"}", "{\"title\": \"Clarification about results and Missing BoN comparison\", \"comment\": \"Previously you posted that TQ* gets 0.45 with an inference time of 38s and now it seems increasing inference time to 70s doesn't improve the results. 
Is that correct?\\n\\nAlso, BoN is a much simpler and widely used baseline -- so it is probably quite important to compare to it.\"}", "{\"comment\": \"I thank the authors for their response. However, I'm still confused. You mention that:\\n> We estimate the Q-function based on its definition in Equation 1 (Eq. 6), using stochastic unbiased samples of the Q-estimate (similar to [1, 2])\\n\\nHowever, papers 1 (CD) and 2 (TQ*) perform this estimation in very different ways. CD trains a value function using either FUDGE-style loss or Q learning loss, while TQ* uses an aligned LLM (and its reward model) to obtain an estimation. Which one of the two are you doing?\\n\\nIf it is CD, please provide more information on the training process. If it is TQ*, please provide information on which LLM and reward model you used for estimating the Q (and on the sampling process involved in estimating the Q- do you sample separately for every $\\\\pi$? or multiple times?).\"}", "{\"title\": \"Response to Reviewer 4fXt - Part1\", \"comment\": \"**General Response** We thank the reviewer for recognizing the novelty of our approach in achieving alignment without re-training and for appreciating our use of a mixture of agents for controlled decoding, which offers a fresh perspective.\\n\\n>Weakness 1 : Is the implicit Q-function an additional trained language model? How does its cost compare to methods based on DPO, PPO, and RLHF?\\n\\n**Response to Weakness1** Thank you for raising this crucial point. We want to emphasize that our method is entirely training-free, where the Q-function is estimated directly from its definition in Equation 6 by taking stochastic unbiased samples of the Q-value, similar to the approach in [1, 3]. 
The token with the maximum implicit Q-value is then selected, as shown in Equation 7.\\n\\nWe acknowledge that this approach introduces additional computational overhead during inference - 68s per prompt (2 agents) in comparison with 38s for single-agent decoding methods [1, 3]. However, this overhead is justified by the significant improvement in average reward achieved by Collab, as shown in Figure 2. Additionally, one can train an offline Q-adapter (implicit-Q in our case) to enable faster inference, reducing time complexity to a constant factor of p (top-p tokens) over classical decoding methods [3]. As also highlighted in [1, 2], training a Q-function adapter (CD-Fudge) is lightweight as it involves supervised prediction of a scalar Q-value rather than training a generative policy with DPO/PPO. We provide further details on computational cost below :\\n\\n**Computational Cost Comparison with RLHF/DPO** : RLHF/DPO involves fine-tuning the model on the target reward and requires significantly more compute than our proposed mixture-of-agent decoding approach. For instance, RLHF training for the Zephyr-7B model requires 6 A6000 GPUs, even with techniques like LoRA and 8-bit quantization, whereas Collab requires only 2 A6000 GPUs for decoding, making it substantially more efficient in terms of compute. \\n\\n>Weakness 2 : Can the implicit Q-function directly guide a policy model in decoding? Is it necessary to use multiple agents collaboratively for decoding?\\n\\n**Response to Weakness2** \\n\\nYes, the implicit Q-function directly guides the policy model in decoding, as shown in Equation 5, (similar to CD [1], Equation-1) where it updates the probability of the token (under the reference policy) with exponential weighting based on the implicit-Q value. Ideally, if we had access to the true Q-star, Equation 3 would provide the optimal solution to the alignment problem. 
However, since Q-star is never available in practice, using multiple agents collaboratively helps provide a closer estimate of Q-star through the implicit Q, as demonstrated in Theorem-1, where the sub-optimality gap is measured with respect to the true Q-star. Below, we provide a detailed justification of the use of multiple agents for decoding.\\n\\n\\n**Importance of Multiple Agents in Decoding** : The importance of using multiple agents for decoding can be understood from the upper-bound in Theorem 1, particularly the term $\\\\min_j \\\\delta_{\\\\ast j}$ which represents the minimum difference between the target reward and the reward on which the best policy in the set has been aligned to. If only a single agent is used, the sub-optimality will remain constant and proportional to the reward difference between the target reward and that single agent's reward, which cannot be improved, as shown below :\\n$\\\\Delta \\\\leq \\\\delta_{\\\\ast j} + \\\\alpha KL\\\\left(\\\\pi_j(\\\\cdot|s), \\\\pi_{\\\\text{ref}}(\\\\cdot|s)\\\\right) - \\\\alpha KL\\\\left(\\\\pi_{\\\\text{alg}}(\\\\cdot|s), \\\\pi_{\\\\text{ref}}(\\\\cdot|s)\\\\right)$\\n\\nConsequently, if the single agent ($j^{\\\\text{th}}$) is highly sub-optimal for the target reward i.e $\\\\delta_{\\\\ast j}$ is very high, the performance will suffer significantly. In contrast, leveraging a diverse set of policies can mitigate this gap, as $\\\\min_j \\\\delta_{\\\\ast j}$ can be low for at least one of the policies in the set (even though $\\\\delta_{\\\\ast j}$ is high for some specific agents). This advantage of collaborative multi-agent decoding has been consistently observed in our experimental evaluations.\"}", "{\"summary\": \"The paper presents the idea of switching between different LLMs during the decoding process in order to maximize the reward of the generated response. 
To find which policy should be used at each generation step, they define the problem as KL-constrained RL and sample a token greedily with respect to a Q function (regularized by the KL constraint).\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"4\", \"strengths\": [\"The idea of using a Q function to perform routing between different models is novel. Moreover, doing collaborative decoding to maximize reward is an interesting problem that does not have a lot of literature about it.\", \"The empirical results seem strong, with the proposed algorithm outperforming the baselines.\"], \"weaknesses\": [\"The paper is hard to read and understand; in addition, some important details are not discussed at all. Some examples:\", \"There is no discussion about how the Q functions are trained, although this is a crucial part of the algorithm.\", \"The only description of the experiments (models, rewards, etc.) is in the appendix. It is also unclear why the authors chose these specific experiments.\", \"The authors explain their method as an extension of CD, which proposes sampling according to equation 3. However, in practice they sample according to equation 5. This is not the same, as the probability under pi_ref is not taken into consideration in equation 5.\", \"Line 428 hints that you are doing top-K, but Algorithm 1 (line 330) talks about top-p; which one is true?\", \"I believe that the technical work presented in the paper is good, but the writing needs to be improved substantially.\"], \"questions\": \"Control Decoding and other similar works [1] use $Q^{\\\\pi_{ref}}$ to augment the decoding. This is not an approximation of $Q^*$ as mentioned in lines 260-261 of the paper, but an alternative solution to the KL-constrained RL problem, known as RWR [2]. The advantage of using $Q^{\\\\pi_{ref}}$ is that it can be easily learned using trajectories from $\\\\pi_{ref}$.\\nCollab, on the other hand, uses $Q^{\\\\pi_i}$ as an approximation to $Q^*$. 
Does the author have an explanation (either empirical or theoretical) as to why this is better than just using $Q^{\\\\pi_{ref}}$? This is an important design choice in the algorithm that I feel hasn\\u2019t been discussed enough.\\n\\nIn the related work section, I\\u2019m missing a discussion about the connections to hierarchical RL. Collab is very close to it, as it learns a policy (parametrized as Q function) to choose which one of several policies to sample from in each state.\\n\\nI find some of the metrics used in the experiment section are not related to the motivation behind the algorithm. The idea, as presented in the paper, is to use models trained on different reward functions to perform decoding that will maximize a new one. In the experiment section, however, you evaluate non-reward metrics like win rate, diversity, and coherence. Let\\u2019s take win rate, for example. Is your claim that the new reward function correlates with the win rate better than the ones that the original models were trained for? And therefore, a higher win rate is equivalent to a higher reward? If so, where is the evidence?\\n\\n[1] Han, Seungwook, et al. \\\"Value Augmented Sampling for Language Model Alignment and Personalization.\\\" arXiv preprint arXiv:2405.06639 (2024).\\n[2] Peters, Jan, and Stefan Schaal. \\\"Reinforcement learning by reward-weighted regression for operational space control.\\\" Proceedings of the 24th international conference on Machine learning. 2007.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes Controlled Decoding using Mixture of Agents for LLM Alignment, a method that achieves alignment at inference time without retraining the policy model. 
However, the comparison with classical alignment methods is insufficient and some critical experimental details are unclear.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The approach of achieving alignment without retraining the policy model is innovative and could significantly reduce computational costs.\\n\\n2. The concept of using a mixture of agents for decoding provides a fresh perspective on controlling language model outputs.\", \"weaknesses\": \"1. Is the implicit Q-function an additional trained language model? How does its cost compare to methods based on DPO, PPO, and RLHF?\\n\\n2. Can the implicit Q-function directly guide a policy model in decoding? Is it necessary to use multiple agents collaboratively for decoding?\\n\\n3. How were the agents initialized in the experiments? Were these agents explicitly trained for alignment?\\n\\n4. The experimental section lacks a comparison with alignment algorithms based on DPO and PPO.\\n\\n5. The authors should consider more complex, objective tasks such as those involving reasoning and math to avoid potential biases from the reward model and GPT-4 evaluations.\", \"questions\": \"please see weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Response to Reviewer Q9zn - Part 2\", \"comment\": \">Weakness 3: The authors explain their method as an extension of CD, which proposes sampling according to equation 3. However, in practice, they sample according to equation 5. This is not the same, as the probability under pi_ref is not taken into consideration in equation 5.\\n\\n**Response to Weakness 3** : Thanks for this catch. 
There is a typo in Eq.5, which should be $Q^*$ (not $Q^{\\\\pi}$) w.r.t. the target reward function (in line with CD [1] and TransferQ* [2]), where our key objective is to best estimate $Q^*$ with the implicit Q-function leveraging multiple agents, as shown in Theorem 1, where we quantify the gap w.r.t. the true $Q^*$.\\n\\n>Question 1 : Control Decoding and other similar works [1] use $Q^{\\\\pi_{\\\\text{ref}}}$ to augment the decoding. This is not an approximation of $Q^*$ as mentioned in lines 260-261 of the paper, but an alternative solution to the KL-constrained RL problem, known as RWR [2]. The advantage of using $Q^{\\\\pi_{\\\\text{ref}}}$ is that it can be easily learned using trajectories from $\\\\pi_{\\\\text{ref}}$. Collab, on the other hand, uses $Q^{\\\\pi_i}$ as an approximation to $Q^*$. Does the author have an explanation (either empirical or theoretical) as to why this is better than just using $Q^{\\\\pi_{\\\\text{ref}}}$? This is an important design choice in the algorithm that I feel hasn\\u2019t been discussed enough.\\n\\n**Response to Question 1** Thanks for this question! Below, we provide a comprehensive explanation:\\n\\nFirst, we highlight that if we have access to the true $Q^*$ w.r.t. $r_{\\\\text{target}}$, then one can directly leverage the closed form of the optimal policy in Equation 1, due to the strong convexity of the KL regularized problem. However, $Q^*(s_t,z)$ is never available in practice, hence CD [1] leverages samples from $\\\\pi_{\\\\text{ref}}$ to estimate $Q^*$ whereas TransferQ* [3] relies on an aligned model along with the corresponding reward function to estimate $Q^*$ using their indirect transfer method. In this paper, we take a different route where we leverage multiple off-the-shelf LLMs to best estimate Q-star without any information about the individual reward functions (on which these off-the-shelf LLMs have been aligned). 
Specifically, for selecting the next action, we compute the Q-value for each agent and token with respect to $r_{\\\\text{target}}$ and select the token (and the corresponding agent) with the highest implicit Q-value, as shown in Eq. 7, providing a much better estimate of $Q^*$ under the given conditions.\\n\\nWe agree with the reviewer that it is our design choice. However, it is important to note that in Theorem 1, we estimate the sub-optimality w.r.t. the true $Q^*$, namely $\\\\Delta = Q^{\\\\pi^*}\\\\_{\\\\text{target}}(s\\\\_t,z) - Q^{\\\\pi\\\\_{\\\\text{alg}}}\\\\_{\\\\text{target}}(s\\\\_t,z)$, and we upper-bound this term by $\\\\min\\\\_j \\\\delta\\\\_{\\\\ast j}$, which indicates that our policy design, by maximizing the implicit Q-function, will always do at least as well as the best policy in the policy set, thereby justifying our design choice mathematically. \\nWe agree with the reviewer and will add a detailed justification of the design choice, connecting it with the theoretical results, in the main paper.\\n\\n> Question 2: In the related work section, I\\u2019m missing a discussion about the connections to hierarchical RL. Collab is very close to it, as it learns a policy (parametrized as Q function) to choose which one of several policies to sample from in each state.\\n\\n**Response to Question 2** Thanks for this very insightful comment; we acknowledge that it\\u2019s a very interesting connection. We agree that HRL is a general framework and Collab can indeed be formulated as a special case of HRL, where the upper agent provides the goal (reward/Q). We will definitely add this very interesting connection to HRL as a potential scope for future research in the final draft. \\nHowever, since this is one of the first works to provide a principled method for mixture-of-agents based decoding, we focused on establishing a robust theoretical framework and conducting empirical evaluations to demonstrate the effectiveness of our approach.\"}" ] }
7oaWthT9EO
A Differential Equation Approach for Wasserstein GANs and Beyond
[ "Zachariah Malik", "Yu-Jui Huang" ]
This paper proposes a new theoretical lens to view Wasserstein generative adversarial networks (WGANs). To minimize the Wasserstein-1 distance between the true data distribution and our estimate of it, we derive a distribution-dependent ordinary differential equation (ODE), which represents the gradient flow of the Wasserstein-1 loss, and show that a forward Euler discretization of the ODE converges. This inspires a new class of generative models that naturally integrates persistent training (which we call W1-FE). When persistent training is turned off, we prove that W1-FE reduces to WGAN. When we intensify persistent training appropriately, W1-FE is shown to outperform WGAN in training experiments from low to high dimensions, in terms of both convergence speed and training results.
[ "Generative modelling", "finite elements", "gradient flow", "persistent training" ]
Reject
https://openreview.net/pdf?id=7oaWthT9EO
https://openreview.net/forum?id=7oaWthT9EO
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yx0SaXP6MK", "xOMFmciwMi", "ulLhkMz0dJ", "uQNhHShEeh", "jCf5O192l3", "afNIhtxAMB", "XZVgT6V5V2", "UK7f5fpz9G", "HxvpglFWC8", "AS1g3s9YR5", "0ZR2YLAoKh" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_review", "decision", "official_comment", "official_review", "official_comment", "meta_review", "official_comment" ], "note_created": [ 1732727732337, 1732729249980, 1730710597197, 1732732043074, 1730700017663, 1737523438534, 1732793018106, 1730343958189, 1733306316010, 1734404384702, 1732785437759 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1162/Authors" ], [ "ICLR.cc/2025/Conference/Submission1162/Authors" ], [ "ICLR.cc/2025/Conference/Submission1162/Reviewer_BHdK" ], [ "ICLR.cc/2025/Conference/Submission1162/Authors" ], [ "ICLR.cc/2025/Conference/Submission1162/Reviewer_GQDK" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1162/Reviewer_y5eR" ], [ "ICLR.cc/2025/Conference/Submission1162/Reviewer_y5eR" ], [ "ICLR.cc/2025/Conference/Submission1162/Authors" ], [ "ICLR.cc/2025/Conference/Submission1162/Area_Chair_fXmA" ], [ "ICLR.cc/2025/Conference/Submission1162/Authors" ] ], "structured_content_str": [ "{\"comment\": \"We sincerely thank the reviewer for providing valuable constructive feedback. We respond to the comments point by point below.\\n\\n**Weaknesses (W):**\\n\\nA new training experiment on the CIFAR-10 dataset is now included (see Q6 below). \\n\\nWe went through every mathematical derivation carefully and made changes or added explanations if necessary to enhance clarity. For the specific inequality the reviewer mentioned, please see Q2 below. \\n\\nThe section on \\\"Mathematical Preliminaries\\\" (i.e., Section 2) has been made much more concise and focused than before. Every concept, definition, and stated result is now supported by clear citations. 
\\n\\nThe introduction has been substantially rewritten to highlight (i) the theoretical challenges unique to the present case of Wasserstein-1 loss and (ii) how we made new theoretical progress to overcome these challenges. Please see Q4 below for details. \\n\\n\\n\\n**Questions (Q):** \\n\\n1.\tIn Appendix A.1, we first prove that $\\\\mu\\\\mapsto W_1(\\\\mu,\\\\mu_d)$ is convex in $\\\\mathcal{P}_1(\\\\mathbb{R}^d)$ (Proposition A.1) and then explain from the proof arguments that in most cases it can actually be \\u201cstrictly\\u201d convex (Remark A.1). \\n\\n2.\tThe proof of Proposition 3.1 has been moved to Appendix A.2 and the inequality in question is now (A.3). We explain in detail above (A.3) that this inequality results from the definition of $J$ in (3.3), the duality formula of the $W_1$ distance in (2.2), and the fact that the involved Kantorovich potential $\\\\varphi$ is 1-Lipschitz.\\n\\n3.\tIn the last two paragraphs of Section 6, we first explain in detail the two common limitations of persistent training (i.e., the exacerbation of \\u201cgarbage in, garbage out\\u201d and overfitting) and discuss potential methods for mitigating them. Specifically, we suggest (i) using a better estimation method for the Kantorovich potential in our algorithm and (ii) carrying out a numerical investigation to find a persistency level that best balances the benefits of persistent training against its limitations. These two methods are already incorporated into our numerical experiments. \\n\\n4.\tWe have substantially rewritten the introduction to better convey the contributions of our proposed work. First, we mention that it is tempting to suspect that generative modeling using Wasserstein-1 loss can be achieved by slightly modifying the previous work that minimizes Wasserstein-2 loss (the second paragraph). Yet, there are two major theoretical hurdles unique to the Wasserstein-1 case (detailed in the third paragraph). 
First, it is not even clear how \\u201cgradient\\u201d should be defined under the Wasserstein-1 distance, as subdifferential calculus for the space of probability measures is well-developed under the Wasserstein-$p$ distance for any $p>1$ but breaks down exactly for $p=1$. Second, when showing that a discretization of the gradient-flow ODE under the Wasserstein-2 distance converges, Huang and Malik (2024) crucially rely on an interpolation result from optimal transport, which again holds under the Wasserstein-$p$ distance for any $p>1$, excluding exactly $p=1$. The fourth paragraph explains how we overcome the above two hurdles. First, we observe that a general gradient notion can be defined, independently of subdifferential calculus, by using linear functional derivatives from the mean field game literature. This allows us to precisely formulate the gradient-flow ODE under the Wasserstein-1 distance and devise a corresponding discretization. Next, without relying on any interpolation result from optimal transport, we prove the convergence of the ODE discretization using a refined Arzela-Ascoli argument. This argument can be applied because the ODE coefficient is uniformly bounded (which is unique to the $p=1$ case), allowing us to prove appropriate compactness and equicontinuity of the flow of measures induced by the discretization. \\n\\n5.\tThe second-to-last paragraph in the introduction now cites a list of recent, relevant papers that also feature \\u201cWasserstein gradient flows.\\u201d We point out that all these studies leverage the well-developed gradient flow theory under the Wasserstein-2 distance. Our study is distinct from theirs, as we focus on Wasserstein-1 gradient flows, which are much less understood but necessary for making a connection to WGAN (recall that WGAN by construction minimizes the Wasserstein-1 distance). \\n\\n6.\tWe add to Section 5 a new experiment where we train our algorithm on the CIFAR-10 dataset. 
The results are presented in Figures 4 and 5.\"}", "{\"comment\": \"We sincerely thank the reviewer for providing valuable constructive feedback. We respond to the comments point by point below.\\n \\n**Weaknesses (W):**\\n\\n1.\tThe convergence result (Theorem 4.1) actually holds quite generally, even when we replace the Kantorovich potential function $\\\\varphi$ in the discretization (4.1) by another 1-Lipschitz function. Indeed, as the proof of Theorem 4.1 relies only on the fact that $\\\\varphi$ is 1-Lipschitz, instead of the specific form of $\\\\varphi$, the same convergence result still holds when $\\\\varphi$ is replaced by another 1-Lipschitz function. This suggests that our discretization scheme is robust in the following sense: in actual computation, as long as the estimated $\\\\varphi$ is 1-Lipschitz (which is facilitated by the discriminator\\u2019s regularization in Gulrajani et al. (2017) and Petzka et al. (2018)), the scheme remains stable for small time steps and there is a well-defined limit. All the explanations above are now presented in the newly added Remark 4.1. \\n\\n2.\tThe introduction has been substantially rewritten to highlight the key differences between this work and the W2-FE paper. First, as we mention in the second paragraph, it is tempting to suspect that generative modeling by minimizing Wasserstein-1 loss (our paper) can be achieved by slightly modifying the W2-FE paper, which minimizes Wasserstein-2 loss. Yet, there are two major theoretical hurdles unique to the Wasserstein-1 case (which are detailed in the third paragraph). First, it is not even clear how \\u201cgradient\\u201d should be defined under the Wasserstein-1 distance, as subdifferential calculus for the space of probability measures is well-developed under the Wasserstein-$p$ distance for any $p>1$ but breaks down exactly for $p=1$. 
Second, when showing that a discretization of the gradient-flow ODE under the Wasserstein-2 distance converges, the W2-FE paper crucially relies on an interpolation result from optimal transport, which holds under the Wasserstein-$p$ distance for any $p>1$, excluding exactly $p=1$. The fourth paragraph explains how we overcome the above two hurdles. First, we observe that a general gradient notion can be defined, independently of subdifferential calculus, by using linear functional derivatives from the mean field game literature. This allows us to precisely formulate the gradient-flow ODE under the Wasserstein-1 distance and devise a corresponding discretization. Next, without relying on any interpolation result from optimal transport, we prove the convergence of the ODE discretization using a refined Arzela-Ascoli argument. This argument can be applied because the ODE coefficient is uniformly bounded (which is unique to the $p=1$ case), allowing us to prove appropriate compactness and equicontinuity of the flow of measures induced by the discretization.\\n\\n3.\\tThe minor notes on notation have all been addressed. Please note that Section 2 has been made much more concise and focused than before and every equation has been examined closely to avoid possible confusion. \\n\\n\\n**Questions (Q):**\\n\\n1.\\tIn Figure 1, the number of steps used when training the discriminator is 10. This is now mentioned in the second paragraph of Section 5.\\n \\n2.\\tOur analysis suggests that it is more important to have accurate approximations of $\\\\varphi$ (i.e., the Kantorovich potential) when the persistency level $K\\\\in\\\\mathbb{N}$ is larger. As explained in detail in the last two paragraphs of Section 6, any inaccuracy in $\\\\varphi$ will be amplified by persistent training, which in turn exacerbates the \\u201cgarbage in, garbage out\\u201d issue. 
Hence, when persistent training is intense (i.e., $K\\\\in\\\\mathbb{N}$ is large in our algorithm) and one enjoys the resulting accelerated convergence, accurate estimates of $\\\\varphi$ are more pressingly needed to avoid \\u201cgarbage in, garbage out.\\u201d\"}", "{\"summary\": \"This paper presents a new variant of Wasserstein generative adversarial networks (WGANs) based on a distribution-dependent ordinary differential equation (ODE). It introduces a method called W1 Forward Euler (W1-FE), which includes persistent training to improve efficiency. When persistent training is not utilized, the approach reverts to standard WGAN algorithms. By appropriately increasing the level of persistent training, the model's performance is enhanced.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"This paper presents a novel framework for Wasserstein generative adversarial networks (WGANs) based on distribution-dependent ordinary differential equations (ODEs). It utilizes persistent training to enhance the training process of WGANs and provides a solid mathematical foundation for this approach. Additionally, the paper includes numerical results that demonstrate the effectiveness of persistent training at various levels.\", \"weaknesses\": \"The experiments are insufficient; the paper should include additional datasets or real-world applications. It is better to present empirical results using well-known standard datasets.\\n\\nThe mathematical framework needs further clarification. For instance, the inequality at the top of page 4 should be explained more thoroughly.\\n\\nThe section on \\\"MATHEMATICAL PRELIMINARIES\\\" could benefit from more references. Please include citations for definitions and related concepts.\\n\\nThe contributions seem to lack originality, as mentioned in paragraph 2 of the Introduction. 
This section suggests that the differential equation approach outlined in the paper is already well-established in the existing GANs literature. Therefore, the question to the author is: Is the approach a variant of Generative Modeling using Wasserstein-1 loss, achieved by minimizing Wasserstein-2 loss (Huang and Malik, 2024)? If the novelty is something else, please clarify this in the Introduction. See the references from the paper itself:\\n\\nY.-J. Huang and Z. Malik. Generative modeling by minimizing the Wasserstein-2 loss, 2024. URL https://arxiv.org/abs/2406.13619.\\n\\nY.-J. Huang and Y. Zhang. GANs as gradient flows that converge. Journal of Machine Learning Research, 24(217):1\\u201340, 2023. URL http://jmlr.org/papers/v24/22-0583.html.\", \"questions\": \"I have a few questions and suggestions as follows.\\n\\n1.\\tFor completeness, please include a proof for the statement, \\u201c$\\\\mu \\\\rightarrow W_1(\\\\mu,\\\\mu_d)$ is strictly convex on $P_1(X)$.\\u201d\\n\\n2.\\tIn Proposition 3.1, it would be helpful to explain the first inequality (at the top of page 4).\\n\\n3.\\tThe manuscript mentions the use of persistent training, which is known to have certain limitations. Could you elaborate on how these limitations are addressed or mitigated in your training approach?\\n\\n4.\\tThe manuscript's writing could benefit from some improvements, particularly in conveying the contributions of the proposed work more clearly and concisely.\\n\\n5.\\tPlease consider including some recent, relevant citations to enhance the manuscript's context within the field.\\n\\n6.\\tThe numerical experiments are limited in scope; please include more real-world datasets (such as CIFAR-10 or CIFAR-100) in the empirical results.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We sincerely thank the reviewer for providing valuable constructive feedback. 
We respond to the comments point by point below.\\n\\n**Weaknesses (W):**\\n\\n1.\\tLet us separate our response into three parts. \\n\\n - Indeed, there are many recent papers that feature \\u201cWasserstein gradient flows\\u201d and consider discretization and persistent training, including the two mentioned by the reviewer. But a closer examination reveals that they don\\u2019t actually have a direct connection to Wasserstein GAN (WGAN). The common theme of these papers is to minimize a loss function using the well-developed gradient flows under the Wasserstein-2 distance. In contrast to this, WGAN by construction minimizes the Wasserstein-1 distance and cannot be easily analyzed by the standard Wasserstein-2 framework. A main contribution of our paper is to provide a rigorous study on Wasserstein-1 gradient flows, which are much less understood in the literature. This allows us to build up a theoretical framework that not only recovers WGAN but suggests improvements to it (which leads to our algorithm W1-FE). \\n\\n - The second last paragraph in the introduction now presents the explanations above, to distinguish our paper from others that also feature \\u201cWasserstein gradient flows.\\u201d In addition, the second, third, and fourth paragraphs in the introduction discuss in detail the major theoretical challenges we face in studying Wasserstein-1 gradient flows, as opposed to using the full-fledged theory of Wasserstein-2 gradient flows directly (as is done by other papers), and how we overcome these challenges in this paper. \\n\\n - On the other hand, we stress that our algorithm W1-FE is fundamentally different from WGAN. While Proposition 4.1 shows that W1-FE with persistency level $K=1$ (i.e., no persistent training at all) reduces to WGAN, this is in fact the only case where they coincide. 
In the newly revised Remark 4.2, we explain in detail that even if we also enforce persistent training in WGAN, W1-FE and WGAN will differ from each other starting from $K=2$. This is because the equality in the proof of Proposition 4.1 holds under $K=1$, but fails in general for any $K>1$. It is actually not surprising to see this general distinction between W1-FE and WGAN. As they are built from fundamentally different ideas for generative modeling (see the discussion below Proposition 4.1), the generator updates in these two algorithms are quite different and coincide only in the simplest case $K=1$. \\n\\n2.\\tRemark 4.2 has been substantially rewritten to possibly avoid any confusion. As mentioned above, what we hope to convey in Remark 4.2 is that even when persistent training is incorporated into WGAN, WGAN is still quite different from our algorithm W1-FE (except the simplest case $K=1$, i.e., no persistent training). In particular, W1-FE is not the same as adding persistent training to the generator update in WGAN. \\n\\n3.\\tIn Section 5, we consider a new experiment where we train our algorithm on the CIFAR-10 dataset. The results, presented in Figures 4 and 5, particularly show that W1-FE with $K>1$ outperforms W1-FE with $K=1$ (i.e., WGAN) in terms of both the eventual FID achieved and convergence speed. \\n\\n4.\\tLet us separate our response into three parts. \\n\\n - The \\u201c$m$\\u201d that appears in Definition 3.1 is only a common notation to remind us that the variable under discussion is a probability measure. There is no formal mathematical definition of it. This is now mentioned below Definition 3.1. \\n\\n - To better explain the meaning of $\\\\nabla\\\\frac{\\\\delta J}{\\\\delta m}$, we now clearly state a gradient-type property satisfied by $\\\\nabla\\\\frac{\\\\delta J}{\\\\delta m}$ (i.e., the first equation on p. 4). 
It shows that when points $y\\\\in\\\\mathbb{R}^d$ are moved by a vector field $\\\\xi:\\\\mathbb{R}^d\\\\to\\\\mathbb{R}^d$, $\\\\nabla\\\\frac{\\\\delta J}{\\\\delta m}(\\\\mu,y)$ serves to specify how moving along $\\\\xi(y)$ instantaneously changes the value of $J$. \\n\\n - To explain how the ODE (3.5) is obtained from our problem (3.1), we first recall the classical gradient-descent ODE in (3.2) for minimizing a convex function $f:\\\\mathbb{R}^d\\\\to\\\\mathbb{R}$. The essence of this classical ODE is to move along the negative gradient of the convex function $f$. For our problem (3.1), which is the minimization of the convex function $J:\\\\mathcal{P}_1(\\\\mathbb{R}^d)\\\\to\\\\mathbb{R}$, we hope to derive a gradient-descent ODE similar to (3.2). Once we take $\\\\nabla\\\\frac{\\\\delta J}{\\\\delta m}$ to be the gradient of $J$ (as argued in the previous paragraph), the idea of moving along the negative gradient of $J$ then yields the ODE (3.5). In the line above (3.5), we particularly mention that the derivation of (3.5) is in analogy to the classical ODE (3.2). \\n\\n**Questions (Q):** \\n\\n1.\\tPlease see our explanations above in the last part of W1 and also in W2.\\n\\n2.\\tWe have added a new experiment where our algorithm is trained on the CIFAR-10 dataset. Please see W3 above.\"}", "{\"summary\": \"This work analyzes Wasserstein GAN training through the lens of gradient flow dynamics derived from the optimal transport map between initial and data distributions. Authors demonstrate in Theorem 4.1 that by applying a sequence of updates to the generator distribution according to the gradient of the W1 witness function (Kantorovich potential), the resulting distribution in the limit matches the optimal one.\\nThis framework motivates the use of *persistent training* of the generator, where at each substep of training the discriminator and generator noise $z$ is frozen while the generator $G_\\\\theta$ is updated for K steps. 
Lastly, the authors use a few simplified experimental settings (e.g. 2D mixture of Gaussians) to demonstrate how persistent training can improve the rate of training convergence.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Utilizes tools from optimal transport for analyzing WGANs, providing a generalized WGAN training framework.\", \"Theory provides good insight into generator training hyperparameters, which is corroborated by experiments.\"], \"weaknesses\": [\"It seems to me that most of the uncertainty about how well WGAN training follows idealized gradient flow dynamics lies with the discriminator. Can you still arrive at a similar conclusion to Theorem 4.1 when the distance between approximated + true potential function is bounded?\", \"I'd like to see more discussion in the introduction about key differences between contributions of this work and the W2-FE paper.\"], \"minor_notes\": [\"Eq 2.2 \\\\mu_t, \\\\mu^d not defined initially\", \"both \\\\varphi and \\\\phi used on line 087\", \"Eq. 2.3 variable i is not defined\"], \"questions\": [\"What was the number of steps used when training the discriminator in figure 1? Does your theory suggest when it is more or less important to have accurate $\\\\varphi$ approximations during training?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Thanks for the author's response and clear explanation of the difference between WGAN and WGAN-FE. I recognize the theoretical contribution of this paper for building a connection between WGAN and W1 gradient flow. However, to meet ICLR's standard, I'm afraid some aspects\\u2014 experimental or theoretical\\u2014still require further improvement or clarification. Below, I outline my concerns:\\n\\n1. 
This paper trains a generator using the gradient flow of W1 distance. Similarly, other works trained generators using the Wasserstein gradient flow of KL divergence or F-divergence [1, 2]. In [1], the training approach appears very similar to the method presented in this paper (see Eq.(17) in [1]). This raises the question: what distinguishes using the gradient flow of the Wasserstein distance from that of F-divergence? Furthermore, does employing the Wasserstein distance offer any specific advantages? In [2], for instance, using the Langevin dynamics can alleviate mode collapse, but it appears that the proposed method does not exhibit this property. Understanding why to use the Wasserstein distance rather than other divergences is important and meaningful for training generative models. However, to the best of my knowledge, this question has not been sufficiently addressed so far. Can the authors give some insights into this question? Or at the very least, some experimental results are necessary to provide a comparison between these approaches.\\n\\n2. The experiments are still too simple. Given the rapid development of generative models in recent years, the experimental setup and results in this paper are not sufficient to meet the current standards of research in this field. For instance, this paper provides only qualitative results on CIFAR-10 but does not include quantitative evaluations. Are there measurable advantages over WGAN-GP or other GAN-based methods? Additionally, Figures 4 and 5 focus solely on ablation studies and do not include comparisons with other types of models, which limits the scope of the analysis.\\n\\n3. Although the optimization of the generator in this paper is theoretically different from that in WGAN, what happens if the number of generator training iterations is increased within each mini-max optimization step? 
Would the proposed method still offer any advantages in such a scenario?\\n\\nBased on the above concerns, in the current stage, I tend to maintain my score.\\n\\n[1] MonoFlow: Rethinking Divergence GANs via the Perspective of Differential Equations. ICML 2023\\n[2] Cooperative Learning of Energy-Based Model and Latent Variable Model via MCMC Teaching. AAAI 2018\"}", "{\"summary\": \"This paper provides a new perspective on WGAN from the view of ODE associated with the gradient of Wasserstein derivative, demonstrating that persistent training on a generator can improve WGAN training.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. This paper provides a clear and novel explanation of WGAN's training from the perspective of the gradient flow of Wasserstein distance.\\n\\n2. Both theoretical explanations and experiments on toy and MNIST datasets demonstrate the effectiveness of persistent training, a common trick for training WGAN.\", \"weaknesses\": \"1. The main contributions in this paper, i.e. discretization and persistent training, are common tricks for training WGANs[1,2], which are not novel enough in practical implementation. For example, as is shown in the proof of Proposition 4.1, persistent training seems equivalent to simply increasing the generator's iterations in the original WGAN's training. Thus please clarify how discretization and persistent training differ from the above existing methods.\\n\\n2. Obtaining the Kantorovich potential is a challenging and important step in ODE-based WGAN training, but in this paper, it's still the same as the original WGAN, leaving some problems for persistent training as discussed in Remark 4.2, harming the consistency between theory and practice.\\n\\n3. Experimental validation is not comprehensive enough: the datasets used are too small-scale. As in WGAN and WGAN-GP, more common and large-scale baseline datasets should also be verified. 
For example, how does the proposed method perform on CIFAR-10 or CelebA compared to baseline WGAN methods in terms of FID, IS, and convergence speed?\\n\\n4. Some mathematical explanations and notations should be modified. For example, there is no definition of $m$ in Eq.(2.4). Besides, more introduction of $\\\\nabla \\\\frac{\\\\delta J}{\\\\delta m}$ is suggested to be added; since $\\\\nabla \\\\frac{\\\\delta J}{\\\\delta m}$ is very important in the whole method, how to understand this gradient and how to obtain Eq(3.2) from Eq(3.1) should be added in the main paper.\\n\\n[1] Variational Wasserstein gradient flow. ICML 2022.\\n\\n[2] Scalable Wasserstein Gradient Flow for Generative Modeling through Unbalanced Optimal Transport. ICML 2024.\", \"questions\": \"1. Please clarify how persistent training differs from simply increasing generator iterations.\\n\\n2. As discussed in the third point of the above weaknesses, more common large-scale datasets should also be verified.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewer again for the feedback, and we respond to the comments point by point below.\\n\\n1.\\tWe agree with the reviewer that it is important to compare gradient flows of Wasserstein-1 distance and those of $f$-divergence. A logical implementation of this should contain three steps. \\n\\n **Step 1:** Show that gradient flows of Wasserstein-1 distance are theoretically well-defined and design a corresponding algorithm (i.e., W1-FE in our paper). \\n\\n **Step 2:** Show that W1-FE, as an extension of WGAN, outperforms WGAN.\\n\\n **Step 3:** Compare W1-FE with algorithms induced by gradient flows of $f$-divergence.\\n\\n - Note that Step 2 is important: if W1-FE actually underperforms WGAN, one should instead compare WGAN with gradient flows of $f$-divergence in Step 3. 
After all, logically what\\u2019s the point of comparing an inferior algorithm with other kinds of algorithms? One hopes to compare a superior algorithm with other kinds of algorithms to find an even better one. \\n\\n - Our paper precisely focuses on Steps 1 and 2, which already require nontrivial theoretical development and numerical analysis. Including also Step 3 will likely make our paper too lengthy and less focused, which may not meet the usual ICLR standard. As a result, in terms of presentation, it seems reasonable to report our findings under Steps 1 and 2 in the present paper and carry out Step 3 comprehensively in a separate follow-up paper. \\n\\n\\n - We can provide some insights into why gradient flows of Wasserstein-1 distance could outperform those of $f$-divergence. Wasserstein GAN (WGAN) [1] and $f$-divergence GAN ($f$-GAN) [2] are two well-known extensions of the original GAN framework. While they were proposed roughly at the same time (in 2017 and 2016) and have both been popular since then, the popularity of WGAN is particularly phenomenal. For instance, [1] has been cited 17,600 times and some modified versions of WGAN are also highly cited, such as [3] (12,200 times). Such popularity of WGAN stems from its enhanced stability, which results from its ability to alleviate mode collapse and thus facilitate convergence [1] [4]. By contrast, [2] has been cited only 1,980 times and many versions of $f$-GAN are known to suffer mode collapse severely. In view of this, it is not unreasonable to conjecture that gradient flows of Wasserstein-1 distance (which build upon and improve WGAN) can outperform gradient flows of $f$-divergence (which build upon and improve $f$-GAN). \\n\\n2.\\tWe are somewhat caught off guard by this comment. \\n\\n - First, in weakness #3, the reviewer only suggested that we use a more common and large-scale dataset, such as CIFAR-10. And we did exactly that in our revision. 
If the newly-raised questions had been communicated in the first place, we would have had more time to properly address them in our revision.\\n\\n - Second, we don\\u2019t exactly understand the comment \\u201c\\u2026provides only qualitative results on CIFAR-10 but does not include quantitative evaluations.\\u201d In fact, we quantitatively evaluate algorithms\\u2019 performance on CIFAR-10 by computing the evolution of Fr\\u00e9chet inception distance (FID) in Figure 4, alongside qualitative results in Figure 5.\\n - Third, as mentioned in the paper, we use WGAN-LP instead of WGAN-GP because the former is known to outperform the latter in the literature. \\n - Fourth, as mentioned under 1, our paper focuses on Steps 1 and 2. For the purpose of Step 2, ablation studies are enough and there is no need to have comparisons with many other types of models. Such comparisons belong to Step 3 and, as argued above in 1, it could be reasonable to relegate Step 3 to a separate follow-up paper. \\n\\n3.\\tYes, our proposed method still offers advantages, based on an additional training experiment on CIFAR-10. \\nSpecifically, we allowed persistent training in the generator update of WGAN and trained this modified WGAN on CIFAR-10. The results are significantly worse than those under our proposed method W1-FE. If there is a chance, we can add this additional experiment in the final version of our paper. \\n\\n\\n[1] Wasserstein generative adversarial networks. ICML 2017.\\n\\n[2] $f$-GAN: Training Generative Neural Samplers using Variational Divergence Minimization. NIPS 2016.\\n\\n[3] Improved training of Wasserstein GANs. NIPS 2017.\\n\\n[4] Towards principled methods for training generative adversarial networks. ICLR 2017.\"}", "{\"metareview\": \"This paper is concerned with the Wasserstein generative adversarial network (WGAN). It introduces a notation of gradient flow associated with WGAN leveraging the linear functional derivative. 
Based on it, the authors propose an algorithm to train WGAN. The major criticism is on the contribution. The proposed method resembles existing methods that utilize gradient flow associated with Wasserstein-2 metric or f-divergence. The only difference is in the computation of the potential function (discriminator). In addition, the experiments are insufficient to demonstrate the advantages of the proposed method over existing ones. Finally, the theoretical result is weak. It (Theorem 4.1) claims convergence to a continuous trajectory, but the property of this trajectory is unknown. Its convergence to the target distribution is important for it to be useful in applications. The non-uniqueness of the Kantorovich potential is also overlooked in the theoretical development.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers raise some questions on the results and presentations. The authors reply by modifying the paper, adding experiments in the paper, and adding clarifications in the response. Several reviewers are not convinced and keep their original evaluation of this work.\"}", "{\"title\": \"Revision Submitted\", \"comment\": \"We would like to thank all the reviewers for your valuable comments from various angles. We have carefully revised our paper according to every single comment you provided and have just submitted the revised version of our paper. While each one of you can see our detailed response to your comments right below your review, we would like to point out here two major changes made to our paper.\\n\\n1. A new training experiment on the CIFAR-10 dataset is added.\\n2. 
In the introduction, we now explain in detail our contributions relative to other recent studies that also feature \\\"Wasserstein gradient flows.\\\" In short, while almost all other papers rely on the well-developed gradient flow theory under the Wasserstein-2 distance, we focus on Wasserstein-1 gradient flows, which are much less understood and necessary for making a connection to Wasserstein GAN. \\n\\n\\nSincerely,\\n\\nThe authors\"}" ] }
7oT1X8xjIk
On the Identifiability of Nonlinear Representation Learning with General Noise
[ "Yujia Zheng", "Yingyao Hu", "Kun Zhang" ]
Noise is pervasive in real-world data, posing significant challenges to reliably uncovering latent generative processes. While evolution may have enabled the brain to solve such problems over millions of years, machine learning faces this task in just a few years. Most prior identifiability theories, even under restrictive assumptions like linear generating functions, are limited to handling only additive noise and fail to address nonparametric noise. In contrast, we study the problem of provably learning nonlinear representations in the presence of nonparametric noise. Specifically, we show that, under certain structural conditions between latent and observed variables, latent factors can be identified up to element-wise transformations, even when both the generative processes and noise are nonlinear and lack specific parametric forms. We further present extensions of the general framework, demonstrating trade-offs between different assumptions and the identifiability of latent variables in the presence of both noise and distortions. Moreover, we prove that the underlying directed acyclic graph can be recovered even with nonlinear measurement errors, offering independent insights into structure learning. Our theoretical results are validated on both synthetic and real-world datasets.
[ "Latent Variable Models", "Identifiability", "Noise" ]
https://openreview.net/pdf?id=7oT1X8xjIk
https://openreview.net/forum?id=7oT1X8xjIk
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zh3vSLReCw", "zYRKzLFnZ9", "z308MjwmJu", "xbKi8dbkCU", "vl9HG3NSMN", "viecl9Twta", "sT7OKTEsVc", "sDuykQypL7", "rqAyg2h34f", "r2vJWu660T", "qNufuOGZQl", "pwzHxJBpua", "oacH3FhFQG", "o3aXOdPZqi", "juN78m2LpY", "j9RJ1tjCA8", "fXvh1CkuUF", "duljJgkfa6", "cAREfcfS0i", "acvL7J8FUI", "YWfVN41T8O", "WHPzi6call", "Uo7ztY5wYC", "RgP6ng19ld", "PeDwknBEBN", "PI0WW7g50x", "OfDcEcMYwB", "OBFea7ltCs", "Kr5bUMHwag", "JPrRGkUxIB", "IKqLQssx05", "ByZMVAgOj6", "BhKfN33vUK", "AnkE8dwaQi", "6hELOICrbx", "42wVJoiiC2" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730167013156, 1732637141177, 1732175002747, 1732175106859, 1732555684118, 1732175408756, 1732620817804, 1732175435099, 1732174939685, 1732174978562, 1732175324181, 1732285678221, 1737687437161, 1732174801995, 1733183700770, 1732582544436, 1730440835827, 1732555784614, 1732303377777, 1733076731186, 1732303426782, 1732175160865, 1733176745991, 1732555570803, 1730754166915, 1733185404979, 1730697281164, 1730134823777, 1732174836723, 1732174858316, 1732580059695, 1732175238286, 1732175140403, 1732175353094, 1732175264377, 1732206654159 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7585/Reviewer_LShV" ], [ "ICLR.cc/2025/Conference/Submission7585/Authors" ], [ "ICLR.cc/2025/Conference/Submission7585/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission7585/Authors" ], [ "ICLR.cc/2025/Conference/Submission7585/Authors" ], [ "ICLR.cc/2025/Conference/Submission7585/Authors" ], [ "ICLR.cc/2025/Conference/Submission7585/Reviewer_KLTp" ], [ "ICLR.cc/2025/Conference/Submission7585/Authors" ], [ "ICLR.cc/2025/Conference/Submission7585/Authors" ], [ "ICLR.cc/2025/Conference/Submission7585/Authors" ], [ "ICLR.cc/2025/Conference/Submission7585/Authors" ], [ "ICLR.cc/2025/Conference/Submission7585/Reviewer_88C8" ], [ "ICLR.cc/2025/Conference/Submission7585/Authors" ], [ "ICLR.cc/2025/Conference/Submission7585/Authors" ], [ "ICLR.cc/2025/Conference/Submission7585/Reviewer_88C8" ], [ "ICLR.cc/2025/Conference/Submission7585/Authors" ], [ "ICLR.cc/2025/Conference/Submission7585/Reviewer_QWn8" ], [ "ICLR.cc/2025/Conference/Submission7585/Authors" ], [ "ICLR.cc/2025/Conference/Submission7585/Authors" ], [ "ICLR.cc/2025/Conference/Submission7585/Authors" ], [ "ICLR.cc/2025/Conference/Submission7585/Authors" ], [ "ICLR.cc/2025/Conference/Submission7585/Authors" ], [ "ICLR.cc/2025/Conference/Submission7585/Authors" ], [ "ICLR.cc/2025/Conference/Submission7585/Authors" ], [ "ICLR.cc/2025/Conference/Submission7585/Reviewer_Vhc1" ], [ "ICLR.cc/2025/Conference/Submission7585/Authors" ], [ "ICLR.cc/2025/Conference/Submission7585/Reviewer_88C8" ], [ "ICLR.cc/2025/Conference/Submission7585/Reviewer_KLTp" ], [ "ICLR.cc/2025/Conference/Submission7585/Authors" ], [ "ICLR.cc/2025/Conference/Submission7585/Authors" ], [ "ICLR.cc/2025/Conference/Submission7585/Reviewer_QWn8" ], [ "ICLR.cc/2025/Conference/Submission7585/Authors" ], [ "ICLR.cc/2025/Conference/Submission7585/Authors" ], [ "ICLR.cc/2025/Conference/Submission7585/Authors" ], [ "ICLR.cc/2025/Conference/Submission7585/Authors" ], [ "ICLR.cc/2025/Conference/Submission7585/Authors" ] ], "structured_content_str": [ "{\"summary\": \"This paper clearly sets up a set of nonparametric assumptions that allow for identifiability of latent factors, a 
fundamental problem in theoretical machine learning.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"This paper clearly sets up a set of nonparametric assumptions that allow for identifiability of latent factors, a fundamental problem in theoretical machine learning. The assumptions are somewhat strong but are nicely set up and create a framework where proofs seem relatively straightforward.\\n\\nThe framework is applied to a wide variety of special cases that have arisen in the literature, yielding strong results in each case.\\n\\nIf the questions below are addressed, I believe this result would be a strong contribution to the space.\", \"weaknesses\": \"While notation is thoroughly defined and overall there is a lot of discussion, still there are many key places where details are either missing or difficult to ascertain (see questions).\\n\\nThe structural sparsity assumption is extremely strong, albeit somewhat understandable. That said, the paper could be somewhat more realistic when discussing the assumptions - many references are made to real-world data types that do not immediately seem to necessarily follow this framework exactly. \\n\\nThe assumptions on nondegeneracy preclude linear functions, which is also understandable. \\n\\nTechnically, the variability assumption is not satisfiable, since A = Z x E is not disallowed (since Z, E are not subsets of themselves). Hence p(A|d_1) = p(A|d_2) = 1 identically, violating the assumption. I think this is possibly a typo.\", \"questions\": \"Several parts of the assumptions are not clear to me. For instance, is it assumed that the 2 domains are labeled? 
Also, in Eq (30), it is stated that \\\\hat{F} is estimated under a sparsity constraint - what is this constraint and where is it listed in the assumptions?\\n\\nEq (35) - why is this without loss of generality?\\n\\nIn the experiment section, can more details be provided on this split between latent factors and noise factors in the synthetic data, or is this as extremely simple as it sounds? What is the \\\"base\\\" experiment in this case?\\n\\nFor the variability assumption - why are rectangular sets not included? Where is this assumption used in the proof of Thm 1, for instance?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you very much for your thoughtful suggestion\", \"comment\": \"Dear Reviewer KLTp,\\n\\nThank you very much for your thoughtful suggestion. Indeed, extending the current framework to the non-invertible case would be a highly interesting direction for future research. At the same time, given the nonparametric nature of the setting, it remains unclear whether disentangling noise from a non-invertible mapping\\u2014where information is fundamentally lost\\u2014is feasible. By significantly relaxing the existing conditions for block-wise disentanglement (e.g., from $O(n)$ domains to just $2$ domains), our results represent an essential step toward that ultimate goal.\\n\\nMeanwhile, as highlighted throughout the paper, some of our results use the structural conditions from Zheng et al. and the variability condition from Kong et al. At the same time, we would like to emphasize the unique contributions of our work, which, to the best of our knowledge, **have not been achieved in any prior research**, even from a purely technical perspective:\\n\\n- **Representation Identification:** Previous theories and techniques typically require **$O(n)$ domains** to disentangle the changing latent components. 
In contrast, our theory demonstrates that as few as **$2$ domains** may suffice.\\n\\n- **Structure Identification:** We establish the identifiability of the hidden causal graph for **nonparametric structural equation models** (with general nonlinear functional relations and non-additive exogenous noise) under **nonlinear measurement error**. To the best of our knowledge, no prior work has introduced identifiability results in such a general setting.\\n\\nLastly, we are sincerely grateful for your time and insightful suggestions. We genuinely hope that you might reconsider your assessment and give our work another chance. Of course, we deeply respect your perspective and appreciate the thoughtful effort you have put into reviewing our work.\\n\\nBest regards,\\n\\nAuthors of Submission7585\"}", "{\"title\": \"We deeply appreciate the time and effort you invested and your insightful comments (3/3)\", \"comment\": \"**References**\\n\\n[1] Peters et al., Causal discovery with continuous additive noise models, JMLR, 2014\\n\\n[2] Kong et al., Partial identifiability for domain adaptation, ICML 2022\\n\\n[3] Hyv\\u00e4rinen and Pajunen, Nonlinear independent component analysis: Existence and uniqueness results. 
Neural networks, 1999\\n\\n[4] Taleb and Jutten, Source separation in post-nonlinear mixtures, IEEE Transactions on Signal Processing, 1999\\n\\n[5] Hyv\\u00e4rinen and Morioka, Unsupervised feature extraction by time-contrastive learning and nonlinear ICA, NeurIPS 2016\\n\\n[6] Khemakhem et al., Variational autoencoders and nonlinear ICA: A unifying framework, AISTATS 2020\\n\\n[7] Sorrenson et al., Disentanglement by nonlinear ICA with general incompressible-flow networks (GIN), ICLR 2020\\n\\n[8] H\\u00e4lv\\u00e4 et al., Disentangling identifiable features from noisy data with structured nonlinear ICA, NeurIPS 2021\\n\\n[9] Buchholz et al., Function classes for identifiable nonlinear independent component analysis, NeurIPS 2022\\n\\n[10] Lachapelle et al., Disentanglement via mechanism sparsity regularization: A new principle for nonlinear ICA, CLeaR 2022\\n\\n[11] Zheng et al., On the identifiability of nonlinear ICA: Sparsity and beyond, NeurIPS 2022\\n\\n[12] Hyv\\u00e4rinen et al., Identifiability of latent-variable and structural-equation models: from linear to nonlinear, Annals of the Institute of Statistical Mathematics, 2024\\n\\n[13] Locatello et al., Challenging common assumptions in the unsupervised learning of disentangled representations, ICML 2019\\n\\n[14] Xie et al., Unpaired image-to-image translation with shortest path regularization, CVPR 2023\\n\\n[15] Lachapelle et al., Additive decoders for latent variables identification and cartesian-product extrapolation, NeurIPS 2023\\n\\n[16] Yao et al., Temporally Disentangled Representation Learning, NeurIPS 2022\"}
Moreover, we have conducted **additional experiments** to further validate our theoretical results in more diverse settings. Please find our detailed, point-by-point responses below. If you have any further feedback, please do not hesitate to share, and we would be more than happy to address it.\\n\\n**Q1:** It is unclear how Definition 1 (element-wise identifiability) is commonly used in the field of latent variable identification, as only two papers (by the same author) are cited below this definition.\\n\\n**A1:** Thanks for raising this point. Element-wise identifiability is the most common objective in the related field, and arguably the best we can achieve without additional information. In the original submission, we only listed two papers since (Hyv\\u00e4rinen and Pajunen, 1999) was one of the first to explore the problem of identifying nonlinear latent variable models with a focus on existence and uniqueness, while (Hyv\\u00e4rinen et al., 2024) provides a comprehensive survey on the development in the last two decades.\\n\\nAt the same time, we agree that introducing more works would further highlight the importance of Definition 1 in the literature, and we have updated the manuscript as follows:\\n\\n> \\u201cElement-wise identifiability guarantees that the estimated factors correspond to the true generating factors without any mixture or entanglement. 
Standard ambiguities such as permutations and rescaling may remain after identification, which are fundamental indeterminacies commonly noted in the literature (Hyv\\u00e4rinen & Pajunen, 1999; Khemakhem et al., 2020a; Sorrenson et al., 2020; H\\u00e4lv\\u00e4 et al., 2021; Yao et al., 2021; Lachapelle et al., 2022; Buchholz et al., 2022; Zheng et al., 2022; Lachapelle et al., 2024; Hyv\\u00e4rinen et al., 2024) and represent the best achievable outcome without imposing further restrictive assumptions.\\u201d\\n\\nWe hope that our explanation and the update address your concern and clarify the relevance of Definition 1 within the context of latent variable identification. Thanks again for your comment.\\n\\n**Q2:** In all the theorems, the author notes that the dataset should be sufficiently large. However, it is unclear what qualifies as 'large' and how this requirement is applied in the proofs.\\n\\n**A2:** Thank you for your great question. Following standard practice in the identifiability literature (see e.g., a recent survey (Hyv\\u00e4rinen et al., 2024)), our theoretical results are derived under the asymptotic setting, which is why we note \\u2018sufficiently large\\u2019 in the theorem. 
This ensures that finite sample errors do not interfere with the results, allowing the proofs to focus on the asymptotic case.\\n\\nTo clarify and avoid potential confusion, we have removed this phrasing from the theorems and instead highlighted the asymptotic setting in the preliminaries as follows:\\n\\n> \\u201cFollowing the previous works (see e.g., a recent survey (Hyv\\u00e4rinen et al., 2024)), all of our results are in the asymptotic setting.\\u201d\\n\\nWe hope this could help to avoid potential misunderstanding, and we sincerely appreciate your constructive feedback.\"}", "{\"title\": \"We sincerely thank you for the time you\\u2019ve dedicated to reviewing our work\", \"comment\": \"Dear Reviewer QWn8,\\n\\nWe sincerely thank you for the time you\\u2019ve dedicated to reviewing our work. As the discussion period comes to a close, we would be truly grateful if you could let us know whether our responses have adequately addressed your concerns.\\n\\nWith appreciation,\\n\\nAuthors of Submission7585\"}", "{\"title\": \"We deeply value your detailed review and the insightful feedback you have shared (3/4)\", \"comment\": \"- **Q2.2:** In existing works on structural sparsity, e.g., (Zheng et al., 2022). In this work, the authors do not seem to require that assumption. What is the assumption that the authors make that allows them to get rid of the independence assumption?\\n\\n- **A2.2:** Thanks for raising this interesting point. In fact, the structural sparsity assumption in both (Zheng et al., 2022) and our work directly implies independence. 
We conjecture that this might be the reason that, although independence is mentioned in (Zheng et al., 2022), the proof in (Zheng et al., 2022) never utilizes the most common property of it, i.e., the factorization of the joint density.\\n\\n Specifically, if two latent variables $\\\\mathbf{z}_q$ and $\\\\mathbf{z}_v$ are dependent, then it is either ${(\\\\mathcal{F}\\\\_{z})}\\\\_{:, q} \\\\subseteq {(\\\\mathcal{F}\\\\_{z})}\\\\_{:, v}$ or ${(\\\\mathcal{F}\\\\_{z})}\\\\_{:, v} \\\\subseteq {(\\\\mathcal{F}\\\\_{z})}\\\\_{:, q}$. Without loss of generality, suppose ${(\\\\mathcal{F}\\\\_{z})}\\\\_{:, q} \\\\subseteq {(\\\\mathcal{F}\\\\_{z})}\\\\_{:, v}$. In this case, it becomes impossible to find a set $\\\\mathcal{C}\\\\_{q}$ s.t. $\\\\bigcap\\\\_{i \\\\in \\\\mathcal{C}\\\\_{q}} {(\\\\mathcal{F}\\\\_{z})}\\\\_{i, :}=\\\\{q\\\\}$. This is because for every row in the Jacobian where the $q$-th column is nonzero, the $v$-th column is also nonzero, making it impossible to uniquely disentangle the $q$-th column by the intersection of rows.\\n\\n To highlight this point, we have also added the following note in the updated manuscript:\\n\\n > \\u201cIt might be worth noting that the structural sparsity implies the independence among latent variables $\\\\mathbf{z}$. Specifically, if two latent variables are dependent, it becomes impossible to disentangle one of them by the intersection of a set of observed variables that are influenced by these latent variables.\\u201d\\n\\n\\n- **Q2.3:** Can the authors better tell if there are really new proof techniques that were needed to solve the problem in Theorem 1, other than a direct combination of results from (Kong et al., 2022) and (Zheng et al., 2022)?\\n\\n\\n- **A2.3:** Thanks a lot for the question, which provides an opportunity for us to further clarify our results. As discussed in detail in A2.1 and A2.2, there is no previous work that can disentangle the changing latent part with only two domains. 
Thus, for block identification, our proof of Theorem 1 is not a direct combination of results from (Kong et al., 2022) and (Zheng et al., 2022).\\n\\n Specifically, in (Kong et al., 2022), given only two domains, their results show that the estimated invariant part does not depend on the changing part. However, the changing part itself is not disentangled; its estimation may still be mixed with the invariant variables. In contrast, our proof demonstrates block identifiability for both the invariant and changing parts with only two domains (see Eqs. 27\\u201336). This distinction is critical, as without disentangling the changing latent variables $\\\\mathbf{z}$, we could not achieve the component-wise identifiability results for $\\\\mathbf{z}$ presented in Theorem 1.\\n\\n**Q3:** I would like to understand why is Theorem 2 any different from a generalization of the identifiability based on structural sparsity to additive noise models?\\n\\n**A3:** Thank you for raising this point. Indeed, Theorem 2 can be seen as a generalization of identifiability based on structural sparsity to additive noise models, which can be addressed using the deconvolution strategy presented in (Khemakhem et al., 2020). At the same time, as highlighted in the remark (L293-297), additive noise becomes even simpler to remove when adopting a structural perspective. Specifically, the derivative of the observed variables with respect to the latent variables is unaffected by additive noise, making it easier to disentangle. Thus, Theorem 2 is introduced primarily to establish a connection between our work and existing methods for handling noise. This connection has been emphasized in the manuscript (L300-304) to align our contributions with the broader context of identifiability research.\\n\\n\\n**Q4:** It seems that Theorem 3 can be derived based on Theorems 1 and 2, and thus it could just be a corollary.\\n\\n**A4:** Thank you for your valuable suggestion. 
In response, we have updated the manuscript to present Theorem 3 as a corollary (Corollary 1 in the updated manuscript) to avoid potential confusion. The result was intended to formally introduce the concept of distortion, which serves as a natural transition to exploring a new setting\\u2014structure learning with nonlinear measurement error.\"}", "{\"comment\": \"I thank the authors for their time and efforts in their responses. Based on the responses, I am still quite convinced that the framework is not as different from existing works as it is made out to be.\\n\\n1. I still stand by the remark that existing works (Kugelgen et al.) to disentangle content from style are quite relevant to this setting as well. You could say that they use style variables in their story and style variables are semantically meaningful. But those style variables mathematically speaking could as well be noise variables and then you will be able to isolate content from noise. \\n\\n2. I find the proofs to be a combination of the techniques already introduced in Kong et al. and Zheng et al. It is not bad to combine existing proofs but if that is the main contribution then it feels somewhat incremental and not very surprising. The results with additive noise can be done with standard strategies of deconvolution. \\n\\nI believe that the authors should present a more convincing case for their contributions. Here is a concrete theoretical suggestion I have for the authors. Based on existing frameworks of identifiability, it is quite clear one can tackle additive noise or one can tackle noise via treating it as a latent variable in the diffeomorphism. But can we go beyond these two settings in a very clear way, especially when it is not a diffeomorphism? Consider x <-- f(z, e), where e is noise; x could be used to invert back z but not e. In such a framework, developing identification guarantees would be a novel departure from existing works. 
\\n\\nIn view of the above, I maintain my rating.\"}", "{\"title\": \"We deeply value your detailed review and the insightful feedback you have shared (4/4)\", \"comment\": \"**Q5:** It would have been nice if there were experiments for the other specific settings.\\n\\n**A5:** We sincerely appreciate your constructive feedback. In light of it, we have conducted **additional experiments** and included the results in Appx. D.2. As shown in Figure 11, latent variables are identifiable in the presence of noise and distortion in the considered settings. Additionally, we can recover the structure of the Jacobian; however, since we have not proposed a practical causal discovery algorithm, we focus here on the identification of latent variables.\\n\\n**Q6:** I found the writing quite verbose in many places. The authors seem to give examples without giving citations.\\n\\n**A6:** Thanks a lot for your suggestions on the writing. In light of your feedback, we believe that removing some examples would provide more space for including proof sketches in the main paper, which will highlight the unique technical contributions in the proof as well as make the discussion more concise. We have made the changes in the updated manuscript accordingly (e.g., L168-175). We sincerely appreciate the time you devoted to improving the manuscript, and we look forward to any further feedback.\\n\\n\\n---\\n\\n**References**\\n\\n\\n[1] Von K\\u00fcgelgen et al., Self-supervised learning with data augmentations provably isolates content from style, NeurIPS 2021\\n\\n[2] Kong et al., Partial identifiability for domain adaptation, ICML 2022\\n\\n[3] Lachapelle et al., Nonparametric partial disentanglement via mechanism sparsity: Sparse actions, interventions and sparse temporal dependencies. 
arXiv 2024\\n\\n[4] Zheng et al., On the identifiability of nonlinear ICA: Sparsity and beyond, NeurIPS 2022\\n\\n[5] Zheng and Zhang, Generalizing nonlinear ICA beyond structural sparsity, NeurIPS 2023\\n\\n[6] Khemakhem et al., Variational autoencoders and nonlinear ICA: A unifying framework, AISTATS 2020\"}", "{\"title\": \"We deeply appreciate the time and effort you invested and your insightful comments (1/3)\", \"comment\": \"We deeply appreciate the time and effort you invested and your insightful comments, which have greatly enhanced the clarity of the manuscript. Accordingly, we have added **several new discussions**, **full details of previous results**, and **additional experiments** in the updated manuscript, with a special focus on the **relation with previous works** and **clarification on conditions**. Please kindly find our response to each point detailed below. If you would like to offer any further suggestions, please feel free to let us know; we would be more than happy to make further adjustments as needed.\\n\\n**Q1:** The paper is missing the citation of some important theoretical results from the field, e.g., causal discovery with continuous additive noise models.\\n\\n**A1:** Thank you for raising this point. Upon carefully reviewing the mentioned results\\u2014which are indeed fundamental to causal discovery\\u2014we believe there may be some misunderstanding regarding the distinction between our work and existing causal discovery methods, such as those based on additive noise models (e.g., Peters et al., 2014). The key differences are as follows:\\n\\n- In existing works, they only consider the general SEM with *additive* noise, *without* any measurement error. That is, $\\\\mathbf{x}_i = f\\\\_{1,i}(\\\\textbf{Pa}(\\\\mathbf{x}_i)) + \\\\boldsymbol{\\\\xi}_i$.\\n\\n- In our work, we consider the general SEM with *general* noise, *with* nonlinear measurement error. 
That is, $\\\\mathbf{z}_i = f\\\\_{1,i}(\\\\textbf{Pa}(\\\\mathbf{z}_i), \\\\boldsymbol{\\\\xi}_i)$ and $\\\\mathbf{x}_i = f\\\\_{2,i}(\\\\mathbf{z}_i) + \\\\boldsymbol{\\\\eta}_i$.\\n\\nIn summary, the \\\"additive noise\\\" in existing works refers to noise strictly within the SEM, whereas we allow for more general, nonlinear, and non-additive noise in the SEM while also addressing scenarios with additional nonlinear measurement error. Thus, we are actually dealing with a more general setting compared to existing works.\\n\\n\\n**Q2:** Lack of sufficient explanatory details of the theoretical constructs and manipulations.\\n\\n\\n**A2:** Thank you very much for your time and effort in reviewing our manuscript. We greatly appreciate your thoughtful feedback. After carefully considering your questions, we believe there may be potential misunderstandings, some of which likely stem from a typo that we can immediately clarify. Your constructive comment has been invaluable in helping us identify and correct this typo to improve the clarity of our work. We have addressed these concerns in the revised manuscript to ensure better understanding and avoid further confusion. Please find our detailed response below:\\n\\n\\n- **Q2.1:** The assumption of \\u201cVariability exists\\u201d seems to be much stronger than the assumption of \\u201cDomain Variability\\u201d (Kong et al. 2022), and it is not clear how restrictive it is.\\n\\n- **A2.1:** Thanks a lot for raising this point. We sincerely appreciate you bringing this to our attention. We conjecture that the confusion stems from a typo in our original submission. Our intention was for the assumption of \\u201cVariability exists\\u201d to be equivalent to the \\u201cDomain Variability\\u201d assumption as defined in Kong et al. (2022). 
To address and rectify this, we have implemented the following revisions in our manuscript:\\n\\n - Notation:\\n\\t- Our domain variables are exactly the same as those in (Kong et al., 2022). To avoid potential misunderstanding, we have changed the notation from $\\\\mathbf{d}$ to $\\\\mathbf{u}$.\\n\\n - Corrected Typo:\\n\\t- Identical to (Kong et al., 2022), the assumption requires that for any set $A$ there exist two domains, and these domains can change across different sets of $A$. The original manuscript contained a typo in the assumption statement by switching \\u2018for any\\u2019 and \\u2018there exists\\u2019, which we have corrected in the updated manuscript:\\n \\n \\t> \\u201cSuppose for any set $A \\\\subseteq \\\\mathcal{Z} \\\\times \\\\mathcal{E}$ \\u2026, there exist two domains $u_1$ and $u_2$ that are independent of $\\\\boldsymbol{\\\\epsilon}$ s.t.\\u2026\\u201d\\n\\n - Enhanced Explanation:\\n\\t- To further prevent confusion, we have included the following statement next to the assumption:\\n\\n \\t> \\u201cThe same as in (Kong et al., 2022), these two domains can differ for different values of $A$, providing great flexibility.\\u201d\\n \\t \\n \\tAs a result, it is clearer that the assumption holds as long as the conditional probabilities do not cancel with each other over all pairs of domains. We have also highlighted it in the example, hoping that this could also address your concerns:\\n\\n \\t> \\u201c... Let us consider the two domains where the integrals do not cancel with each other (the domains can change across different sets $A$): ...\\u201d\\n\\n These updates enhance the overall clarity of our presentation and make it straightforward that the assumption of variability is natural, which is consistent with the conclusions in (Kong et al., 2022). We sincerely apologize for the oversight and any confusion it may have caused. 
Your feedback was instrumental, and we are grateful for your diligence in improving our manuscript.\"}", "{\"title\": \"We deeply appreciate the time and effort you invested and your insightful comments (2/3)\", \"comment\": \"- **Q2.2:** In the proof of theorem 1 the authors refer to steps 1, 2, and 3 in the proof of Theorem 4.2 in (Kong et al., 2022) from which they conclude that some part of the Jacobian is zero. However, it is not clear how the assumptions of (Kong et al., 2022) correspond to the assumptions considered in this work and how the setting of the problems maps to each other.\\n\\n\\n- **A2.2:** Thanks so much for your comment. As detailed in A2.1, the related assumption is identical to that in (Kong et al. 2022) after correcting the typo. In light of your suggestion, we have included the proof from (Kong et al.) in our appendix for ease of reference (Lemma 1) with notational changes.\\n\\n\\n- **Q2.3:** The latent variables $\\\\mathbf{z}$ and noise $\\\\boldsymbol{\\\\epsilon}$ are symmetric with respect to the generating mechanism. Therefore I would like to ask the authors if there is anything that can stop noise to follow the same assumptions considered for $\\\\mathbf{z}$.\\n\\n\\n- **A2.3:** Thank you for your question. We would like to clarify that the latent variables $\\\\mathbf{z}$ and the noise $\\\\boldsymbol{\\\\epsilon}$ are, in fact, *not symmetric* in the generating process. As mentioned in the assumption, $\\\\mathbf{z}$ changes across different domains $\\\\mathbf{u}$ while $\\\\boldsymbol{\\\\epsilon}$ is independent of $\\\\mathbf{u}$. In other words, in our setting, noise stays invariant across different domains while latent variables do not.\\n\\n\\n**Q3:** The paper proposes only identifiability guarantees, but there is no method proposed to identify the latent variable. 
Moreover, guarantees of identifiability of the latent variables up to element-wise invertible transformation are quite restrictive since the invertible transformation can be very complex and infeasible to learn.\\n\\n**A3:** Thank you for raising this important point. Since our work focuses on identifiability theory, the results are intentionally estimator-agnostic. The theory addresses the conditions under which latent concepts can be identified from observations but does not prescribe a specific algorithm. Therefore, the specific estimation procedure is deferred in the experimental setup.\\n\\nMoreover, we fully agree with you that we cannot recover the exact value of each latent variable without any indeterminacy. At the same time, it might be worth noting that, in the identifiability literature, identifiability up to element-wise invertible transformation is the most common objective, and, arguably, the *strongest achievable* result without additional constraints (Hyv\\u00e4rinen and Pajunen, 1999; Taleb and Jutten 1999; Hyv\\u00e4rinen & Morioka, 2016; Khemakhem et al., 2020; Sorrenson et al., 2020; H\\u00e4lv\\u00e4 et al., 2021; Buchholz et al., 2022; Lachapelle et al., 2022; Zheng et al., 2022, Hyv\\u00e4rinen et al., 2024). This objective ensures full disentanglement of the latent variables, which has addressed a longstanding challenge in unsupervised disentangled representation learning (Locatello et al., 2019). Such guarantees have direct applications in various fields, including computer vision (Xie et al., 2023), extrapolation (Lachapelle et al., 2023), and dynamical systems (Yao et al., 2022). Therefore, we believe our identifiability result is both theoretically significant and practically impactful.\\n\\n\\n**Q4:** It is not clear what is the new contribution of the results presented in Section 4.2 with respect to Theorem 1.\\n\\n**A4:** Thank you for your question. 
Compared to Theorem 1, the result in Section 4.2 introduces additional contributions by demonstrating that component-wise identifiability can still be achieved even in the presence of an unknown nonlinear distortion and corresponding noise. Notably, in this extended setting, the new noise does not need to depend on the domains, and the same structural condition on the original mapping (prior to distortion) remains sufficient.\\n\\nThis result serves to formally introduce the concept of distortion, which provides a natural transition to exploring a new setting\\u2014structure learning with nonlinear measurement error. To make this distinction clearer and avoid potential confusion, we have updated the manuscript by presenting the result in Section 4.2 as a corollary. We hope this clarification highlights their role in extending the scope of our framework and connecting different settings. We have also conducted additional experiments to validate the results in different settings (Appx. D.2). Please let us know if there are additional aspects we can further clarify.\"}", "{\"title\": \"We deeply value your detailed review and the insightful feedback you have shared (1/4)\", \"comment\": \"We deeply value your detailed review and the insightful feedback you have shared. All of these constructive suggestions have further improved our manuscript. In light of these, we have incorporated **several new discussions** and a **new section** in the updated manuscript, particularly focusing on **clarifying theoretical results** within the context of broader literature. Moreover, we have also conducted **additional experiments** as you suggested. Please kindly find our detailed response to each of your comments below. 
If you have any additional feedback, we would be truly grateful to hear it and would be delighted to provide further clarifications.\\n\\n\\n**Q1:** Difference on the noise modeling and the separation of content and style.\\n\\n**A1:** This is an excellent point, which inspired us to add a new section for detailed discussion and clarification. These works are highly interesting and contribute significant theoretical advances in identifying latent variable models. Regarding the two considered settings, we believe the key distinction lies in the nature of the noise and content variables.\\n\\nFor content, they are usually semantically meaningful and are often explicitly compared with style variables. This makes it natural for existing techniques to disentangle content from style. For instance, in contrastive learning (Von K\\u00fcgelgen et al., 2021), one can find pairs of observations differing only in their styles and use contrastive learning to disentangle the content. Similarly, in multi-domain settings with $O(n)$ domains (Kong et al., 2022), the content remains invariant while there are $O(n)$ (e.g., $2n+1$) distinct domains characterized by different styles. Similarly, in scenarios involving actions or interventions (Lachapelle et al., 2024), agents/domains/environments act as auxiliary variables, inducing multiple changes in the conditional distribution by intervening/impacting on latent variables.\\n\\nHowever, for noise, they often lack semantic meaning and cannot be explicitly changed across multiple domains like actions or styles, nor can they be paired in observations as in contrastive learning. Thus, it is challenging to assume the existence of $O(n)$ conditional distributions based on $O(n)$ auxiliary variables or to define contrastive objectives for noise modeling. This is a specific challenge that we face and cannot be directly addressed by previous frameworks. 
Moreover, for learning SEMs with measurement error, there are more noise variables than observed variables, which cannot be handled by previous methods.\\n\\nTherefore, for the modeling of general noise, we propose assuming only the existence of variability in the noise distribution, specifically requiring two distinct distributions as a minimal degree of change. This relaxation, from $O(n)$ to two distributions, not only quantitatively weakens the assumption but also qualitatively broadens its applicability. It shifts from explicitly controlling different types of variables to achieve a required degree of change or transition, to accommodating any scenarios where the distribution exhibits variability and is not completely invariant. This is clearly a significant technical contribution that addresses the unique challenges in modeling general noise.\\n\\nIn light of your great question, we have also added a new section for detailed discussion, with a particular focus on the difference from previous work on content-style separation:\\n\\n> \\u201cIn this section, we discuss the challenge of modeling general noise, emphasizing the distinctions between noise and content variables as explored in previous works.\\u201d\\n\\n> \\u201cContent variables are typically semantically meaningful and are often explicitly contrasted with style variables. This enables existing techniques to disentangle content from style through structured variability. For example, contrastive learning frameworks (Kugelgen et al., 2021) use paired observations differing only in style (e.g., images with the same object but different backgrounds) to disentangle content. In multi-domain settings (Kong et al., 2022), content remains invariant across $O(n)$ distinct domains characterized by different styles. Similarly, in intervention-based settings (Lachapelle et al., 2024), agents or environments serve as auxiliary variables that induce changes in the conditional distributions of latent variables. 
These structured variations provide the foundation for effective disentanglement.\\u201d\\n\\n> \\u201cIn contrast, noise variables often lack semantic meaning and cannot be explicitly manipulated across multiple domains or paired in observations. This makes it infeasible to assume the existence of $O(n)$ conditional distributions or define contrastive objectives. As a result, existing frameworks designed for content-style disentanglement cannot be directly applied to general noise modeling.\\u201d\"}", "{\"title\": \"Response to the authors\", \"comment\": [\"Thank you for the deliberate response and for giving deliberate details on my questions!\", \"With the additional details on the environment properties my concerns related to the generating function and distinguishability of latent variables and noise resolved.\", \"However, I still have a few questions:\", \"In the manuscript (Kong et al. 2022) there is an assumption that each domain $u$ is the component-wise monotonic transformation of the latent variables $z$. In (Kong et al. 2022) this is an important assumption, so they conclude some specific properties for the estimations, which makes sense. There is no such assumption in this manuscript that feels concerning.\", \"I could not find in this manuscript how the function $\\\\hat{f}$ is defined except that it is an estimation of function f. How one can obtain such an estimation and what properties this estimation has?\", \"\\\"In contrast, our theory only necessitates two domains, regardless of the number of variables, representing what could be considered a minimal level of required variability.\\\" This should be equivalent to asking the existence of just one set $A$ such that there are two domains $u_1$ and $u_2$ exist in the assumption (Variability exists). However (Variability exists) requires that for any set $A$ there exist two domains $u_1$ and $u_2$ that could be different for different sets $A$. 
Does it mean that the assumption (Variability exists) can be simplified?\", \"If the assumption (Variability exists) is equivalent to the assumption (Domain Variability) (Kong et al. 2022), why do they have different names? It may be confusing.\", \"I was not able to find example 1 from the old version of the manuscript in which they propose a model that should satisfy the assumption \\\"Variability exists\\\". I still don't understand why the conditional probabilities should cancel out each other to make the integral equal to zero. If I remember correctly the integral was something like $\\\\int_{A} p(\\\\mathbf{\\\\epsilon})(p(..|u_1) - p(..|u_2)) d \\\\mathbf{z} d \\\\mathbf{\\\\epsilon}$. Where the difference in conditional probabilities can be as negative as positive. So the overall integral in general may be equal to zero even if $p(..|.., u_1) - p(..|..,u_2) \\\\neq 0$.\"]}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"title\": \"We greatly respect the depth of your review and the meaningful perspectives you offered (1/3)\", \"comment\": [\"We greatly respect the depth of your review and the meaningful perspectives you offered. All of these constructive comments have not only improved the manuscript but also inspired us to think more deeply about the future challenges of the field. In light of these, we have incorporated **new experiments**, **new discussions**, and **additional clarification** in the updated manuscript. Our detailed responses are outlined below for your review. Please feel free to share any additional suggestions; we would be more than happy to make further adjustments as needed.\", \"**Q1:** Explanation of several details.\", \"**A1:** We sincerely appreciate all these insightful questions, which have helped us to further improve the presentation. 
Please kindly find our point-by-point responses below.

**Q1.1:** Where in the proof is the assumption that variability exists used?

**A1.1:** Thank you for your question. The variability assumption is utilized to demonstrate that the bottom-left block of the Jacobian, $\dfrac{\partial \boldsymbol{\epsilon}}{\partial \hat{\mathbf{z}}}$, is zero. As noted in the manuscript, this portion of the proof directly follows steps 1, 2, and 3 of the proof of Theorem 4.2 in (Kong et al., 2022), where the variability assumption is applied. In light of your great suggestion, we have now included the relevant result from (Kong et al., 2022) as Lemma 1 in Appendix B.1, along with the complete proof, adapted to align with our notation.

**Q1.2:** How does the approach avoid the requirement of at least $O(n)$ environments? What are you leveraging to avoid that? Is it that you only require two environments to separate the noise from the latents, or something else?

**A1.2:** Exactly, we only require two environments to disentangle the noise and latent variables, where the dimensions of both can be significantly larger than two. As you pointed out, this approach is fundamentally more general than prior work that assumes at least $O(n)$ environments. Requiring only two environments essentially reduces to ensuring the presence of variability. This is not only practically significant but also theoretically intriguing, as it avoids introducing any additional assumptions. The relevant part of the proof can be found in L888-1057. Essentially, we found that the variability assumption, previously used to disentangle invariant variables from changing ones, can also imply the disentanglement of the changing part $\mathbf{z}$ from the invariant part $\boldsymbol{\epsilon}$ by leveraging the generative process, where $\mathbf{z}$ is independent of $\boldsymbol{\epsilon}$, and thus establish block-wise identifiability.

**Q1.3:** I think giving a proof sketch in the main paper is a far better use of space than some of the examples that were given.

**A1.3:** Thanks so much for your great suggestion. In light of this, we have removed some real-world examples and added a proof sketch for the main result as follows:

> "Proof Sketch. We leverage distributional variability across two domains of the latent variables $\mathbf{z}$ to disentangle $\mathbf{z}$ and $\boldsymbol{\epsilon}$ into independent subspaces. To separate general noise from latent variables, we use the independence between $\mathbf{z}$ and $\boldsymbol{\epsilon}$ alongside the variability within $\mathbf{z}$. The structural sparsity condition is then employed to identify individual components of $\mathbf{z}$ in the nonlinear setting. Specifically, for each latent variable, the intersection of parental sets from a subset of observed variables uniquely specifies it. Since we only achieve relations among supports due to the nonparametric nature of the problem, an unresolved element-wise transformation remains. Consequently, we achieve element-wise identifiability for the latent variables $\mathbf{z}$ (Defn. 1)."

---

**Would increase the score from 3 to 5**

Thanks for the deliberate answer to all my questions. I am therefore willing to increase my score to 5.
Finally, the main reasons for this score are the following:

- Although the identifiability result is established, it is not clear how it can be recovered algorithmically.
- I agree with reviewer KLTp's concerns regarding the novelty of the proofs.
- Additionally, the authors emphasize in their responses that two domains may suffice for identifiability. However, this would hold only under very specific conditions in the assumption (Domain Variability), and it is not clear how restrictive these conditions are. Moreover, it is not explicitly clear whether even $O(n)$ domains would be enough for the general case, and whether any bound for this exists.

---

**Thanks a lot for your response**

Thank you for your response. We believe the technical contribution of Theorem 1 is both significant and unique, as it establishes identifiability requiring only two domains, compared to the $O(n)$ domains necessary in previous techniques. We sincerely hope you might reconsider your assessment, though we fully respect and value your opinion. Thanks again for your time.

---

**Summary:** This paper explores learning the latent structure of data in the presence of non-parametric noise. The authors derive conditions on the generative model under which the latent variables can be identified. The analysis is then extended to various data generation models, establishing identifiability conditions for each.

**Soundness:** 3
**Presentation:** 3
**Contribution:** 2

**Strengths:** The motivations and results are clearly presented, despite the theoretical focus and extensive mathematical derivations. The assumptions and extensions of the results are explained in detail. The main result (Theorem 1) is effectively extended to several commonly encountered models in Theorems 2, 3, and 4.

**Weaknesses:**
1. It is unclear how Definition 1 (element-wise identifiability) is commonly used in the field of latent variable identification, as only two papers (by the same author) are cited below this definition.
2. In all the theorems, the authors note that the dataset should be sufficiently large. However, it is unclear what qualifies as "large" and how this requirement is applied in the proofs.
3. The main challenge of proving Theorem 1, compared to previous works, is not clearly explained.

**Questions:** See the weaknesses above.

**Ethics Review:** No ethics review needed.
**Rating:** 5
**Confidence:** 2
**Code of Conduct:** Yes

---

**We greatly appreciate the time and effort you've dedicated to reviewing our work**

Dear Reviewer KLTp,

We greatly appreciate the time and effort you've dedicated to reviewing our work. As the discussion period draws to a close, we would be truly grateful if you could let us know whether our responses have sufficiently addressed your concerns.

With gratitude,

Authors of Submission7585

---

**Thanks so much for your prompt and insightful feedback (1/2)**

Dear Reviewer 88C8,

Thanks so much for your prompt and insightful feedback. These new questions, again, helped us to further improve the manuscript, especially in presentation and clarification. We sincerely appreciate your time and effort. Please kindly find our point-by-point responses:

**Q5:** In the manuscript (Kong et al. 2022) there is an assumption that each domain $\mathbf{u}$ applies a component-wise monotonic transformation to the latent variables $\mathbf{z}$. In (Kong et al. 2022) this is an important assumption, as it lets them conclude specific properties of the estimates, which makes sense. There is no such assumption in this manuscript, which feels concerning.

**A5:** Thanks for raising this point.
By "each domain $\mathbf{u}$ is the component-wise monotonic transformation of the latent variables $\mathbf{z}$", if we understand correctly, perhaps you mean that the style variables $\mathbf{z}_s$ are generated from a high-level invariant part $\tilde{\mathbf{z}}_s$ by a component-wise monotonic function $f_{\mathbf{u}}$, i.e., $\mathbf{z}_s = f_{\mathbf{u}}(\tilde{\mathbf{z}}_s)$. This assumption is only used for the application to domain adaptation, not in the proof of identifiability in (Kong et al. 2022). Since our study does not address domain adaptation, this assumption is unnecessary for our theoretical results.

Please let us give a bit more detail to also describe our understanding of the role of this assumption in (Kong et al. 2022). In that work, for the purpose of domain adaptation, they need to find a high-level invariant representation across domains to learn an optimal classifier over domains. Thus, although $p(\mathbf{z}_s)$ changes, $p(\tilde{\mathbf{z}}_s)$, $p(y|\tilde{\mathbf{z}}_s)$, and $p(y|\tilde{\mathbf{z}}_c)$ stay the same ($\mathbf{z}_s$ denotes the changing variables while $\mathbf{z}_c$ denotes the invariant part). Specifically, if they assume that the transformation is monotonic, then they can directly recover the information of $\mathbf{z}_s$ in each domain, and thus domain adaptation is achieved. In our work, since the focus is not on domain adaptation, this treatment is not relevant to our setting or results.

**Q6:** I could not find in this manuscript how the function $\hat{f}$ is defined, except that it is an estimate of the function $f$. How can one obtain such an estimate, and what properties does this estimate have?

**A6:** Thanks so much for your question. In light of it, in addition to the details in the experimental setup (L465-476), we have also added the following sentence earlier in the preliminaries to avoid any potential confusion (L128-130):

> "The estimated model $(\hat{f}, \hat{\mathbf{z}}, \hat{\boldsymbol{\epsilon}})$ follows the data-generating process and matches the observed distributions, i.e., $p(\hat{\mathbf{x}}) = p(\mathbf{x})$ (and $p(\hat{\mathbf{x}}|\mathbf{u}) = p(\mathbf{x}|\mathbf{u})$ if there exists a domain variable $\mathbf{u}$)."

Please kindly let us know if you have any further suggestions. Thanks again for your valuable feedback.

**Q7:** Certain aspects of the discussion regarding the (Variability exists) assumption are unclear or potentially misleading.

**A7:** We sincerely appreciate your detailed reading and constructive suggestion, which helped us to further improve the manuscript and avoid potential confusion. You are totally right; that sentence could be misleading. Therefore, in light of your insightful suggestion, we have changed it as follows (L219-222) to highlight the specific condition that needs to be satisfied (note that we changed the name of the assumption):

> "Differently, our theory does not put a hard constraint requiring $O(n)$ domains, as long as the specific assumption of domain variability holds. However, since the conditions are different, the assumption of domain variability is not strictly weaker than the previous assumptions."

We hope this clarification addresses your concern and makes the discussion more precise. If you have any additional feedback, please feel free to let us know, and we would be more than happy to make corresponding adjustments.

**Q8:** If the assumption (Variability exists) is equivalent to the assumption (Domain Variability) (Kong et al. 2022), why do they have different names? It may be confusing.

**A8:** Thank you for your excellent suggestion. Accordingly, we have updated the terminology throughout the paper to use "Domain Variability" for consistency. While we initially chose the previous name to emphasize that the problem is not domain adaptation, we fully agree that having different names could be confusing. We sincerely appreciate your constructive feedback, which has helped us improve the clarity of our manuscript.

---

**Sincerely appreciate your insightful questions; Looking forward to your further feedback**

Dear Reviewer 88C8,

We sincerely appreciate your insightful questions and suggestions. We have provided additional responses above to address these in detail. As the discussion period is nearing its conclusion, we would greatly appreciate it if you could kindly let us know whether our responses have adequately addressed your questions.

Thank you again for your time and thoughtful feedback.

With gratitude,

Authors of Submission7585

---

**Thanks so much for your prompt and insightful feedback (2/2)**

**Q9:** I was not able to find the example of the variability exists assumption from the old version of the manuscript. The old example does not seem to work.

**A9:** Thanks so much for your feedback. We fully agree with you that some parts were missing in the previous example. Specifically, in the example, an additional constraint should have been included regarding the choice of domains:

> "...Let us consider the two domains where the integrals do not cancel with each other (the domains can change across different sets $A$):"

That said, even with this clarification, we feel the example remains uninformative. The condition essentially serves as a generic faithfulness assumption, ruling out specific cases where particular combinations of parameters make these two integrals cancel each other out.
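To make the cancellation phenomenon concrete, here is a small numerical sketch of our own (a hypothetical one-dimensional illustration, not taken from the manuscript): the difference of two distinct Gaussian conditional densities integrates to zero over a set $A$ that is symmetric about their midpoint, even though the densities differ pointwise, which is exactly the degenerate case the generic faithfulness-style condition rules out.

```python
import math

def gauss(z, mu, sigma=1.0):
    # Density of N(mu, sigma^2) at z.
    return math.exp(-0.5 * ((z - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def integral_of_difference(a, b, n=100000):
    # Midpoint-rule approximation of \int_a^b [p(z|u1) - p(z|u2)] dz,
    # with p(z|u1) = N(-1, 1) and p(z|u2) = N(1, 1).
    h = (b - a) / n
    return sum(
        (gauss(a + (i + 0.5) * h, -1.0) - gauss(a + (i + 0.5) * h, 1.0)) * h
        for i in range(n)
    )

# On A = [-2, 2], symmetric about 0, the two densities cancel exactly,
# even though they differ at every point; on A = [-2, 0] they do not.
print(abs(integral_of_difference(-2.0, 2.0)))  # essentially 0
print(abs(integral_of_difference(-2.0, 0.0)))  # clearly nonzero
```

Of course, perturbing either mean or restricting to asymmetric sets immediately breaks the cancellation, which is why such cases are non-generic.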
As such, the example does not add significant insight. We have therefore decided to remove the example in the updated manuscript to avoid unnecessary confusion, freeing up space for other discussions and clarifications. Please kindly let us know if you think this is a reasonable choice. We sincerely appreciate your thoughtful suggestions, which have been invaluable in improving the manuscript from various perspectives.

---

To summarize, we are sincerely grateful for your thoughtful and constructive questions, suggestions, and comments. It is truly a privilege to have you dedicate your time and effort to helping us improve our manuscript. We deeply appreciate your insights and sincerely look forward to any additional feedback you may have.

Many thanks,

Authors of Submission7585

---

**We sincerely appreciate your insightful feedback and valuable comments (3/3)**

**References**

[1] Hyvärinen and Pajunen, Nonlinear independent component analysis: Existence and uniqueness results. Neural Networks, 1999

[2] Hyvärinen et al., Identifiability of latent-variable and structural-equation models: from linear to nonlinear. Annals of the Institute of Statistical Mathematics, 2024

[3] Khemakhem et al., Variational autoencoders and nonlinear ICA: A unifying framework. AISTATS 2020

[4] Sorrenson et al., Disentanglement by nonlinear ICA with general incompressible-flow networks (GIN). ICLR 2020

[5] Hälvä et al., Disentangling identifiable features from noisy data with structured nonlinear ICA. NeurIPS 2021

[6] Yao et al., Learning temporally causal latent processes from general temporal data. ICLR 2022

[7] Lachapelle et al., Disentanglement via mechanism sparsity regularization: A new principle for nonlinear ICA. CLeaR 2022

[8] Buchholz et al., Function classes for identifiable nonlinear independent component analysis. NeurIPS 2022

[9] Zheng et al., On the identifiability of nonlinear ICA: Sparsity and beyond. NeurIPS 2022

[10] Lachapelle et al., Nonparametric partial disentanglement via mechanism sparsity: Sparse actions, interventions and sparse temporal dependencies. arXiv 2024

[11] Von Kügelgen et al., Self-supervised learning with data augmentations provably isolates content from style. NeurIPS 2021

---

**Gentle Reminder: Feedback Appreciated Before Discussion Ends in 12 Hours**

Dear Reviewer 88C8,

We hope this message finds you well. As the discussion period is nearing its conclusion in approximately **12 hours**, we wanted to kindly follow up on our earlier responses. We would greatly appreciate your feedback on whether our clarifications have addressed your additional questions. If they have, we would be most grateful if you could consider updating your rating accordingly.

Thank you once again for your time and thoughtful input throughout the review process.

Sincerely,

Authors of Submission7585

---

**Thank you so much for taking the time to provide your valuable feedback**

Dear Reviewer Vhc1,

Thank you so much for taking the time to provide your valuable feedback. As the discussion period draws to a close, we would greatly appreciate it if you could kindly let us know whether our responses have adequately addressed your concerns.

Many thanks,

Authors of Submission7585

---

**Summary:** This paper gives conditions for identifying latent variables up to an element-wise transformation under general noise (i.e., there is no assumption of additive noise in the mixing/generative function). The authors show results under general noise with three main assumptions (nondegeneracy, variability, and structural sparsity), and then show how assuming additive noise allows them to drop the variability assumption. Finally, in Theorem 3 they give a more general noise condition, which can be extended to learning a causal DAG.

**Soundness:** 4
**Presentation:** 2
**Contribution:** 3

**Strengths:** The theory gives strong results: the authors have a more general noise condition (in that they do not need to assume additivity), while reducing the number of environments needed to identify the latents from $n$ (where $n$ is the number of latents) to 2.

They are also able to identify causal DAGs of the form given in Figure 5, by leveraging the equivalence between the mixing function and the DAG for problems of that form (there are many problems in causal representation learning where that won't be true because there isn't a fixed DAG that connects latents and observations, but it is still a useful observation).

**Weaknesses:** My complaints are mostly around presentation.
I felt that the paper does a poor job of explaining its results and tends to oversell its practical relevance.

* **Explanation.** I am very familiar with this literature and I found it hard to see why the assumptions leveraged in this paper lead to such strong identifiability results. For example, whether or not *(Variability exists)* is assumed is the key difference between Theorem 1 and Theorem 2 (assuming additive noise allows one to drop the assumption that variability exists), but I couldn't see where in the proof the assumption that variability exists is used (it isn't referenced anywhere).

  Lines 226-230 explain (correctly) that most identifiability results need at least $O(n)$ environments (where $n$ is the number of latents) to disentangle the latents, but they don't explain *how* this approach avoids that requirement. What are you leveraging to avoid it? The example doesn't give any insight because it is a case with only two parameters and two environments, so it's not surprising that two environments are sufficient. Is it that you only require two environments to separate the noise from the latents, or something else?

  Finally, I think giving a proof sketch in the main paper is a far better use of space than some of the examples that were given. I think the reader should be able to understand the main steps of the proofs in the main text without having to go through all the details in the appendix.

* **Overselling applications.** The paper lists many applications, from medical imaging to finance, where general noise models are necessary, and makes it sound like extending identifiability results to these settings is all that stands in the way of applications. E.g., the statement *"the capacity to handle general, nonparametric noise extends the applicability of the proposed theory across a wide range of real-world scenarios, regardless of the complexity of the noise"*, or *"This is crucial for applications such as dependable patient monitoring systems." [line 302]*.

  While it's good to know that these settings are identifiable, we need to be far more upfront about the limitations of causal representation learning methods. There is a big gap between theory and practice. These methods all assume that we perfectly fit the data distribution with no constraints on model capacity, and it's not at all clear how they perform in finite samples with estimation error. That is likely to be a far bigger block to practical application than identifiability. Put differently, identifiability is necessary, but far from sufficient, for practical application, and making that clear is important (or alternatively, provide convincing experiments in real-world settings, not toy problems like MNIST/FashionMNIST).

* **Unconvincing experiments.** Simple simulations are fine as a sanity check, but we have to move past these synthetic problems in evaluation. The simulation results don't give enough detail to properly evaluate, but the data-generating process appears to be just a simple diffeomorphism defined by a normalizing flow model (I'm guessing here). If that's the case, you're in the well-specified case, so it's far simpler than any real-world scenario.

* I liked the MNIST and FashionMNIST examples a little more, but they also expose just how unreliable and hard-to-work-with these methods are in practice. For example, in the MNIST example, panels 2 and 3 are interpreted as capturing height and "right slope" respectively, but they're also clearly entangled with stroke width (the rightmost columns use thinner pen strokes than the leftmost columns in both images). And I couldn't understand the latents in the FashionMNIST examples. I think it's better to be upfront about these limitations in the main text rather than bury them in the appendix.

**Questions:** What makes the general noise setting fundamentally different from the noiseless setting? If I have a method that can identify latents in the noiseless setting with $n$ variables, can I not just treat the noise variable as the $(n+1)$-th latent and apply an existing approach?

Assuming the above is true, why didn't you empirically compare to any existing methods?

**Ethics Review:** No ethics review needed.
**Rating:** 6
**Confidence:** 3
**Code of Conduct:** Yes

---

**Thanks so much for your follow-up response**

Dear Reviewer 88C8,

Thanks so much for your follow-up response and the update on the score. We fully respect your opinions on the assumption. At the same time, we would like to highlight the following points that may provide additional context:

- From a theoretical perspective, we believe the condition is **not overly restrictive**, as it only excludes probability measures that cancel each other out within specific rectangular sets. This is analogous to the generic faithfulness assumption in the literature, which rules out specific parameter combinations that cancel each other. While we fully agree that further exploration (e.g., providing bounds) of such generic conditions would be highly valuable, we humbly feel it is beyond the current scope of our asymptotic identifiability setup.

- In all of our experiments, we use **only two domains**, and the results consistently indicate that all latent variables can be identified in the considered setting. Empirically, this suggests that two domains are sufficient for identification in our framework.

Moreover, we believe it is worth noting that we have proposed structural identifiability results for **general nonlinear SEMs with nonlinear measurement error**, addressing a long-standing open problem in causal discovery.
As such, we believe our contributions to both representation and structure identification are novel and valuable.\\n\\nLastly, we deeply appreciate the time and effort you have dedicated to reviewing our manuscript. While we may hold differing perspectives on certain aspects, we have the utmost respect for your feedback. We sincerely hope you might reconsider your assessment in light of our clarifications and contributions, and we would be more than happy to address any remaining concerns you may have.\\n\\nWith gratitude,\\n\\nAuthors of Submission7585\"}", "{\"summary\": \"In this work, the authors consider the problem of Element-wise Identifiability of the latent variables $\\\\mathbf{z}$, i.e. up to the permutation and element-wise (coordinate-wise) invertible transformation of these latent variables. As the main contribution authors propose sufficiency conditions under which the latent variables $\\\\amthbf{z}$ are element-wise identifiable for the general case of generative models. Additionally, they consider two subcases of this generative model one of which is generative models with additive noise, and prove similar results. Also, the authors provide the result of causal structure learning for the specific subcase of the causal model with additive noise.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"As the main result, authors propose the sufficiency conditions for the element-wise identifiability of the latent variables under general assumptions, i.e:\\n- no restrictions on the noise (e.g. as non-Gaussian, etc);\\n- no restrictions on the generative mechanism f() (e.g. as additive noise model, etc).\\nThis generality of the result is very interesting from a theoretical point of view that the latent space can be learned uniquely up to some permutation and element-wise invertible transformation.\", \"weaknesses\": [\"The paper is missing the citation of some important theoretical results from the field. 
For example:\", \"Peters, Jonas, et al. \\\"Causal discovery with continuous additive noise models.\\\" (2014). This work solves the causal discovery problem for the general SEM with additive noise, while the authors in section 4.3 consider a subcase of the general SEM with additive noise. It is important to specify the limitations of both works and the importance of the proposed results compared to already existing ones.\", \"The paper is hard to read due to the lack of sufficient explanatory details of the theoretical constructs and manipulations. More details in the questions below.\", \"The paper proposes only identifiability guarantees, but there is no method proposed to identify the latent variable $\\\\mathbf{z}$. Moreover, guarantees of identifiability of the latent variables up to element-wise invertible transformation are quite restrictive since the invertible transformation can be very complex and infeasible to learn.\", \"The generative model in Section 4.2 is just a specific case of the model considered in theorem 1, therefore it is not clear what is new contribution of the results presented in section 4.2 with respect to theorem 1.\", \"It is not clear how restrictive the assumption \\\"Variability exists\\\". More specifically, it is not obvious for me why Example 1 satisfy this assumption. More details in the question part.\"], \"questions\": [\"The assumption \\\"Variability exists\\\" is an analogy for the assumption of \\\"Domain Variability\\\" (Kong et al. 2022). However, (Kong et al. 2022) require that for any set A there exist two realizations of unobserved variables $\\\\mathbf{u}$ (domains) such that two integrals are not equal. However, in this work authors require that there exist two domains $d_1$ and $d_2$ such that for any set A the two specific integrals are not equal, that is much much stronger assumption. Additionally, it is not clear what these domains $d_1$ and $d_2$ are in the given model. 
Finally, the authors provide example 1 in which they propose a model that should satisfy the assumption \\\"Variability exists\\\". However, it is not clear from the given explanations why the considered integral would not be equal to 0. Moreover, the statement is only given for the set A of a specific structure, but it has to be true for arbitrary set A that satisfies the conditions from the assumption \\\"Variability exists\\\".\", \"In the proof of theorem 1 the authors refer to steps 1, 2, and 3 in the proof of Theorem 4.2 Kong et al. (2022) from which they conclude that some part of Jacobian is zero. However, it is not clear how the assumptions of (Kong et al. (2022)) correspond to the assumptions considered in this work and how the setting of the problems maps to each other. Therefore it's not clear and hard to follow how the results of steps 1, 2, and 3 (which take a few pages in Kong et al. (2022)) adjust to the proof considered in this work. I think all of these steps should be rewritten explicitly in the proof so anyone can follow it.\", \"Also, the latent variables $\\\\mathbf{z}$ and noise $\\\\epsilon$ are symmetric with respect to the generating mechanism $\\\\mathbf{x}=f(\\\\mathbf{z}, \\\\mathbf{\\\\epsilon})$. Therefore I would like to ask the authors if there is anything that can stop noise $\\\\mathbf{\\\\epsilon}$ to follow the same assumptions considered for $\\\\mathbf{z}$. 
And if not, then how can we make sure that our algorithm recovers the latent variables $\\\\mathbf{z}$ instead of the noise $\\\\mathbf{\\\\epsilon}$?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"In recent years, there has been active development in theoretical and practical research towards learning identifiable representations.\\nThe most common family of data generation assumptions takes the following form -- z ~Pz, x <--f(z), where z is some latent variable sampled from a distribution Pz, and f is a diffeomorphism that mixes the latents to generate the observation x. For varying assumptions either on Pz or mixing maps or other forms of assumptions on weak supervision, there has been a large body of work that establishes varying forms of identification guarantees. In contrast to existing works, this work places a special emphasis on the role of noise in the generation of x. The authors propose a family of results under different assumptions. \\n\\nIn Theorem 1, the authors study x <--f(z,e), where z is latent, e is noise, and f is a diffeomorphism. \\nIn Theorem 2, the authors study x <-- f(z) +e, and show that some of the requirements on multiple domains in Theorem 1 can be relaxed. \\nIn Theorems 3 and 4, the authors study a model with non-linear distortions and measurement error and arrive at identification guarantees (representation identification and structure identification). \\n\\nTowards the end, the authors conduct experiments to reflect the theoretical claims.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The current frameworks on identifiability have mostly studied additive noise models and here the authors mean to go beyond that family of models. This is an important problem, which has not been well studied, and it is good to see the authors taking a stab at it. 
The paper does a good job of dividing the analysis based on the family of assumptions -- i) with non-additive noise models x <--f(z,e), ii) additive noise models x <--f(z) +e, iii) non-linear distortion models with noise, and finally iv) models with measurement error.\", \"weaknesses\": \"1. **On noise modeling**: The authors seem to overclaim that they are the first to arrive at identification guarantees with general noise. I believe they are referring to the first theorem. In the first theorem the authors use the following model x<-- f(z,e), where f is a diffeomorphism. I would argue that existing frameworks on partial identification already cover these types of problems even though they do not state it like this. One can always state existing works on partial identification in the above light. Suppose I divide my latents z into two parts z1 and z2, and x <--f(z1,z2). In this case, one can seek partial identification guarantees, where one estimates \\\\hat{z}1 that only identifies z1 element-wise and the rest of the z2 terms correspond to noise that we don't care about identifying.\\nLet us take the work from Kugelgen et al. https://proceedings.neurips.cc/paper_files/paper/2021/file/8929c70f8d710e412d38da624b21c3c8-Paper.pdf. One could re-interpret their result and say the following. In their result, the authors show that there are two types of latents, content latent and style latent. If one observes a pair of observations where the style latent changes but the content latent stays the same, then one can separate the content latents. At this point the problem reduces to standard noise-free representation identification. One could also use frameworks such as https://arxiv.org/pdf/2401.04890 and divide latents into two blocks. The actions of an agent may impact only the latents you want to identify. Then there is the framework from https://arxiv.org/pdf/2306.06510, which I believe the authors are already using in their results. 
I would like to better understand the authors' take on the above perspective and why they do not acknowledge this properly in their work. By overclaiming about noise, the authors are not highlighting the specific challenges that they face beyond separation of content and style.\\n\\n\\n\\n2. **On Theorem 1**:\\n\\n a) The structure and assumptions in Theorem 1 are reminiscent of the assumptions in Theorem 1 in https://proceedings.neurips.cc/paper_files/paper/2022/file/6801fa3fd290229efc490ee0cf1c5687-Paper-Conference.pdf and Theorem 4.1 in https://proceedings.neurips.cc/paper_files/paper/2023/file/2aebc17b683792a17dd4a24fcb038ba6-Paper-Conference.pdf. \\n Can the authors contrast their proof technique with the above papers and give some intuitions on what differs?\\n\\n b) In existing works on structural sparsity, e.g., https://proceedings.neurips.cc/paper_files/paper/2022/file/6801fa3fd290229efc490ee0cf1c5687-Paper-Conference.pdf, the authors relied on an independence assumption on the latents. In this work, the authors do not seem to require that assumption. What is the assumption that the authors make that allows them to get rid of the independence assumption? Also, this seems a bit odd because it seems to imply having noise in the setting makes the problem easier than https://proceedings.neurips.cc/paper_files/paper/2022/file/6801fa3fd290229efc490ee0cf1c5687-Paper-Conference.pdf, which is fairly counterintuitive. \\n\\n c) As I stated above, the problem of identification under noise has been reduced in the authors' framework to that of block identification. For block identification, the authors' proof of Theorem 1 is a direct combination of results from Kong et al. and the structural sparsity work https://proceedings.neurips.cc/paper_files/paper/2022/file/6801fa3fd290229efc490ee0cf1c5687-Paper-Conference.pdf. Can the authors better tell whether genuinely new proof techniques were needed to solve the problem? 
I again emphasize that the authors seemed to highlight noise as a very difficult problem but reduce it to two well-studied cases -- block identification (Kugelgen et al., Kong et al.) or deconvolution via additive noise models (Khemakhem et al.).\\n\\n3. **On Theorem 2**: In this Theorem, the authors rely on the additive noise assumption. Here I would like to understand why this result is any different from a generalization of Theorem 1 in https://proceedings.neurips.cc/paper_files/paper/2022/file/6801fa3fd290229efc490ee0cf1c5687-Paper-Conference.pdf to additive noise models? I would argue that due to additive noise models this generalization is trivial because you rely on the deconvolution argument from Khemakhem et al. In Khemakhem's argument, one needs to know the noise distribution to cancel both sides out in the Fourier transform (see equations 20-27 in https://arxiv.org/pdf/1907.04809). If you resort to the same sort of assumptions as Khemakhem et al., then I don't think you are going beyond what already exists in the identification literature in your results (Theorems 2 to 4 all rely on additive noise models).\\n\\n4. **On Theorem 3**: The setup of Theorem 3, i.e., equations 3 and 4, seems to combine the models used in Theorems 1 and 2. Can the authors clarify this? For equation 4, we can use Theorem 2 to obtain an elementwise transform of x*. After that one could use the inferred x* and use Theorem 1 for obtaining z. Why is this presented as a special thing? This could just be a corollary. \\n\\n5. **On experiments:** It would have been nice if there was one experiment reflecting the unique settings of the different theorems. I do not see that this is the case. For instance, synthetic experiments for Theorems 2-4 would have been nice, both representation identification ones and causal discovery ones. \\n\\n6. **On the writing**: I found the writing quite verbose in many places. The authors seem to give examples without giving citations. 
For instance, see lines 165-188, lines 298-308, lines 357-369.\", \"questions\": \"In the weaknesses section itself I have mentioned my concerns and questions. Please refer to that section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"We greatly respect the depth of your review and the meaningful perspectives you offered (2/3)\", \"comment\": \"**Q2:** It tends to oversell the applications of causal representation learning methods, since these methods all assume that we perfectly fit the data distribution with no constraints on model capacity and it is not at all clear how they perform in finite samples with estimation error.\\n\\n**A2:** Thank you for your thoughtful comment. We agree that causal representation learning (CRL) methods face significant challenges before they can fully benefit a wide range of real-world applications. In complex real-world scenarios, it is difficult to ensure that assumptions are perfectly satisfied, likelihoods are accurately matched, scalability is not a concern, or that finite sample errors do not introduce bias.\\n\\nWe share your view that substantial work remains in the field to bridge this gap. To highlight this, we have removed the discussion on applications and instead emphasized the limitations in the updated manuscript. For instance, we have added the following clarification:\\n\\n> \\u201cMoreover, for real-world scenarios, it is extremely challenging to make sure that all conditions on the latent data generating process are perfectly satisfied and the distributions are perfectly matched after estimation. 
Bridging the gap requires a thorough study of the finite sample error and the robustness of the identification, which remains an open challenge in the literature.\\u201d\\n\\nMoreover, we have also emphasized this point in the introduction, specifically in the paragraphs outlining our contributions, to ensure the message is as clear as possible:\\n\\n> \\u201c... but addressing practical challenges like finite sample errors remains a key open problem for future work to enable broader deployment of identifiability theory.\\u201d\\n\\n**Q3:** Synthetic experimental setting is far more simple than real-world scenarios since simulation is well-specified, although simulations are fine as a sanity check. I liked the MNIST and FashionMNIST examples a little more, but the results are not easy to interpret and also expose practical challenges.\\n\\n**A3:** Thank you for your insightful opinion. As discussed in detail in Q2&A2, we fully agree that there remain significant challenges in completely bridging the gap between causal representation learning (CRL) methods and large-scale real-world applications. Your suggestion highlights one of the key obstacles the CRL community must address, and we sincerely appreciate your valuable perspective. To work towards that goal, we have further conducted **additional experiments** on more synthetic settings with different noise and distortion. According to the new results (Appx. D.2), we can still identify the latent variables in these settings. Although the discussed gaps such as finite sample error are still there, we hope to try our best to empirically support the theory in more diverse settings.\\n\\nRegarding the image examples (e.g., MNIST and FashionMNIST), we acknowledge the difficulty of ensuring fully interpretable recovered semantics. While humans can often infer meaning based on their understanding, there is no guarantee that the underlying generative process aligns with our interpretations. 
Some latent factors may be inherently \\\"entangled\\\" or lack clear semantic meaning in human terms, yet still represent statistically independent components of the underlying generative process. At the same time, practical issues such as finite sample error also hinder the perfect recovery of the hidden factors. In light of your great comment, to emphasize this point, we have explicitly listed these challenges in the main text of the updated manuscript:\\n\\n> \\u201cImportantly, several practical challenges persist. For example, human interpretations of latent factors are often guided by intuition, yet there is no guarantee that the true generative process aligns with these interpretations. Certain latent factors may inherently appear entangled or lack clear semantic meaning from a human perspective, even if they represent statistically independent components of the generative mechanism. Furthermore, practical constraints, such as finite sample errors, pose additional challenges to achieving perfect recovery of the hidden factors.\\u201d\"}", "{\"title\": \"We greatly respect the depth of your review and the meaningful perspectives you offered (3/3)\", \"comment\": \"**Q4:** What makes the general noise setting fundamentally different from the noiseless setting? If I have a method that can identify latents in the noiseless setting with $n$ variables, can I not just treat the noise variable as the $(n+1)$-th latent and apply an existing approach?\\n\\n**A4:** Thank you so much for your question. The fundamental difference in the general noise setting lies in the lack of specific constraints on the noise. Thus it is challenging to separate it from the latent variables. That being said, in our general setting, we cannot rely on existing assumptions such as additivity, $O(n)$ domains, or structural conditions w.r.t. the noise. 
Existing conditions can indeed help to identify latent variables in the noiseless setting, but if we have additional general noise variables without these conditions, the previous theory cannot be applied to disentangle the influence of these non-negligible noises.\\n\\n\\n---\\n\\n**References**\\n\\n[1] Kong et al., Partial identifiability for domain adaptation, ICML 2022\"}", "{\"title\": \"Response to author's comments\", \"comment\": \"Thank you for addressing my questions. After considering the other reviewers' comments and the discussion, I will maintain my score, particularly in light of the concerns about the novelty of the proof for Theorem 1 raised by other reviewers.\"}", "{\"title\": \"We are very grateful for the time and effort you dedicated and constructive feedback (1/2)\", \"comment\": \"We are very grateful for the time and effort you dedicated to thoroughly reviewing our manuscript and providing constructive feedback, which has been invaluable in improving its quality. In particular, your thoughtful suggestions have prompted us to include several **clarifications** and **new discussions** to enhance the clarity of our messages. **New experiments** have also been conducted to further support our theoretical findings. We kindly invite you to review our detailed responses below and would sincerely appreciate any additional feedback you may have.\\n\\n\\n**Q1:** While notation is thoroughly defined and overall there is a lot of discussion, still there are many key places where details are either missing or difficult to ascertain.\\n\\n**A1:** Thank you very much for taking the time to thoroughly review our manuscript. We sincerely appreciate your comments. Here we provide point-by-point responses to the mentioned details as follows:\\n\\n- **Q1.1:** Is it assumed that the $2$ domains are labeled?\\n\\n- **A1.1:** Thank you for the question. Yes, as mentioned in L469 and L475, the two domains are labeled. 
It might be worth noting that most previous works (see e.g., a recent survey (Hyv\\u00e4rinen et al., 2024)) require $2n+1$ labeled domains, where $n$ is the number of latent variables; while we only necessitate $2$ domains for an arbitrary number of latent variables, which is essentially a minimal degree of variability. To avoid potential confusion, in addition to the current notes, we have further highlighted it earlier in the updated manuscript:\\n\\n > \\u201cThese two domains are realizations of a domain variable $\\\\mathbf{u}$, which are observed and labeled.\\u201d\\n\\n- **Q1.2:** What is the sparsity constraint and where is it listed in the assumptions?\\n\\n- **A1.2:** Thanks for your question. The sparsity constraint refers to an $\\\\ell_0$ regularization on $\\\\hat{\\\\mathcal{F}}_\\\\hat{z}$ during estimation, approximated using $\\\\ell_1$ for gradient-based optimization (L468). Since identifiability concerns what conditions make the data generating process identifiable, and the sparsity constraint is a regularization used during estimation, we previously did not list it in the assumptions. However, we agree with you that introducing it in the theorem could help avoid potential confusion. Therefore, we have updated the manuscript and added the following clarification to the theorems:\\n\\n > \\u201cTogether with a $\\\\ell_0$ regularization on $\\\\hat{\\\\mathcal{F}}\\\\_\\\\hat{z}$ during estimation (${\\\\\\\\|\\\\hat{\\\\mathcal{F}}\\\\_{\\\\hat{z}}\\\\\\\\|}_0 \\\\leq {\\\\\\\\|\\\\mathcal{F}_z\\\\\\\\|}_0$), suppose the following assumptions \\u2026\\u201d\\n\\n Thank you again for your constructive feedback. If you have any additional suggestions, please feel free to share them, and we would be more than happy to incorporate further changes to improve clarity.\\n\\n- **Q1.3:** Why is Eq. 35 (now Eq. 54) without loss of generality?\\n\\n- **A1.3:** This is because $j_1 \\\\neq j_2$, and thus it is either $j_3 \\\\neq j_1$ or $j_3 \\\\neq j_2$. 
Since $j_1$ and $j_2$ are symmetric in the proof, we can assume $j_3 \\\\neq j_1$ without loss of generality, making Eq. 35 (now Eq. 54) valid.\\n\\n- **Q1.4:** Can more details be provided on this split between latent factors and noise factors in the synthetic data? What is the \\u201cbase\\u201d experiment in this case?\\n\\n- **A1.4:** Thanks so much for your suggestions. As mentioned in the experimental setup, the main difference between latent and noise variables in generating the synthetic dataset is that, for latent variables, we sample them from two distinct multivariate Gaussian distributions conditioned on the corresponding domain index, while the noise is sampled from a single multivariate Gaussian. The corresponding details of the distributions are included in the appendix, which we have now moved to the main paper in the updated manuscript to make the details more noticeable:\\n\\n > \\u201cDuring training, latent variables are drawn from two multivariate Gaussian distributions to satisfy the variability condition, while noise is also sampled from a separate multivariate Gaussian, with means sampled uniformly from the range $[-5, 5]$ and variances sampled uniformly from $[0.5, 2.5]$.\\u201d\\n\\n For the baseline model, we use a fully connected structure to violate the structural sparsity condition, and use a single domain to violate the variability condition. We have modified the current description (L479-481) as follows:\\n\\n > \\u201cIn contrast, the baseline model (Base) violates key assumptions, particularly those related to structural sparsity (by a fully connected structure) and variability (by sampling from a single domain).\\u201d\"}", "{\"title\": \"We sincerely appreciate your insightful feedback and valuable comments (2/3)\", \"comment\": \"**Q3:** The main challenge of proving Theorem 1, as compared to previous works, is not clearly explained.\\n\\n**A3:** Thanks a lot for your feedback. 
In light of it, we have added a new section to discuss the technical challenge compared to previous works in detail. Since we have already highlighted the unique contributions of our results in the introduction and discussion, we mainly focus on the technical challenges in our proof compared to others. Intuitively, since we do not assume specific types of noise (e.g., additivity), there exists only a minimal difference between noise and latent variables. Therefore, we cannot utilize some existing assumptions, such as $2n+1$ domains or contrastive learning, to disentangle noise and latent variables. Instead, we leverage the existence of variability as a minimal difference in the distributions. The details are included in the new section added in the updated manuscript as follows:\\n\\n> \\u201cIn this section, we discuss the challenge of modeling general noise, emphasizing the distinctions between noise and content variables as explored in previous works.\\u201d\\n\\n> \\u201cContent variables are typically semantically meaningful and are often explicitly contrasted with style variables. This enables existing techniques to disentangle content from style through structured variability. For example, contrastive learning frameworks (Von K\\u00fcgelgen et al., 2021) use paired observations differing only in style (e.g., images with the same object but different backgrounds) to disentangle content. In multi-domain settings (Kong et al., 2022), content remains invariant across $O(n)$ distinct domains characterized by different styles. Similarly, in intervention-based settings (Lachapelle et al., 2024), agents or environments serve as auxiliary variables that induce changes in the conditional distributions of latent variables. These structured variations provide the foundation for effective disentanglement.\\u201d\\n\\n> \\u201cIn contrast, noise variables often lack semantic meaning and cannot be explicitly manipulated across multiple domains or paired in observations. 
This makes it infeasible to assume the existence of $O(n)$ conditional distributions or define contrastive objectives. As a result, existing frameworks designed for content-style disentanglement cannot be directly applied to general noise modeling.\\u201d\\n\\n> \\u201cTo address this, we propose to only leverage the existence of variability in the latent distribution, requiring only two distinct distributions as a minimal degree of change. This relaxation reduces the need for $O(n)$ distinct distributions, which are common in existing frameworks, and broadens applicability to scenarios where the distribution is not completely invariant. This shift, from explicitly controlling different types of variables to achieve a required degree of change or transition, to accommodating general variability in scenarios where the distribution is not completely invariant, represents a significant technical contribution that is essential to address the unique contribution of modeling general noise.\\u201d\\n\\nThanks again for your effort and time reviewing our paper. If you have any additional feedback, please do not hesitate to let us know\\u2014we would be more than grateful for your insights.\"}", "{\"title\": \"We deeply value your detailed review and the insightful feedback you have shared (2/4)\", \"comment\": \"> \\u201cTo address this, we propose to only leverage the existence of variability in the latent distribution, requiring only two distinct distributions as a minimal degree of change. This relaxation reduces the need for $O(n)$ distinct distributions, which are common in existing frameworks, and broadens applicability to scenarios where the distribution is not completely invariant. 
This shift, from explicitly controlling different types of variables to achieve a required degree of change or transition, to accommodating general variability in scenarios where the distribution is not completely invariant, represents a significant technical contribution that is essential to address the unique contribution of modeling general noise.\\u201d\\n\\nThanks again for your valuable insight. Please kindly let us know if there are any further questions.\\n\\n\\n**Q2:** Questions on Theorem 1.\\n\\n**A2:** We sincerely appreciate all these questions and the time you spent reviewing our manuscript. After carefully reviewing the mentioned works, we believe that there may exist some potential misunderstanding, which could be clarified by the point-by-point responses below.\\n\\n- **Q2.1:** Can the authors contrast their proof technique with (Zheng et al., 2022, Zheng & Zhang, 2023)?\\n\\n- **A2.1:** Thank you for the question. Compared to Theorem 1 in (Zheng et al., 2022), our setting involves general noise variables, and thus needs the additional assumption of variability to make them distinguishable. In contrast, (Zheng et al., 2022) does not address this challenge as it does not involve disentangling different types of variables in the proof.\\n\\n Compared to Theorem 4.1 (and Theorem 4.2) in (Zheng & Zhang, 2023), our results significantly generalize their conditions by requiring only two conditional distributions to disentangle the latent variables dependent on domains, whereas (Zheng & Zhang, 2023) relies on $n+1$ domains to construct a full-rank linear system for disentangling the changing part. While Theorem 4.1 in (Zheng & Zhang, 2023) includes a variability condition with two domains, it is only used to disentangle the invariant part. 
They then require $n+1$ domains to disentangle the changing part, whereas we identify both invariant and changing latent variables with only two domains.\\n\\n In summary, our proof fully leverages the variability assumption, demonstrating that latent variables can be disentangled from invariant noises with just two domains. This contrasts with (Zheng et al., 2022), which does not require disentanglement of variable types, and (Zheng & Zhang, 2023), which requires $n+1$ domains to disentangle the changing latent variables. We have also added the following proof sketch for further clarification:\\n\\n> \\u201c*Proof Sketch.* We leverage distributional variability across two domains of the latent variables $\\\\mathbf{z}$ to disentangle $\\\\mathbf{z}$ and $\\\\boldsymbol{\\\\epsilon}$ into independent subspaces. To distinguish general noise from latent variables, we use the conditional independence between $\\\\mathbf{z}$ and $\\\\boldsymbol{\\\\epsilon}$ alongside the variability within $\\\\mathbf{z}$. Following (Zheng et al., 2022), a structural sparsity condition then identifies individual components of $\\\\mathbf{z}$ in the nonlinear case. Specifically, for each latent variable, the intersection of parental sets from a subset of observed variables uniquely specifies it. Since we only achieve relations among supports due to the nonparametric setting, there exists an unresolved element-wise transformation. Thus, we achieve element-wise identifiability for the latent variables $\\\\mathbf{z}$ (Defn. 1).\\u201d\\n\\n We hope these discussions and updates could clarify any potential confusion. Thanks again for raising this point. Please kindly let us know if you have any further feedback.\"}", "{\"title\": \"We are very grateful for the time and effort you dedicated and constructive feedback (2/2)\", \"comment\": \"- **Q1.5:** For the variability assumption - why are rectangular sets not included? Where is this assumption used in the proof of Thm. 
1, for instance?\\n\\n- **A1.5:** Thank you for your question. The variability assumption is used to prove that the bottom-left block of the Jacobian, i.e., $\\\\dfrac{\\\\partial \\\\boldsymbol{\\\\epsilon}}{\\\\partial \\\\hat{\\\\mathbf{z}}}$, is zero. As mentioned there, this part of the proof is directly based on steps 1, 2, and 3 in the proof of Theorem 4.2 in (Kong et al. 2022), where the variability assumption is used. For the ease of reference, we have now included the related result in (Kong et al. 2022) as Lemma 1 in Appx. B.1, as well as the full proof with notations being transferred to our setting. Technically, the rectangular sets are not included to build the contradiction needed in the proof.\\n\\n**Q2:** The structural sparsity assumption is extremely strong, albeit somewhat understandable. That said, the paper could be somewhat more realistic when discussing the assumptions - many references are made to real-world data types that do not immediately seem to necessarily follow this framework exactly.\\n\\n**A2:** Thanks so much for your suggestion. Indeed, given the complexity and randomness of the real-world hidden generating process, some scenarios mentioned in our discussion may not perfectly align with the conditions exactly. In light of your constructive feedback, we have removed these examples, and added the following highlight on the limitations:\\n\\n> \\u201cMoreover, for real-world scenarios, it is extremely challenging to make sure that all conditions on the latent data generating process are perfectly satisfied and the distributions are perfectly matched after estimation. Bridging the gap requires a thorough study of the finite sample error and the robustness of the identification, which remains an open challenge in the literature.\\u201d\\n\\nAt the same time, we believe the structural sparsity is more likely to be satisfied when the number of observed variables exceeds the number of latent variables. 
While we fully acknowledge that this assumption is quite strong when the numbers of latent and observed variables are equal\\u2014a common setting in previous work\\u2014our approach allows for additional observed variables, making the structural sparsity condition more natural in our context. As mentioned in L238-245, the structural sparsity condition only requires a subset of the observed variables\\u2014potentially as few as one or two\\u2014to satisfy the conditions. Therefore, in cases where additional observed variables are available or can be introduced (e.g., by adding more microphones), the assumption becomes much more plausible. Empirical evidence from Zheng & Zhang (2023) further supports this, showing that when the number of observed variables is three times the number of latent variables, approximately 80% of random structures satisfy the condition, and this percentage approaches 100% when the ratio increases to five or more.\\n\\n**Q3:** The assumptions on nondegeneracy preclude linear functions, which is also understandable.\\n\\n**A3:** Yes, you are absolutely right\\u2014this assumption precludes purely linear functions where the Jacobian remains invariant. When the Jacobian is invariant, it cannot span the support space and, as a result, fails to capture the structural information necessary for identifiability.\\n\\n\\n**Q4:** A typo in the variability assumption.\\n\\n**A4:** We sincerely thank you for catching this typo. Since the assumption was proposed by previous work (Kong et al. 2022), we have also confirmed your findings with them. Thanks so much for your careful attention. 
We have corrected it in the updated manuscript accordingly.\\n\\n\\n---\\n\\n**References**\\n\\n[1] Hyv\\u00e4rinen et al., Identifiability of latent-variable and structural-equation models: from linear to nonlinear, Annals of the Institute of Statistical Mathematics, 2024\\n\\n[2] Kong et al., Partial identifiability for domain adaptation, ICML 2022\\n\\n[3] Zheng and Zhang, Generalizing nonlinear ICA beyond structural sparsity, NeurIPS 2023\"}", "{\"title\": \"Thanks so much for your encouragement\", \"comment\": \"Thank you so much for your updated rating\\u2014it means a great deal to us and provides significant encouragement! We deeply appreciate your support and thoughtful feedback.\"}" ] }
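The two-domain synthetic setup and the element-wise identifiability criterion discussed in the review thread above can be illustrated with a small, self-contained sketch. This is an illustrative reconstruction, not the authors' code: the dimensions, the mixing map, the permutation, and the monotone transforms standing in for a "recovered" encoder are all assumptions made here for demonstration. Spearman rank correlation is used only because it is invariant to the element-wise monotone transforms that element-wise identifiability (Defn. 1) leaves unresolved.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_z, n_eps = 2000, 3, 2

# Domain-conditional latents z (one Gaussian per domain) and domain-invariant
# noise eps, mirroring the rebuttal's description: means ~ U[-5, 5],
# variances ~ U[0.5, 2.5]. Only z carries domain variability.
z_means = rng.uniform(-5, 5, size=(2, n_z))
z_vars = rng.uniform(0.5, 2.5, size=(2, n_z))
eps_mean = rng.uniform(-5, 5, size=n_eps)
eps_var = rng.uniform(0.5, 2.5, size=n_eps)

# Placeholder smooth mixing x = f(z, eps); the paper's actual f is not given here.
W = rng.standard_normal((n_z + n_eps, n_z + n_eps)) + 2 * np.eye(n_z + n_eps)

def sample(domain, n):
    z = rng.normal(z_means[domain], np.sqrt(z_vars[domain]), size=(n, n_z))
    eps = rng.normal(eps_mean, np.sqrt(eps_var), size=(n, n_eps))
    h = np.concatenate([z, eps], axis=1) @ W
    return z, eps, h + np.tanh(h)  # elementwise-monotone warp of a linear map

z0, e0, x0 = sample(0, n)
z1, e1, x1 = sample(1, n)

# Element-wise identifiability leaves a permutation plus element-wise invertible
# transforms. A hypothetical "recovered" z_hat with exactly that residual ambiguity:
perm = [2, 0, 1]
monotone = [np.tanh, lambda v: v ** 3, np.exp]
z_hat = np.stack([monotone[i](z0[:, perm[i]]) for i in range(n_z)], axis=1)

def spearman(a, b):
    """Rank correlation: invariant to strictly monotone element-wise maps."""
    ra, rb = np.argsort(np.argsort(a)), np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

C = np.abs([[spearman(z_hat[:, i], z0[:, j]) for j in range(n_z)]
            for i in range(n_z)])
```

With this construction, each row of the |Spearman| matrix has a single entry near 1 (the matched latent) and the remaining entries near 0, while the noise coordinates show no shift in mean across the two domains, mirroring the role the variability condition plays in separating $\mathbf{z}$ from $\boldsymbol{\epsilon}$.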
7o6SG5gVev
TestGenEval: A Real World Unit Test Generation and Test Completion Benchmark
[ "Kush Jain", "Gabriel Synnaeve", "Baptiste Roziere" ]
Code generation models can help improve many common software tasks ranging from code completion to defect prediction. Most of the existing benchmarks for code generation LLMs focus on code authoring or code completion. Surprisingly, there has been far less effort dedicated to benchmarking software testing, despite the strong correlation between well-tested software and effective bug detection. To address this gap, we create and release TestGenEval, a large-scale benchmark to measure test generation performance. Based on SWEBench, TestGenEval comprises 68,647 tests from 1,210 code and test file pairs across 11 well-maintained Python repositories. It covers initial tests authoring, test suite completion, and code coverage improvements. Test authoring simulates the process of a developer writing a test suite from scratch, while test completion mimics the scenario where a developer aims to improve the coverage of an existing test suite. We evaluate several popular models, with sizes ranging from 7B to 405B parameters. Our detailed analysis highlights TestGenEval's contribution to a comprehensive evaluation of test generation performance. In particular, models struggle to generate high-coverage test suites, with the best model, GPT-4o, achieving an average coverage of only 35.2\%. This is primarily due to models struggling to reason about execution, and their frequent assertion errors when addressing complex code paths.
[ "test generation", "software engineering", "language models" ]
Accept (Poster)
https://openreview.net/pdf?id=7o6SG5gVev
https://openreview.net/forum?id=7o6SG5gVev
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xFEcgWnQnX", "uJanwvV7x4", "qlvUqAHAsw", "lNfMszWqHg", "lKRcVdOzuu", "jYFe8iGD6F", "jD1s3IX7h2", "aIXzSytfnJ", "W1mCqbj1aY", "TLMBczPldE", "JsbNFbsKNH", "Ink4zyU3nR", "HrqejcLk6G", "63nTh3plpq", "1Bunn6QW2D" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_review", "official_review", "official_comment", "decision", "official_comment" ], "note_created": [ 1734741870247, 1731882308535, 1731882015744, 1731882253483, 1731882030416, 1731882476088, 1730442903638, 1732137472768, 1730348177740, 1733187582718, 1729918479507, 1730796148569, 1731882246040, 1737523800004, 1731882084030 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6893/Area_Chair_DQgk" ], [ "ICLR.cc/2025/Conference/Submission6893/Authors" ], [ "ICLR.cc/2025/Conference/Submission6893/Authors" ], [ "ICLR.cc/2025/Conference/Submission6893/Authors" ], [ "ICLR.cc/2025/Conference/Submission6893/Authors" ], [ "ICLR.cc/2025/Conference/Submission6893/Authors" ], [ "ICLR.cc/2025/Conference/Submission6893/Reviewer_Yezt" ], [ "ICLR.cc/2025/Conference/Submission6893/Reviewer_4d8B" ], [ "ICLR.cc/2025/Conference/Submission6893/Reviewer_FhSJ" ], [ "ICLR.cc/2025/Conference/Submission6893/Reviewer_g3uj" ], [ "ICLR.cc/2025/Conference/Submission6893/Reviewer_4d8B" ], [ "ICLR.cc/2025/Conference/Submission6893/Reviewer_g3uj" ], [ "ICLR.cc/2025/Conference/Submission6893/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6893/Authors" ] ], "structured_content_str": [ "{\"metareview\": \"This paper introduces a new benchmark to support the research on code generation using large language models (LLMs) for real-world software engineering applications. 
Specifically, the benchmark covers tasks including producing initial test cases, test suite completion, and code coverage improvement, with the goal of comprehensive assessment of test generation performance. The paper evaluates multiple LLMs on this benchmark and finds that even the best-performing model struggles with achieving high code coverage.\\n\\nThe reviewers were generally positive about the paper, but also raised a number of questions. The author rebuttal answered most questions satisfactorily. One reviewer did not respond to the rebuttal and the corresponding response looks good to me.\\n\\nTherefore, I recommend accepting the paper and strongly encourage the authors to incorporate all the discussion in the camera-ready copy to further improve the paper.\", \"additional_comments_on_reviewer_discussion\": \"The author rebuttal answered most questions satisfactorily. One reviewer did not respond to the rebuttal and the corresponding response looks good to me.\"}", "{\"comment\": \"**\\u2026details on mutation score calculation\\u2026**\\n\\nThank you for pointing this out. We add more details and references on how the mutants are generated in Appendix D.2.1.\\n\\n**\\u2026lack of quantitative analysis results in the main text is a significant weakness\\u2026**\\n\\nWe agree with your feedback and made these updates in the revised paper version. We refactored the most important parts of our quantitative analysis to be part of the main paper and kept the remaining analysis in the Appendix.\\n\\n**\\u2026the best practices often suggest to write functions that are independent and de-coupled\\u2026**\\n\\nThis is true; however, to effectively generate unit tests one needs knowledge of the entire file under test. Developers organize unit tests at the file level because it makes sense to have common setup (i.e. mocking objects for a file) followed by tests for individual methods in the file. 
Additionally, unit test frameworks such as JUnit generally expect generated tests to be at the file level; it is challenging to run individual tests in most frameworks, as people usually execute tests at the file level. Choosing to model our benchmark at the file level allows us to construct tasks that more closely resemble automated quality assurance, for example generating an entire test suite from code under test or adding to an existing complex test suite. LLM approaches that target test file generation outperform those at the method level (see Rao et al. 2023 in comparison to Nie et al. 2022), indicating that this file-level context is essential for performance.\\n\\n**\\u2026 Rust\\u2026whether these languages fall outside the scope of \\\"real-world unit testing\\\" as claimed by the authors\\u2026**\\n\\nWe argue that Rust would still fall into our paradigm of generating tests at the file level. In Rust, a file would be organized as a series of functions; thus, if TestGenEval were extended to Rust, it would measure the ability of LLMs to test functions in a given Rust file. We changed the wording from *class* under test to *file* under test to improve clarity.\\n\\n**\\u2026using \\\\citep instead of \\\\cite\\u2026**\\n\\nThank you for pointing this out. We agree with your comment and use \\\\citep when appropriate.\"}", "{\"comment\": \"**What is the key difference between the newly created benchmark and the evaluation sets used in prior work on ML for test generation/completion?**\\n\\nTestGenEval is both significantly larger and of higher quality than any prior dataset. Dinella et al. 2022 and Tufano et al. 2020 leverage the ATLAS and Methods2Test datasets, which focus on mapping test methods to their corresponding method under test (or \\u201cfocal\\u201d method). As a result, evaluation is at the method level rather than file level. 
This is much simpler than the large-scale file-level test suite generation and test completion tasks that we target in TestGenEval. Repurposing their datasets would require omitting both of our core tasks. \\n\\nNie et al. and Rao et al. release test generation datasets that could be repurposed; however, their datasets generally consist of simpler programs. Both papers construct their evaluation datasets by trying to build projects and filtering out those that didn\\u2019t build, biasing their evaluation datasets towards simpler projects. We provide a table to compare properties of each dataset. TestGenEval has both complex tests and reasonably long code methods, along with more data points than other evaluation datasets. Rao et al. has longer source method lengths due to increased verbosity of Java code (https://dl.acm.org/doi/pdf/10.1145/3571850), but significantly fewer gold test methods, and lower quality test files (both shorter in length and containing fewer tests on average). The higher number of tests and test length of TestGenEval points to generally higher quality repositories with better testing efforts. We argue that these more closely resemble real-world software testing. This is also evidenced by the very high star count of repositories used in TestGenEval (3,523-78,287 stars).\\n\\n| **Paper** | **Gold Test Methods** | **Average Test Method Len (Tokens)** | **Average Source Method Len (Tokens)** |\\n|------------------|-----------------------|--------------------------------------|-----------------------------------------|\\n| Nie et al. | 5,000 | 86.26 | 42.85 |\\n| Rao et al. | 1,048 | 64.78 | 293.24 |\\n| TestGenEval | 68,647 | 163.37 | 163.40 |\\n\\nOutside of the higher quality of TestGenEval data, TestGenEval is set up in a much more reproducible way compared to prior work. 
We provide docker images for each file pair and an evaluation harness to compute both coverage and mutation score, enabling the community to easily use and extend our dataset. Building around a widely used dataset such as SWEBench also ensures a degree of data quality, as previous issues with testing inconsistencies and indeterminism have already been figured out by the community. \\n\\n**Do you only consider the tests that appeared in PRs in SWEBench?**\\n\\nWe consider the code and test files that appeared in PRs in SWEBench, as these files are involved in real world bugs. We argue that testing targeted at these files is more important than testing arbitrary files in a repository, since we are measuring test quality on files that have had historic bugs in them. Furthermore, this paradigm creates a clean mapping in which code and test files touched by a patch are directly connected with the code in the patch. This clean mapping ultimately leads to a higher quality dataset that the community can use. \\n\\n**\\u2026yours is method-level completion, which sounds more like the \\\"test method generation\\\" task\\u2026**\\n\\nOur completion task involves generating entire methods for the file to be tested, as Rao et al. did for their test method generation task. We updated our phrasing in the revised version of the paper to improve clarity.\"}", "{\"comment\": \"**\\u2026Dependence on Prompts and Temperature Settings\\u2026**\\n\\nWe agree that settings other than 0-shot performance would be relevant (and list this as a limitation in Appendix H). For our tasks, few-shot generation or agents could also be evaluated, and we release all our code and containers to make it easy to evaluate other methods on our benchmark. 
However, we also believe that comparing popular models for zero-shot generation is very relevant as it is close to the setting users often experience in chat applications and many programming tools.\\n\\n**\\u2026Generated tests may fail on the code under test while exposing bugs in the code under test\\u2026the oracle problem\\u2026**\\n\\nWhile we agree that the oracle problem exists (and list this as a limitation in Appendix H), we argue it is less significant for our current setup of TestGenEval. We run our benchmark on the \\u201cpatched\\u201d version of SWEBench files, where each of the files has gone through the pull request process and been approved by reviewers. Such code is higher quality than most code on GitHub, and is thus less likely to contain bugs. Our task setup is also still useful in the context of regression testing and aligns with how developers typically write unit tests. While we might not catch bugs in these files directly, we are still able to catch future regressions with generated tests that obtain high coverage and high mutation score.\"}", "{\"comment\": \"**Which mutation tool and mutants did you use? And how many mutants do you generate for each example?**\\n\\nWe used cosmic-ray to generate mutants and use the default set of mutation operators (https://github.com/sixty-north/cosmic-ray/tree/master/src/cosmic_ray/operators). Executing mutants dominates generation in terms of compute cost (our average source file is 1157 LOC and cosmic-ray takes ~15-30 secs per 1000 LOC). For each example, we generate 1031 mutants on average, with approximately 20% of all files we run mutation testing on timing out in the one hour allocated for mutation testing execution. 
However, our error margin for mutation score is low, with an average mutation score uncertainty of 1.06%, meaning our estimates are accurate even in cases where mutation testing times out.\\n\\n\\n**\\u2026consider renaming \\\"Pass @ 1\\\" to \\\"Any Pass @ 1\\\"...**\\n\\nWe agree that the naming was confusing and updated this in the revised version of the paper.\\n\\n\\n**\\u2026not generating tests with asserts\\u2026how do you detect them?...do you count\\u2026as pass or not pass?**\\n\\nWe detect tests without assertion or exception keywords statically, as not all generated tests will throw a runtime error. We qualitatively also looked at these cases to confirm our keyword match was comprehensive. We mark these cases as not passing so as not to inflate our metrics.\"}", "{\"comment\": [\"We thank all reviewers for their detailed comments. We tried to address all writing-related concerns in our revised version of the paper, with changed text highlighted:\", \"Clarified how our test completion setting matches the test method generation setting introduced by Rao et al. 2023\", \"Changed pass@1 to any pass@1 for our full test suite generation setting\", \"Refactored our analysis section to include the results from quantitative analysis and reduced qualitative analysis to one example, with the other two examples in appendix\", \"Clarified that we are looking to generate a test suite for a *file* under test rather than a *class* under test\", \"Added details on how we generate mutants for mutation testing\"]}", "{\"summary\": \"This paper constructs TestGenEval, a comprehensive and proper benchmark for testing unit test generation abilities of existing LLMs. The benchmark unprecedentedly incorporates mutation scores into its evaluation metric. It also considers multiple generation scenarios including full, first, last, and extra test generation. 
10 models, including a massive Llama 405B, are evaluated on the benchmark, and detailed results are given in the main paper and the appendix.\\nThis paper is leaning towards the rejection side since 1. the authors did not make an effort to evaluate whether the generated tests match the definition of a unit test, merely adding in mutation score to further evaluate test effectiveness, and 2. the models selected for evaluation are questionable.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The introduction of mutation score improves on previous benchmarks and ensures a more proper benchmark\", \"The work introduces the most comprehensive benchmark dedicated to test generation so far\", \"The work evaluated on a large set of models, even on a very expensive 405B Llama\", \"Extensive quantitative/correlation analysis, contamination check, and also detailed qualitative analysis, which shows the good quality of the dataset\"], \"weaknesses\": [\"### Major\", \"The evaluation runs the 405B Llama model, which is quite impressive. However, it is unclear why the Llama model is evaluated but not other closed-source ones (which are much easier to run compared to Llama 405B) like Claude and Gemini.\", \"The paper acknowledges that tests generated by the LLM can be very different from what humans might write. However, there is no procedure in the paper to guarantee that the model actually generates \\u201cunit tests\\u201d as deemed by human developers (except for the prompt). This means that the model can just generate one big complicated test case for all of the generation scenarios considered (full, first, last, extra), which is against the purpose of unit tests in the first place.\", \"The task of test generation is generally more solvable by agents or by providing example usages. 
However, the paper only evaluates vanilla models and didn\\u2019t evaluate even the simple case of adding task-related dependencies into the prompt.\", \"### Misc\", \"Citations are not wrapped in brackets (like Jain et al. when discussing R2E), which is mildly annoying\"], \"questions\": [\"Why is the massive Llama 405B selected in the evaluation rather than other better-performing models like Claude and Gemini?\", \"What effort does the benchmark make to ensure the test matches the conventional definition of a unit test?\", \"Why aren't any agents/tools evaluated?\"], \"flag_for_ethics_review\": [\"'No ethics review needed.'\"], \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for addressing my concerns and providing more details on mutation testing. I have raised my rating to 6.\"}", "{\"summary\": \"The theme of this paper is the introduction and evaluation of a large-scale benchmark named TESTGENEVAL, which is designed to measure the performance of test generation and test completion.\\n\\nTESTGENEVAL is constructed based on the SWEBench dataset and includes 68,647 test cases from 1,210 code and test file pairs across 11 well-maintained Python repositories. \\n\\nThis benchmark aims to fill the gap in existing code generation language model (LLM) benchmarks that focus less on software testing, despite the strong correlation between well-tested software and effective bug detection. \\n\\nTESTGENEVAL covers tasks such as initial test authoring, test suite completion, and code coverage improvement, aiming to comprehensively assess test generation performance. \\n\\nThe paper also evaluates several popular models with parameter sizes ranging from 7B to 405B and provides a detailed analysis of TESTGENEVAL's contribution to the evaluation of test generation performance.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"1. 
Continuous Improvement of Test Suites: TESTGENEVAL supports not only the generation of test suites from scratch but also the completion of existing test suites, which is particularly important for test optimization in continuous integration and continuous deployment (CI/CD) processes.\\n\\n2. Evaluating and Comparing Different Models: TESTGENEVAL provides a standardized environment to evaluate and compare various code generation models, including both open-source and proprietary models, helping researchers and practitioners understand the performance of different models in real-world software testing tasks.\\n\\n3. Test Generation for Real-World Use Cases: TESTGENEVAL is built based on real-world projects, meaning it is closer to actual development testing needs rather than being limited to academic research or toy problems.\", \"weaknesses\": \"1. Dependence on Prompts and Temperature Settings:\\nModel performance is highly dependent on the prompts and temperature parameters used. The paper focuses mainly on 0-shot performance, asking each model to generate an entire test suite or complete a test given the code under test, which may limit the model's performance.\\n\\n2. Risk of Data Contamination:\\nThere is a risk of data contamination in the pre-training data of models. Although the paper suggests that data contamination is unlikely by comparing the perplexity of tests in TESTGENEVAL with common GitHub code of similar length, it remains a potential issue.\\nIn addition, since TESTGENEVAL is adapted from the SWEBench dataset, there is a risk of models overfitting to this specific dataset. \\n\\n3. Computational Cost:\\nThe computational cost of calculating mutation scores is high. Each synthetic bug introduced to the code under test requires an additional test suite execution.\\n\\n4. Assumptions of Test Generation:\\nAll test generation benchmarks assume that the code under test is correct. 
Generated tests may fail on the code under test while exposing bugs in the code under test (a phenomenon known as the oracle problem).\", \"questions\": \"1. I would like to know what is the cost involved in executing TestGenEval? Including length of prompt, required GPU memory, running time and more cost analysis?\\n\\n2. How do the authors ensure the quality of the dataset, including but not limited to possible leakage of data and semantic correctness of the given code?\", \"flag_for_ethics_review\": [\"'No ethics review needed.'\"], \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces TestGenEval,\\na novel benchmark designed to evaluate neural test generation models on real-world Python projects. \\nTestGenEval is built upon 11 popular Python repositories selected from the patch generation benchmark SWE-Bench.\\nIt assesses neural test generation from two key perspectives: test generation from scratch and test completion based on some existing test functions.\\nBoth aspects, particularly test completion, are valuable for real-world software engineering applications, such as IDE plugins. 
\\nThe authors evaluate several popular LLMs on TestGenEval, \\npointing out that even the best-performing model GPT-4o struggles with achieving high code coverage.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"A standard, real-world project-based benchmark for test generation is crucial and useful for advancing software testing research.\", \"TestGenEval is the first benchmark to incorporate mutation score as a metric, a commonly used measure in software engineering to assess the robustness of test cases.\", \"It offers Docker images instrumented with coverage and mutation score, ensuring the reusability of the artifacts.\", \"TestGenEval is also the first benchmark to evaluate file-level test completion, which aligns well with real-world software development workflows.\", \"The authors perform a comprehensive quantitative analysis of the results, demonstrating the correlations and effects of various components within the benchmark.\"], \"weaknesses\": \"* This paper lacks details on mutation score calculation.\\n The authors do not provide specifics on how mutation score is measured in the benchmark. \\n In traditional software engineering, mutation testing involves creating mutants of the code under test to evaluate the robustness of the test cases in detecting these modifications. \\n However, the paper does not clarify the process used to construct program mutants for test generation, \\n which is a non-trivial task that requires further clarification.\\n\\n\\n* The authors claim that real-world unit testing involves reasoning over complex *files*, \\n generating tests for a given *class under test*. \\n The authors also point out as a weakness that other related benchmarks only measure individual test methods rather than an entire test suite. \\n This claim is not well supported with references or evidence, \\n and is not commonly adopted by the software engineering community. 
\\nUnit testing aims to validate the correctness of the smallest building block (and hence, a \\\"unit\\\") in a complex software project.\\n In modern software engineering, the \\\"units\\\" of software are often functions in the codebase.\\n If these functions are tested/verified to be correct, then the composition should be correct without further testing.\\n Therefore, the best practices often suggest writing functions that are independent and de-coupled.\\n The more recent and popular programming languages, for example Rust, often abandon classes and favor function composition\\n over inheritance.\\n Are these languages not related to \\\"real-world unit testing\\\"?\\n\\n* The authors claim that real-world unit testing requires reasoning over complex *files* and generating tests for a given *class under test*. \\n They argue that other benchmarks' focus on individual test methods,\\n rather than entire test suites, is a weakness. \\n However, this claim is not well-supported by references or evidence and is not commonly adopted by the software engineering community.\\n\\n Unit testing aims to validate the smallest building blocks of software, or so-called \\\"units,\\\" \\n which are often functions rather than entire classes. \\n In modern software engineering, if individual functions are tested and verified to be correct, \\n the composition of them is expected to be correct without additional testing. \\n Consequently, best practices emphasize writing independent and de-coupled functions.\\n\\n Furthermore, more recent programming languages, such as Rust, \\n favor function composition over inheritance, reflecting a shift away from class-based design. \\n This raises the question of whether these languages fall outside the scope of \\\"real-world unit testing\\\" as claimed by the authors, \\n despite their increasing relevance in software development.\\n\\n* The lack of quantitative analysis results in the main text is a significant weakness. 
\\n While the authors conduct five different quantitative analyses, all results are placed in the Appendix. \\n If an analysis is mentioned in the main text, it would be more effective to present the corresponding results alongside the discussion. \\n For space considerations, the authors could include only the most important analyses and results in the main text, \\n while moving less critical details to the Appendix. \\n This approach should also be applied to qualitative analyses for better readability and impact.\\n\\n\\n* Minor writing issue: the authors cite all references without using parentheses, regardless of context. \\n In some cases, using `\\\\citep` instead of `\\\\cite` would enhance the reading experience by making the citations more contextually appropriate. \\n Adjusting this would improve the flow and clarity of the text.\", \"questions\": \"Please address the concerns in **Weakness**.\", \"flag_for_ethics_review\": [\"'No ethics review needed.'\"], \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces a new test generation benchmark for LLMs, TestGenEval. It builds on the same repositories as SWEBench, but focuses on extracting the test files and code-test pairs. TestGenEval studies two tasks: test suite generation (generating an entire test suite given a codebase) and test completion (generating test methods for a partial test suite). The generated tests are not only compared against human-written tests based on similarity, but also based on code coverage and mutation score, which are more important metrics for test quality. 
This paper benchmarked several recent LLMs across different sizes on TestGenEval, and found that generating and completing test suites are challenging tasks for current LLMs.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"Included the measurement of code coverage and mutation score as part of the evaluation metrics, which are important metrics for measuring the quality of generated tests.\", \"Conducted a comprehensive set of experiments using LLMs of different sizes and different configurations.\", \"Performed an interesting analysis on the current LLMs' performance on TestGenEval, including the correlation with existing benchmarks and common error types.\", \"Paper is well-written and easy to follow.\"], \"weaknesses\": [\"Not sure if it is best to include only the tests that appeared in the PRs used by SWEBench; this may miss the other tests that did not appear in those PRs (i.e., don't have recent bug-fixing changes), making the data collection process biased.\"], \"questions\": [\"What is the key difference between the newly created benchmark and the evaluation sets used in prior work on ML for test generation/completion (e.g., Rao et al. 2023, Nie et al. 2023, Dinella et al. 2022, Tufano et al. 2020)? Namely, Rao et al. 2023 had a pretty large dataset for pre-training, and evaluated on the task of generating first/last/additional tests. Both Rao et al. 2023 and Nie et al. 2023 also used runtime metrics (compile and pass).\", \"line 142, \\\"we next extract code test file pairs from the code and test files run in the PR\\\": do you only consider the tests that appeared in PRs in SWEBench? Or all the tests in those repositories?\", \"line 189, \\\"This setup is in line with current test completion work Rao et al. (2023); Nie et al. (2023).\\\": this may not be true. The \\\"test completion\\\" in Rao et al. 2023 and Nie et al. 
2023 was statement-level completion, but yours is method-level completion, which sounds more like the \\\"test method generation\\\" task in Rao et al. 2023.\", \"Which mutation tool and mutants did you use? And how many mutants do you generate for each example?\", \"For test suite generation task, consider renaming \\\"Pass @ 1\\\" to \\\"Any Pass @ 1\\\" to reduce confusion.\", \"line 361, \\\"the most common error is not generating tests with asserts\\\": but will that lead to a runtime error, and if not, how do you detect them? Also, do you count the generated tests without asserts as pass or not pass?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**I would like to know what is the cost involved in executing TestGenEval? Including length of prompt, required GPU memory, running time and more cost analysis?**\\n\\nWe generate ~11M tokens for TestGenEval and ~1.5M tokens for TestGenEvalLite (including both our test suite generation and test completion tasks). Runtime for TestGenEval is typically between 12-24 hours for the full benchmark (depending on model capability), across 16 CPUs and 2-4 hours for TestGenEvalLite across 16 CPUs. Running without mutation score significantly reduces runtime to 2-4 hours for TestGenEval and approximately 10-30 minutes for TestGenEvalLite for the same 16 CPU setup. For all settings, we use the full model context for TestGenEval (typically 128k tokens), however it is possible to run with a 32k context window with minimal performance differences (see Appendix F.4 for full details). 
Note that executing TestGenEval is significantly less costly than executing SWEBench (fixes for SWEBench often involve understanding multiple large files and running large trajectories, while TestGenEval focuses on generating test suites and test cases for a single file under test).\\n\\n**How do the authors ensure the quality of the dataset, including but not limited to possible leakage of data and semantic correctness of the given code?**\\n\\nTo answer the data leakage question, we conducted a thorough experiment in line with existing papers on data leakage (https://arxiv.org/pdf/2404.18824, https://arxiv.org/pdf/2309.10677), measuring both n-gram accuracy and perplexity of SWEBenchLite (lite split of SWEBench and what TestGenEvalLite is based on) compared to popular software engineering benchmarks, and GitBug-Java (a benchmark of recent bugs that should not be leaked).\\n\\nBelow are links to each dataset we compared against:\\n\\nDefects4J: https://github.com/rjust/defects4j\\n\\nBugsInPy: https://github.com/soarsmu/BugsInPy\\n\\nGitBug-Java: https://github.com/gitbugactions/gitbug-java\\n\\nSWEBenchLite: https://github.com/princeton-nlp/SWE-bench/tree/main/swebench\\n\\nPerplexity of different models:\\n\\n| **Model** | **Defects4J** | **BugsInPy** | **GitBug-Java** | **SWEBenchLite** |\\n|--------------------|---------------|--------------|-----------------|------------------|\\n| Llama 3.1 8B | 2.04 | 2.44 | 2.33 | 2.41 |\\n| Llama 3.1 70B | 1.76 | 1.98 | 2.07 | 1.93 |\\n| CodeLlama 7B | 1.58 | 1.81 | 1.68 | 1.80 |\\n\\n5-gram match of different models:\\n\\n| **Model** | **Defects4J** | **BugsInPy** | **GitBug-Java** | **SWEBenchLite** |\\n|--------------------|---------------|--------------|-----------------|------------------|\\n| Llama 3.1 8B | 0.44 | 0.38 | 0.38 | 0.34 |\\n| Llama 3.1 70B | 0.51 | 0.47 | 0.45 | 0.47 |\\n| CodeLlama 7B | 0.64 | 0.54 | 0.58 | 0.53 |\\n\\nAs we can see from both tables, the 5-gram match and perplexity 
of SWEBenchLite are relatively similar to GitBug-Java (a recent unleaked benchmark). For an older and likely leaked benchmark such as Defects4J, we can observe much greater differences in perplexity and 5-gram match, indicating that data contamination is unlikely to be an issue for TestGenEval at the moment.\\n\\nFor semantic correctness of code, we measure both coverage and mutation score. While it is possible to achieve 100% code coverage by just invoking all lines in the method under test and asserting true, it is only possible to achieve high mutation score if the generated tests can discriminate between buggy and fixed code. The need to check more than syntactic correctness of generated solutions was a major motivation for including mutation score (a strong execution metric) as one of our evaluation metrics.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"**Why is the massive Llama 405B selected in the evaluation rather than other better-performing models like Claude and Gemini?**\\n\\nWe chose to evaluate both strong open and closed source models. We wanted to include open-source models as they are more customizable and allow for more privacy, which is important for part of the community. Open source models are also not gated behind paid APIs like Claude and Gemini. \\n\\nFor closed-source models, GPT-4o is widely used and recognized as one of the best models for coding tasks. For open-source models, Llama 405B is a strong model achieving performance similar to the most widely-used closed-source models such as GPT-4o, Claude and Gemini. 
Although Llama 405B is large, we could run it on our servers and were not limited by API costs as with closed-source models.\\n\\n**What effort does the benchmark make to ensure the test matches the conventional definition of a unit test?**\\n\\nWe follow prior work in generating prompts for unit test generation, and additionally only measure mutation score on the file under test (thus if the model were to write an integration test, it would perform poorly on our benchmark, with a low mutation score on the file under test). We filter out tests that do not contain assertions or expect exceptions, marking all such tests as not passing on the code under test. Both our filtering and metrics give us confidence that the model generates tests that match the conventional definition of a unit test. We also perform extensive qualitative analysis, and in all cases we find that the model generates tests that resemble a conventional unit test. We provide a website with all model generations (linked in the paper) if you want to verify our conclusions yourself. \\n\\n\\n**Why aren't any agents/tools evaluated in the evaluation?**\\n\\nThis is one of the limitations we mention in Appendix H. To the best of our knowledge, there is no open agent targeting unit test generation, and general agents (e.g. Devin) are not publicly available. We consider creating our own agents for existing models to be out of scope for this paper. We wanted to focus on a simpler setting, where we are able to compare models fairly (with lower variance and susceptibility to agent-specific hyperparameters). For those looking to build and run testing agents, we open source all code and dockerized containers for evaluation.\"}
7nyJBVCTGQ
LiFT: Learning to Fine-Tune via Bayesian Parameter Efficient Meta Fine-Tuning
[ "Minyoung Kim", "Timothy Hospedales" ]
We tackle the problem of parameter-efficient fine-tuning (PEFT) of a pre-trained large deep model on many different but related tasks. Instead of the simple but strong baseline strategy of task-wise independent fine-tuning, we aim to meta-learn the core shared information that can be used for unseen test tasks to improve the prediction performance further. That is, we propose a method for {\em learning-to-fine-tune} (LiFT). LiFT introduces a novel hierarchical Bayesian model that can be superior to both existing general meta learning algorithms like MAML and recent LoRA zoo mixing approaches such as LoRA-Retriever and model-based clustering. In our Bayesian model, the parameters of the task-specific LoRA modules are regarded as random variables, where these task-wise LoRA modules are governed/regularized by higher-level latent random variables, which represent the prior of the LoRA modules and capture the shared information across all training tasks. To make the posterior inference feasible, we propose a novel SGLD-Gibbs sampling algorithm that is computationally efficient. To represent the posterior samples from the SGLD-Gibbs, we propose an online EM algorithm that maintains a Gaussian mixture representation for the posterior in an online manner in the course of iterative posterior sampling. We demonstrate the effectiveness of LiFT on NLP and vision multi-task meta learning benchmarks.
[ "Bayesian methods", "Parameter efficient fine-tuning", "meta learning" ]
Accept (Spotlight)
https://openreview.net/pdf?id=7nyJBVCTGQ
https://openreview.net/forum?id=7nyJBVCTGQ
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zR8LTo2mRw", "wDC7Q3DkO8", "h023GIldjE", "Shv6C1WiLL", "SfPIAmLchM", "PiBjqXXH8p", "OcJNgG5Eit", "LxQgcnAFp0", "LoSNj3JV1w", "JWPTVUlAqD", "FkDXC6qHDZ", "ETCiuYe5Rm", "BscREaywRA", "9XOTMkdJIs", "4z0gS2ShUf", "0sk8fPnUO8" ], "note_type": [ "official_review", "official_review", "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment" ], "note_created": [ 1730654184662, 1730619271825, 1734484127959, 1732201555096, 1732288287203, 1732198078667, 1730841492064, 1732196481784, 1729754575744, 1732288818280, 1732827713204, 1732657836374, 1732200200072, 1737524005061, 1732623587908, 1732198838146 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9775/Reviewer_p6L8" ], [ "ICLR.cc/2025/Conference/Submission9775/Reviewer_3TVT" ], [ "ICLR.cc/2025/Conference/Submission9775/Area_Chair_miKZ" ], [ "ICLR.cc/2025/Conference/Submission9775/Authors" ], [ "ICLR.cc/2025/Conference/Submission9775/Reviewer_pSzU" ], [ "ICLR.cc/2025/Conference/Submission9775/Authors" ], [ "ICLR.cc/2025/Conference/Submission9775/Reviewer_PRGJ" ], [ "ICLR.cc/2025/Conference/Submission9775/Authors" ], [ "ICLR.cc/2025/Conference/Submission9775/Reviewer_pSzU" ], [ "ICLR.cc/2025/Conference/Submission9775/Reviewer_3TVT" ], [ "ICLR.cc/2025/Conference/Submission9775/Reviewer_PRGJ" ], [ "ICLR.cc/2025/Conference/Submission9775/Reviewer_p6L8" ], [ "ICLR.cc/2025/Conference/Submission9775/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission9775/Authors" ], [ "ICLR.cc/2025/Conference/Submission9775/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The authors propose a parameter efficient finetuning scheme called learning-to-finetune (LiFT) that can adapt a model not only to a single, but to a set of related tasks.\\n\\nAt the heart of LiFT 
sits a Bayesian meta-training method that is executed on a set of related fine-tuning tasks. It uses hierarchical priors for the PEFT parameters to split task-specific from task-agnostic knowledge and runs stochastic gradient Langevin dynamics (SGLD) for posterior inference. The transferable task-agnostic knowledge is later used as a prior for test-time adaptation. \\n\\nWhile they explain their method using LoRA, the method is general and can be adapted easily to any other PEFT scheme.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper is in general well written and I enjoyed reading it. Ideas are explained in detail. For most of the math, intuitive explanations are provided. Hence, it is easy to understand the proposed meta-learning method and all the tricks that are needed to make it work.\", \"the_paper_contains_two_creative_ideas\": \"1) Their particular hierarchical Bayesian model for the PEFT parameters. 2) The combination of Gibbs sampling and SGLD for memory-efficient posterior inference.\", \"weaknesses\": [\"There are several weaknesses to this paper:\", \"The main claim of the paper is that it is beneficial to use their Bayesian formulation for PEFT of models to a set of related tasks. They give an intuitive explanation that one of their latent variables learns task-agnostic and the other task-specific adaptations. While this sounds reasonable, there is no analysis of whether this claim is actually true and, hence, whether their hierarchical model actually makes sense. However, experimental sanity checks could easily be made, e.g. by comparing to a non-hierarchical model. In the end I am completely puzzled about what part causes the demonstrated increase in performance: 1) The formulation of the problem or 2) favorable training dynamics of this quite large training algorithm.\", \"Important ablation studies are not provided: Their hierarchical model requires the choice of two variances $\\\\sigma^2$ and $\\\\beta^2$. 
Only one set of values is provided. However, to judge how brittle the model is, some ablations would be helpful.\", \"The paper does not give any idea how the proposed algorithm scales with #adapted parameters (can it only be used with PEFT methods or even with FFT?). Training requires running the SGLD sampling. From the appendix I gathered that it requires quite a few steps, i.e., 2000 burn-in and 1000 warmup steps. It is not obvious how this translates to training time.\", \"The paper proposes an online EM algorithm to fit a GMM to posterior samples in an iterative way, making it unnecessary to store the full set of posterior samples. While stating that this is a novel contribution, there is already a lot of work on this problem. Keywords are: 1) incremental EM or 2) streaming GMMs [1], [2]. Related works are not referenced and a comparison is missing.\", \"[1] Hosseini, Reshad, and Suvrit Sra. \\\"An alternative to EM for Gaussian mixture models: batch and stochastic Riemannian optimization.\\\" Mathematical programming 181.1 (2020): 187-223.\", \"[2] Karimi, Belhal, et al. \\\"On the global convergence of (fast) incremental expectation maximization methods.\\\" Advances in Neural Information Processing Systems 32 (2019).\"], \"questions\": [\"After thoroughly reading the paper, some questions remain:\", \"Is there any experimental evidence that $\\\\phi$ really learns meaningful task-agnostic adaptations?\", \"How brittle is the training if we change the parameters $\\\\sigma^2$ and $\\\\beta^2$ of the hierarchical model? Did you do any experiments there?\", \"Why do you choose a GMM to model $\\\\phi|\\\\{D_i\\\\}_{i=1}^N$?\", \"I think having a multi-modal distribution goes against your idea that $\\\\phi$ learns task-agnostic adaptations, because each mode can represent a specialization. More specifically, if the number of modes $M$ is equal to the number of tasks $N$, each mode of $\\\\phi|\\\\{D_i\\\\}_{i=1}^N$ can specialize to one task. 
In this case you would have something very similar to a stochastic version of the mixture-of-LoRAs idea. How do you prevent this from happening?\", \"After test-time adaptation, is the model output stochastic or deterministic? I.e. do you continue to run the SGLD sampling for $\\\\theta|\\\\phi$ or do you use some statistics and perform just a deterministic forward pass?\", \"In the stochastic case: Why don't you provide confidence intervals for the LiFT results?\", \"How does LiFT scale with the #trainable parameters?\", \"What is the difference between warm-up samples and burn-in samples for the SGLD-Gibbs sampling?\", \"I am looking forward to your explanations.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors proposed a modulated meta-learning scheme where the modulation is the LoRA parameter of the large model. Specifically, each task-specific parameter is represented with LoRA, and the base model is shared across all tasks. To learn such a model, the authors proposed a hierarchical Bayesian meta-learning method called LiFT. Here, the task-specific LoRA is modeled as being sampled from a prior distribution, for which the authors suggested an efficient sampling method.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The overall writing is clear, and the method itself is sensible.\\n\\nThe proposed sampling method is efficient. I think it would be great to show an experiment that demonstrates the efficiency gain.\", \"weaknesses\": \"Missing critical related works and comparison. Currently, there are many works that consider meta-learning with modulation (i.e., a few parameter updates from the base model, such as LoRA). For instance, CAVIA [1] is the first paper that suggested meta-learning with modulation. 
Furthermore, a more recent method, CNAPs [2], combines amortization-based meta-learning with modulation. Several works consider modulated meta-learning as follows: FiLM modulation with MAML [3,4], LoRA modulation with MAML [5], Scaling CNAPs to large-scale meta-learning [6,7], and LoRA modulation with amortization-based meta-learning [8].\\n\\n[1] Fast Context Adaptation via Meta-Learning, ICML 2019\\\\\\n[2] Fast and flexible multi-task classification using conditional neural adaptive processes, NeurIPS 2019\\\\\\n[3] From data to functa: Your data point is a function and you can treat it like one, ICML 2022 \\\\\\n[4] COIN++: Neural Compression Across Modalities, TMLR 2022\\\\\\n[5] Modality-Agnostic Variational Compression of Implicit Neural Representations, ICML 2023\\\\\\n[6] Memory Efficient Meta-Learning with Large Images, NeurIPS 2021\\\\\\n[7] Improved Few-Shot Visual Classification, CVPR 2020\\\\\\n[8] Online Adaptation of Language Models with a Memory of Amortized Contexts, arXiv 2024\\n\\n----\\n\\nNeed to consider more recent baselines, and more effective meta-learning baselines. Currently, most of the meta-learning baselines are highly outdated. Furthermore, there are more effective and recent baselines [1,2,3]. Typically, [3] suggested the interpolation of sparse experts (i.e., only a few parameter updates), which has similarities with the current approach (i.e., LoRA modulation). \\n\\n[1] Meta-learning with warped gradient descent, ICLR 2020\\\\\\n[2] Bootstrapped meta-learning, ICLR 2022\\\\\\n[3] Unleashing the Power of Meta-tuning for Few-shot Generalization Through Sparse Interpolated Experts, ICML 2024\\n\\n---\\n\\nI think the experiment application needs to be more motivating. The main purpose of using LoRA is to fine-tune large models to reduce the computation burden or overfitting. However, the current setup is mostly conducted in small-scale networks. 
I believe showing whether the proposed method scales to large-scale LLMs (e.g., more than 1B params) would be an interesting and motivating example.\\n\\n---\\n\\nI agree that using meta-learning could be beneficial, but I don't understand the advantages of modeling with Bayesian meta-learning specifically. From an uncertainty perspective, it makes sense, but it's still possible to \\\"jointly learn the source task\\\" without a Bayesian approach. I'm particularly concerned about whether this proposed sophisticated sampling technique will truly scale with large models.\", \"questions\": \"See the questions above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper proposes a hierarchical Bayesian meta-learning framework for parameter-efficient fine-tuning (PEFT). The key idea is to model LoRA parameters as latent variables drawn from higher-level, task-agnostic variables, thereby enabling flexible knowledge sharing across tasks. To handle the complexity and scale of the posterior inference, the authors introduce an SGLD-Gibbs sampling algorithm and an online EM approach to approximate the posterior distribution efficiently.\\n\\nAll reviewers agreed that the paper offers a novel and principled approach to meta-learning with PEFT. Initial concerns included the absence of comparisons with more recent meta-learning methods, limited theoretical discussion regarding the asynchronous update scheme, and a lack of certain ablation studies. The authors addressed these points thoroughly during the rebuttal phase. After these clarifications and additional experiments, all reviewers ultimately recommended acceptance.\\n\\nAfter reading the paper, reviews, and discussions, the AC believes that the paper makes a strong and timely contribution. 
By extending the idea of PEFT to a hierarchical Bayesian meta-learning framework, it provides a principled way to decompose LoRA parameters into task-agnostic and task-specific components. The experiments confirm the scalability and effectiveness of the proposed method. \\n\\nAC encourages the authors to incorporate the additional results and discussions from the rebuttal\\u2014especially those involving stronger meta-learning baselines and more comprehensive ablation studies\\u2014into the final version of the paper.\", \"additional_comments_on_reviewer_discussion\": \"During the discussion, Reviewer PRGJ raised concerns about theoretical and empirical validation for the asynchronous SGLD-Gibbs updates. Reviewer p6L8 emphasized the need for further ablations, including checks on the necessity of the hierarchical Bayesian framework, the robustness of chosen hyperparameters, and the utility of a Gaussian mixture rather than a single Gaussian. Reviewer 3TVT requested additional comparisons with more recent meta-learning methods and confirmation of scalability to larger models. Reviewer pSzU sought statistical significance assessments and clarifications on baseline configurations.\\n\\nIn response, the authors conducted new experiments to compare LiFT against non-hierarchical baselines, demonstrated that their mixture-based posterior representation outperforms simpler alternatives, offered sensitivity analyses for key hyperparameters, and included comparisons with stronger baselines. These steps addressed the core concerns, convincing the reviewers to suggest acceptance.\"}", "{\"title\": \"Thank you for your valuable feedback!\", \"comment\": \"> 1. More meta-learning baselines.\\n\\nMost methods the reviewer suggested assume small class cardinality cases (eg, learning the class prototypes or linear readout heads), hence they are not straightforwardly applicable to open-ended NLP (Crossfit) benchmarks. 
On the VTAB benchmark, however, we have been able to implement ProtoNet and the more recent memory-efficient version, ProtoNet-LITE. The latter (LITE) essentially aims to represent the prototype vectors using the entire support data (instead of a minibatch), but to remedy the computational overhead they set a randomly chosen large portion of the support data as non-backpropagatable. \\n\\n[ProtoNet-LITE] Memory Efficient Meta-Learning with Large Images, NeurIPS 2021\\n\\nThe results are shown in the following table (the full scores for individual tasks are shown in Table 2 of the revised paper).\\n\\n| | Non-natural -> Natural | Non-special -> Special | Non-structured -> Structured |\\n|:----:|:----:|:----:|:----:|\\n| No Meta Train | 78.0 | 84.3 | 54.4 | \\n| Union Train | 79.0 | 84.5 | 55.2 | \\n| MAML | 79.6 | 85.1 | 55.5 | \\n| FO-MAML | 79.1 | 84.8 | 55.4 | \\n| i-MAML | 79.2 | 85.0 | 55.9 | \\n| Reptile | 79.2 | 84.2 | 55.4 | \\n| ProtoNet | 79.8 | 84.3 | 55.2 | \\n| ProtoNet-LITE | 79.2 | 84.4 | 55.3 | \\n| LiFT K=1 (Ours) | 80.6 | 85.6 | 58.3 | \\n| LiFT K=3 (Ours) | 80.6 | 85.7 | 59.1 | \\n| LiFT K=5 (Ours) | 80.6 | 86.3 | 59.9 | \\n\\n\\n> 2. The results do not include standard deviations. Are the improvements with LiFT statistically significant?\\n\\nDue to the computational overhead, it was time- and compute-expensive to run multiple random-seed experiments, and we only managed to run a single experiment. However, we were able to perform 5 random-seed runs on the Crossfit-CLS45 split to collect standard deviations and $p$-values. 
The results are shown in the following table ($p$-values of the existing methods against our LiFT-K=5 shown in the parentheses):\\n\\n| | No Meta Train | Union Train | MAML | FO-MAML | i-MAML | Reptile | \\n|:----:|:----:|:----:|:----:|:----:|:----:|:----:|\\n| Score | $56.27\\\\pm1.30$ | $56.79\\\\pm0.59$ | $52.82\\\\pm1.05$ | $57.70\\\\pm1.18$ | $59.03\\\\pm0.65$ | $58.26\\\\pm0.66$ | \\n| $p$-value | ($3.26 \\\\times 10^{-6}$) | ($3.63 \\\\times 10^{-8}$) | ($3.72 \\\\times 10^{-8}$) | ($8.02 \\\\times 10^{-6}$) | ($8.81 \\\\times 10^{-7}$) | ($3.36 \\\\times 10^{-7}$) | \\n\\n| | BMAML M=5 | ABML | MAML-Mix K=5 | i-MAML-Mix K=5 | Retriever Mixture K=5 | Retriever Mixture K=all | \\n|:----:|:----:|:----:|:----:|:----:|:----:|:----:|\\n| Score | $57.98\\\\pm0.59$ | $55.74\\\\pm1.25$ | $53.87\\\\pm0.12$ | $58.99\\\\pm1.20$ | $55.82\\\\pm1.42$ | $52.58\\\\pm1.11$ | \\n| $p$-value | ($1.09 \\\\times 10^{-7}$) | ($1.47 \\\\times 10^{-6}$) | ($7.19 \\\\times 10^{-12}$) | ($4.85 \\\\times 10^{-5}$) | ($4.06 \\\\times 10^{-6}$) | ($4.87 \\\\times 10^{-8}$) | \\n\\n| | Retriever Fusion K=5 | Retriever Fusion K=all | MBC-$\\\\mu$ K=5 | MBC-$\\\\mu$ K=10 | LiFT K=5 (Ours) | | \\n|:----:|:----:|:----:|:----:|:----:|:----:|:----:|\\n| Score | $56.03\\\\pm1.69$ | $53.77\\\\pm1.29$ | $56.17\\\\pm1.05$ | $55.41\\\\pm0.87$ | $63.87\\\\pm0.31$ | | \\n| $p$-value | ($1.69 \\\\times 10^{-5}$) | ($3.52 \\\\times 10^{-7}$) | ($6.23 \\\\times 10^{-7}$) | ($8.08 \\\\times 10^{-8}$) | - | |\\n\\nAs shown, our LiFT model outperforms the existing methods statistically significantly.\\n\\nWe have added this result in the revised paper (Appendix E.8 and Table 14). \\n\\n\\n> 3. Some limitations, such as the joint training on the training tasks, which can be infeasible for large-scale datasets.\\n\\nWe did mention this limitation in **Sec. 4.5 Utilizing Task-wise Pre-fine-tuned Models**. In this section, we also introduced a heuristic remedy of clustering pre-fine-tuned models. 
Table 5 shows the evaluation of this remedy, and the results are promising. \\n\\n\\n> 4. Which architectures and hyperparameters are used?\\n\\nWe used the same architectures and fair hyperparameters for the MAML baselines (Appendix C). We agree that MAML is not scalable to large feature extractor networks. For fair comparison and for efficiency, we apply MAML in the same way that we apply LiFT, i.e., we train MAML-of-LoRAs rather than MAML on the raw ViT-B/16 backbone, which would be intractable and unstable.\\n\\n\\n> 5. Multiple inner steps for MAML.\\n\\nWe have run experiments to see the effect of multiple inner-loop updates in MAML. On the Cross-fit CLS-45 benchmark, increasing the number of inner iterations can improve the test performance at the cost of memory, but MAML still underperforms our LiFT by a large margin. \\n\\n| # inner iters in MAML | 1 | 2 | 3 | 5 | LiFT (K=5) |\\n|:----:|:----:|:----:|:----:|:----:|:----:|\\n| CLS-45 | 52.68 | 48.48 | 55.73 | 56.02 | 64.12 | \\n\\nMore than 5 iters in MAML incurred OOM issues.\\n\\nWe have added this result in the revised paper (Appendix E.7 and Table 13).\"}", "{\"title\": \"Response to the authors' comments\", \"comment\": \"I would like to thank the authors for addressing my concerns in the revised manuscript. I believe this is a good paper and will maintain my original score.\"}", "{\"title\": \"Thank you for your valuable feedback! (Part 1)\", \"comment\": \"> 1. Why the hierarchical Bayesian method works.\\n\\nTo show the importance of hierarchical modeling, we have implemented and tested two *non-hierarchical* Bayesian models. In the first model, the PEFT LoRA parameters are treated as random variables as in ours, but they are shared across all tasks. Hence the posterior inference in this case amounts to learning task-shared (task-agnostic) information from meta-training data. We used the SGLD posterior inference. This model performs far worse than our hierarchical model, as shown in the table below. 
\\n\\nAs a second non-hierarchical Bayesian model, we can think of a model where each task is represented by its own PEFT LoRA parameters, but there is no governing higher-level random variables as in our LiFT model. Consequently this model aims to learn only task-specific information, and would not transfer anything to novel tasks. Hence it can be seen as a Bayesian version of \\\"No Meta Train\\\" method, which only runs test-time adaptation via SGLD posterior inference. This model also performs poorly compared to our hierarchical model as shown in the following table.\\n\\n| | Non-hierarchical (Shared PEFT) | Non-hierarchical (Task-wise PEFT) | LiFT (K=1) | LiFT (K=3) | LiFT (K=5) | \\n|:----:|:----:|:----:|:----:|:----:|:----:|\\n| CLS-45 | 55.62 | 56.74 | 62.96 | 63.37 | 64.12 | \\n\\nWe have added this result in the revised paper (Appendix E.2 and Table 7).\\n\\n\\n> 2. Need ablation study on $\\\\sigma^2$ and $\\\\beta^2$.\\n\\nTo see the robustness to these hyperparameters, we have run the ablation study with our LiFT $K=5$ model on the Crossfit CLS-45 task split. As shown in the following table, the test performance is quite robust to the choice of ($\\\\sigma^2$, $\\\\beta^2$).\\n\\n($\\\\sigma^2,\\\\beta^2$) | 0.001 | 0.005 | 0.01 (reported) | 0.05 | 0.1 | 0.2 | 0.3 | 0.5 | \\n|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|\\n| Score | 62.87 | 63.05 | 64.12 | 63.62 | 62.75 | 62.75 | 61.48 | 61.73 | \\n\\nPlease see also the plot of this ablation study in our revised paper (Appendix E.3 and Figure 7).\\n\\n\\n> 3. How the proposed algorithm scales with #adapted parameters (can it only be used with PEFT methods or even with FFT?). Training times for the warm-up and burn-in steps.\\n\\nThere is no point in adapting the full backbone parameters (FFT), which is known to overfit and underperform PEFTs in situations with modest downstream task sizes.\\n \\nWe report the wall clock training times (on a single V100 GPU) in the following table. 
Our LiFT with $K=5$ GMM is compared with the two baselines \\u201cUnion Train\\u201d and \\u201cMAML\\u201d on two different PEFT (rank 4 and 64) cases. The result below signifies that our model is as efficient as the vanilla SGD update (ie, \\u201cUnion Train\\u201d) for both low and high ranks, thanks to our efficient SGLD-Gibbs sampler. \\nAsymptotically (in Big-$O$ notation), both the vanilla SGD update and our SGLD-Gibbs (Eq.9-11) $+$ online EM (Eq.13-14) take the same $O(T+d)$ time per iteration where $T$ is the time for the backbone forward/backward computation for evaluating $\\\\nabla_{\\\\theta_i^a} \\\\log p(D_i|\\\\theta_i^a)$ and $d$ is the number of PEFT parameters. This is because of the Gaussian $p(\\\\theta_i^a|\\\\phi)$ which allows analytic gradient $\\\\nabla \\\\log p(\\\\theta_i^a|\\\\phi)$ with respect to both $\\\\theta_i^a$ and $\\\\phi$. The EM iteration also takes $O(d)$ time.\\n\\n| | Rank 4 LoRA | Rank 64 LoRA |\\n|:----:|:----:|:----:|\\n| (LiFT K=5) Per-task/iter time (after burn-in) | 0.268 secs | 0.287 secs |\\n| (\\u201cUnion Train\\u201d) Per-task/iter time | 0.242 secs | 0.270 secs |\\n| (\\u201cMAML\\u201d) Per-task/iter time | 0.372 secs | 0.380 secs |\\n\\nThe warm-up steps in our method refers to the vanilla SGD steps for the first 1000 iterations, and the burn-in steps amounts to running the proposed SGLD recurrences for the next 1000 iterations (without collecting the posterior samples). After the burn-in steps, we collect the posterior samples while running the SGLD. The wall clock times for these two stages are shown below, and they are quite reasonable times. 
\\n\\n| | Rank 4 LoRA | Rank 64 LoRA |\\n|:----:|:----:|:----:|\\n| (LiFT K=5) Warm-up time (1~1000th steps) | 219 secs | 233 secs | \\n| (LiFT K=5) Burn-in time (1001~2000-th steps) | 277 secs | 279 secs | \\n\\nWe have added the above results in the revised paper (Appendix E.5, Table 10 and 11).\\n\\n*(Our responses continue in the thread below.)*\"}", "{\"summary\": \"The paper develops a general fine-tuning strategy for foundational models that is based on the meta-learning principle. This way not only the foundational models become shareable, but also the layers responsible for task-specific fine-tuning can be accumulated and shared to be used on unseen tasks with not enough fine-tuning data. The core contribution of the paper is the Bayesian methodology that casts meta-learning (and hence task-specific fine-tuning) as Bayesian sampling exercise. Technically, this is implemented via a modification of SGLD and the online EM algorithm. Empirical results are obtained from NLP and Vision tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Overall, the idea seems sufficiently novel and interesting\", \"The empirical evaluation is extensive and convincing\", \"Results are state of the art\"], \"weaknesses\": [\"The justification of SGLD-Gibbs Sampling is only empirical through a toy example in Appendix A. A theoretical justification showing the required convergence would have significantly strengthened the contribution.\", \"Ablation studies are not comprehensive enough, failing to support major decisions made in the algorithm design step: the $J$-term update and the Online-EM Mixture for Posterior Approximation. See questions section for details.\", \"It looks like literature review could be updated with the online EM literature. I am not sure if the online EM algorithm is genuinely novel given the lack of analysis of the related work on this topic in the paper. 
See questions for details.\", \"It is unclear if the code is being open-sourced.\"], \"questions\": [\"$J$ appears in (11) out of nowhere. Could you please motivate the need for this term more clearly in the text transition between (7) and (9)? It seems to be the core algorithmic contribution that does not follow trivially from the original SGLD formulation in (7-8). It feels like by not discussing it in sufficient detail the authors basically undersell their contribution.\", \"Can you say at least something theoretical about your approximations, to strengthen the theory? I understand that a convergence rate analysis might be too much of an ask, but if we talk about means and if you take the expectations of (9-11), will they match the expectations of equations (7-8) in the limit?\", \"On a related note, can you run an ablation study comparing against a more naive version of the algorithm that does not rely on $J$ updates? For example, you could use the sample approximation of the sum in (7) by the log-probability of $\\\\theta_{i}^a$ in a given update round. In my view, this could further strengthen the algorithmic contribution of the paper, or simplify the algorithm if the sample approximation has the same or better accuracy.\", \"\\\"However, we aim to enrich it by a mixture of Gaussians to better approximate the true posterior that is inherently a multi-modal distribution\\\". Can you provide the results of an ablation study comparing against a single Gaussian, confirming that such enrichment is actually useful? This is especially important given that the Online-EM Mixture for Posterior Approximation is necessary only to support the GMM. If the GMM is not provably necessary, the value of the Online-EM Mixture contribution is questionable.\", \"Since you are claiming online EM as a technical contribution, could you please update the related work section with the online EM literature? 
For example: https://arxiv.org/abs/2207.14019, https://www.diva-portal.org/smash/get/diva2:857377/FULLTEXT01.pdf, https://www.sciencedirect.com/science/article/abs/pii/S0167947304003263. Please discuss the novelty of your work w.r.t. existing contributions and motivate why you need a new version of online EM.\", \"\\\"Hence we stick to the simple average of the metrics over all test tasks regardless of the metric types.\\\" I understand that the relative lift could be tricky, because of division by small numbers. Could you please consider reporting the mean of the absolute deltas between the baseline and the candidate algorithm? I believe this could be a more robust and more statistically significant measure of accuracy improvement than reporting the sum of raw metric values.\", \"Will code be open-sourced?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you for your valuable feedback!\", \"comment\": \"> 1. A theoretical justification of the SGLD-Gibbs\\n\\nThe convergence of the SGLD recurrences to the target distribution has been studied recently in (Zou et al., 2021, Xu et al., 2018, Raginsky et al., 2017). Also it is well known in the MCMC literature that the Gibbs sampling, if all the variables are visited sufficiently many times, converges to the stationary distribution since it satisfies both the detailed balance equation and ergodicity. Then since our SGLD-Gibbs has the transition kernel in MCMC that is a composition of these two operators, it should converge to the target distribution. \\n\\nWe have added the above argument in our revised paper (Footnote 1 in p.4).\\n\\nHowever, what is not evident in theory and needs further theoretical investigation, is whether our asynchronous update scheme in Eq.(9-11) would also converge to the stationary distribution of the chain. 
This is mainly because the sum of the gradients $\\\\nabla_\\\\phi \\\\log p(\\\\theta_i^a|\\\\phi)$ over all tasks $i$ in the $\\\\phi$ update Eq.(7) is not exact in the asynchronous scheme since we use the cavity sum (ie, the sum except the current task $i$) that is computed from the old $\\\\phi$s. \\n\\nWe will be investigating further this theoretical analysis, but it may be difficult for us to come up with a decent theoretical conclusion during this rebuttal stage. \\n\\n\\n> 2. An alternative version to the $J$ update scheme. Eg, sample approximation of the sum in (7).\\n\\nTo see the impact of our $J$ update strategy (Eq. 9-11), we have run the stochastic approximate version of (7), which replaces the average of $\\\\nabla_\\\\phi \\\\log p(\\\\theta_i|\\\\phi)$ by the current task iterate alone. As shown in the table below, it has decent performance but slightly underperforms our original proposal of the $J$ update strategy. We thank the reviewer for an interesting suggestion, and we have mentioned it in the revised paper (Please see Appendix E.4 and Table 9).\\n\\n| | \\\"Sample approximation\\\" | \\\"$J$ update\\\" (our original approach) |\\n|:----:|:----:|:----:|\\nCLS-45 | 63.61 | 64.12 |\\n\\n\\n> 3. Related work section with the online EM literature.\\n\\nWe realize that there have been many prior works on the online EM methods in the literature. We have cited them along with the papers suggested by the reviewer in the extended related work section Appendix D in the revised paper. Although we find that none of them are identical to the one proposed in our paper, most are similar in nature to ours (e.g., exploiting the recursive structure of the EM). Hence we believe that these prior online EM methods can be employed in our online posterior estimation, and can be equally successful. \\n\\nThe discussions on the related online EM algorithms can be found in Appendix D in the revised paper.\\n\\n> 4. 
Code open sourcing.\\n\\nWe consider releasing our code should the paper be accepted.\\n\\n\\n> 5. Motivation on the $J$ update scheme (Need more elaboration).\\n\\nAlthough $J$ is actually introduced in L:179 p.4 where we stated that $J$ represents and maintains $\\\\sum_i \\\\nabla_\\\\phi \\\\log p(\\\\theta_i|\\\\phi)$, we now elaborate this further in the revised paper. Please see the text before Eq.(9). We reiterate it as follows: \\n\\nAnother computational bottleneck is the sum of the gradients $\\\\nabla_\\\\phi \\\\sum_i \\\\log p(\\\\theta_i^a|\\\\phi)$ over all $i=1,\\\\dots,N$ for each $\\\\phi$ update in (7). To remedy this issue, we introduce an auxiliary variable $J$ that maintains this sum of the gradients, and we let it be updated asynchronously (11): at each iteration for task $i$, we subtract the old gradient for $i$ from $J$ to approximate the cavity sum $\\\\sum_{i'\\\\neq i} \\\\nabla_\\\\phi \\\\log p(\\\\theta_{i'}^a|\\\\phi)$ and add the new gradient $\\\\nabla_\\\\phi \\\\log p(\\\\theta_{i}^a|\\\\phi)$ to $J$. \\n\\n\\n> 6. Provide results of ablation study comparing against Gaussian confirming that such enrichment is actually useful.\\n\\nFig. 3 already shows a comparison between the LiFT Gaussian posterior approximation (K=1) vs. the LiFT GMM posterior approximation (K=3 and K=5). Clearly, GMM is better than Gaussian (K=1) for most cases.\\n\\n\\n> 7. Report means of the absolute deltas between the baseline and the candidate algorithm.\", \"we_report_the_mean_of_absolute_deltas_on_the_cls_45_task_split_as_follows\": \"| | No-Meta-Train | Union-Train | MAML | FO-MAML | i-MAML | Reptile |\\n|:----:|:----:|:----:|:----:|:----:|:----:|:----:|\\n| MAD score | 8.79 | 11.27 | 13.10 | 6.80 | 7.04 | 9.11 |\"}", "{\"summary\": \"This paper presents a novel hierarchical Bayesian model to tackle the cross-task PEFT knowledge transfer problem with meta-learning. 
The proposed LiFT approach introduces an SGLD-Gibbs sampling algorithm for efficient training and an online EM algorithm to maintain a Gaussian mixture representation of the posterior samples in an online manner. This method is evaluated on both NLP and vision tasks, and the results show improved performance over several baselines.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"The paper tackles the cross-task PEFT knowledge transfer problem, which is a relevant and important topic in meta-learning and fine-tuning large models.\", \"The paper is well written and well structured. The design concept is clear from the very beginning. The reader is guided through the method step-by-step and the theoretical foundations are supported by the experiments.\", \"The integration of Gibbs sampling into SGLD and the assumptions made to approximate the posterior (all the variables are visited a sufficient number of times and frequently) are valuable.\", \"The proposed approach can also be applied when pre-fine-tuned models are already available, which is valuable for knowledge reuse.\"], \"weaknesses\": [\"The meta-learning baselines considered in the experiments are limited. I recommend including more recent deterministic and Bayesian meta-learning approaches. Some examples, but not restricted to these, are SNAIL [2], ProtoNet [3], and MetaQDA [4].\", \"The results do not include standard deviations, making it difficult to assess whether the improvements with LiFT are statistically significant.\", \"While the work addresses several challenges, it also has some limitations, such as the joint training on the training tasks, which can be infeasible for large-scale datasets. The authors might consider adding a limitations section to acknowledge this constraint and any other potential challenges the method faces.\", \"It is unclear which architectures and hyperparameters are used for the meta-learning baselines. 
I assume that, for a fair comparison, ViT-B/16 was employed. However, MAML is generally inefficient when adapted to large feature extractors due to the difficulty of finding an optimal meta-initialization, which scales with the overall parameter space.\", \"A single inner step is used for MAML and its variant. While this reduces the computational cost, it is unclear if these results reflect the best possible performance for MAML. As noted in [1], MAML typically benefits from multiple inner loop updates.\"], \"references\": \"\\\\\\n[1] Han-Jia Ye, & Wei-Lun Chao (2022). How to Train Your MAML to Excel in Few-Shot Classification. In International Conference on Learning Representations. \\\\\\n[2] Mishra, N., Rohaninejad, M., Chen, X., & Abbeel, P. (2017). A simple neural attentive meta-learner. arXiv preprint arXiv:1707.03141.\\\\\\n[3] Snell, J., Swersky, K., & Zemel, R. (2017). Prototypical networks for few-shot learning. Advances in neural information processing systems, 30.\\\\\\n[4] Zhang, X., Meng, D., Gouk, H., & Hospedales, T. (2021). Shallow Bayesian meta learning for real-world few-shot recognition. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 651\\u2013660).\\\\\", \"questions\": [\"How does the choice of $\\\\sigma$ and $\\\\beta$ impact the model performance?\", \"In Figure 4 and the related discussion in line 742, it does seem that SGLD converges faster than SGLD-Gibbs especially for a low number of iterations. Could the authors clarify how to interpret this figure?\", \"Could the choice of parameters selected for meta-learning in the MAML-based baselines be explained in more detail?\", \"Could a comparison with MAML using 5 inner loops be provided to ensure that performance is not affected by the choice of a single inner loop? 
If conducting this experiment is not feasible due to time constraints, could an estimate of the time and computational resources required to complete it (based on the time taken for 1 inner loop) be provided?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you for the rebuttal\", \"comment\": \"Thank you for the detailed response and clarification. My concerns are well-addressed and I have changed the score accordingly.\"}", "{\"title\": \"Post rebuttal response\", \"comment\": \"I thank the authors for their detailed response that addresses my concerns. I will raise my score accordingly.\"}", "{\"title\": \"Thank you\", \"comment\": \"Thank you for the clarifications and additional experiments. All my doubts have been addressed. I raised my score.\"}", "{\"title\": \"Thank you for your valuable feedback!\", \"comment\": \"> 1. Missing critical related works (8 papers [1-8]).\\n\\nIn response to the reviewer\\u2019s suggestion, we have made an empirical comparison with the recent method LITE [6]. The rest of the models suggested are all tightly coupled with specific backbone network architectures, e.g., CNAPs [2], and/or assume closed-set classification problems, e.g., [7]. As such they are not straightforwardly applicable to our main task of the LoRA PEFT framework for adapting LLMs in open-ended text generation. But as we see that they are important recent meta-learning algorithms, we have cited them in our revised paper (Please see the extended related work section in Appendix D, the second paragraph).\\n\\nLITE [6] (memory-efficient meta-learning) is generally not tied to a specific network architecture, and as suggested therein, the ProtoNet meta learner can be applied. To this end, we ran ProtoNet-LITE on the VTAB benchmark since ProtoNet typically assumes small-way classification problems, not adequate for LLM\\u2019s large vocabulary output cardinality. 
LITE [6] essentially aims to represent the prototype vectors using the entire support data (instead of a minibatch), but to remedy computational overhead, they set a randomly chosen large portion of the support data as non-backpropagatable. \\n\\nThe results are shown in the following table (the full scores for individual tasks are shown in Table 2 of the revised paper).\\n\\n| | Non-natural -> Natural | Non-special -> Special | Non-structured -> Structured |\\n|:----:|:----:|:----:|:----:|\\n| No Meta Train | 78.0 | 84.3 | 54.4 | \\n| Union Train | 79.0 | 84.5 | 55.2 | \\n| MAML | 79.6 | 85.1 | 55.5 | \\n| FO-MAML | 79.1 | 84.8 | 55.4 | \\n| i-MAML | 79.2 | 85.0 | 55.9 | \\n| Reptile | 79.2 | 84.2 | 55.4 | \\n| ProtoNet | 79.8 | 84.3 | 55.2 | \\n| ProtoNet-LITE | 79.2 | 84.4 | 55.3 | \\n| LiFT K=1 (Ours) | 80.6 | 85.6 | 58.3 | \\n| LiFT K=3 (Ours) | 80.6 | 85.7 | 59.1 | \\n| LiFT K=5 (Ours) | 80.6 | 86.3 | 59.9 | \\n\\n\\n> 2. Need to consider more recent baselines, and more effective meta-learning baselines (3 papers [1,2,3]).\\n\\n[3] aims to learn a task-specific set of sparse masks for the new parameters added to the pre-trained ones while the new parameters are shared across the tasks (task-agnostic). Since they used the full ViT backbone, and we are using the LoRA PEFTs, it is difficult to compare the two approaches directly. Only ViT-small was tested in [3] (sparse interpolation experts), and we also have ViT-B experiments. \\n\\nInstead we have implemented and run [2] bootstrapped-MAML in our Crossfit experimental framework with LoRA PEFT as the parameters to learn. In bootstrapped MAML, the target model is first obtained by applying SGD updates on the train data and then the batch data sequentially. The meta-learning loss is then the KL divergence between the predictive distributions of the train-data-updated model and the target model, where the latter is treated as constant (no backprop). 
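To make the bootstrapped target construction just described concrete, here is a minimal scalar sketch (illustrative only, not the implementation in [2] or in our codebase): it uses a 1-D quadratic loss and a squared-distance divergence in place of the KL term, and all names and numbers are hypothetical stand-ins.

```python
# Scalar toy of bootstrapped MAML: the meta-loss pulls the train-adapted
# parameter toward a target produced by extra SGD step(s) on the batch data,
# with the target treated as a constant (no backprop through it).
LR = 0.1

def sgd_step(w, target_mean, lr=LR):
    # one SGD step on the quadratic loss (w - target_mean)^2
    return w - lr * 2.0 * (w - target_mean)

w = 0.0                                  # meta-parameter (stand-in for the PEFT weights)
m_train, m_batch = 1.0, 2.0              # stand-ins for the train and batch data

w_adapted = sgd_step(w, m_train)         # inner update on the train data
w_target = sgd_step(w_adapted, m_batch)  # bootstrap: extra step on the batch data
                                         # (stop-gradient: held constant below)

meta_loss = (w_adapted - w_target) ** 2  # squared distance in place of the KL
# gradient w.r.t. w flows only through w_adapted; d(w_adapted)/dw = 1 - 2*LR
grad_w = 2.0 * (w_adapted - w_target) * (1.0 - 2.0 * LR)
```

The key structural point is the stop-gradient on `w_target`: the meta-gradient is taken only through the adapted model, exactly as in the KL-based loss described above.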
\\n\\nComparison with \\u201cbootstrapped-maml\\u201d [2] on Crossfit CLS-45:\\n\\n| | MAML | Bootstrapped MAML | LiFT (K=1) | LiFT (K=3) | LiFT (K=5) | \\n|:----:|:----:|:----:|:----:|:----:|:----:|\\n| CLS-45 | 52.68 | 55.02 | 62.96 | 63.37 | 64.12 | \\n\\nWe have added this result in the revised paper (Appendix E.6, Table 12).\\n\\n\\n> 3. Scalability to large-scale LLM (e.g., more than 1B param).\\n\\nOur current results are already applied on ViT-B, as mentioned above, which is already larger scale than many meta-learning papers\\u2019 experiments. Nevertheless, we agree that it is interesting to explore scaling further. \\n\\nWe have now run our algorithm on an even larger LLM, FLAN-T5-XL (3B), and the results on the Crossfit CLS-45 task split are summarized as follows. We ran all methods on a single A100 80GB GPU, where iMAML (and its mixtures) and BMAML baselines incurred OOM.\\n\\n| CLS-45 | No-Meta-Train | Union-Train | MAML | FO-MAML | Reptile | LiFT K=1 (Ours) | LiFT K=3 (Ours) | LiFT K=5 (Ours) | \\n|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|\\n| FLAN-T5-XL (3B) | 65.13 | 73.41 | 73.74 | 74.22 | 69.39 | 78.37 | 79.69 | 80.01 | \\n| BART-base | 57.34 | 57.25 | 52.68 | 59.38 | 57.59 | 62.96 | 63.37 | 64.12 | \\n\\nThe result shows that our LiFT algorithm is scalable to an LLM with 3B parameters, where it continues to show large improvements over the baselines. Thanks for suggesting this experiment. We have added this result in our revised paper (Appendix E.1 and Table 6).\\n\\n\\n> 4. How about \\\"jointly learning the source task\\\" without a Bayesian approach?\\n\\n\\u201cJointly learning the source task without a Bayesian approach\\u201d \\u2013 This is exactly what \\u201cunion-training\\u201d does, and it underperforms ours by a large margin.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Spotlight)\"}", "{\"title\": \"To Reviewer **p6L8**\", \"comment\": \"We hope that you have had a chance to read our rebuttal. 
We would really appreciate it if you let us know if we answered your questions and concerns properly before the discussion period ends.\"}", "{\"title\": \"Part 2\", \"comment\": \"> 4. Existing works on online EM algorithm.\\n\\nWe realize that there have been many prior works on online EM methods in the literature. We have cited them along with the papers suggested by the reviewer in the extended related work section Appendix D in the revised paper. Although we find that none of them are identical to the one proposed in our paper, most are similar in nature to ours (e.g., exploiting the recursive structure of the EM). Hence we believe that these prior online EM methods can be employed in our online posterior estimation to be equally successful.\\n\\nThe first paper (Hosseini and Sra 2020) suggested by the reviewer is actually not very relevant to our work, since they rather proposed an SGD learning of the GMM on a Riemannian manifold, showing that it outperformed the conventional EM algorithm. Hence it is not closely related to online EM. \\n\\nIn the second paper (Karimi, Belhal, et al. 2019), they analyzed incremental and stochastic versions of the EM algorithm as well as extending the variance reduction technique in a common unifying framework. \\n\\n\\n> 5. Why do you choose a GMM to model $p(\\\\phi | \\\\{D_i\\\\}_{i=1}^N)$?\\n \\nBecause the posterior distribution is expected to be multi-modal. Various recent papers on the mixtures of PEFTs (as cited in our paper) rely on the same idea of the effectiveness of the multiple underlying PEFT prototypes. \\n\\n\\n> 6. If the number of modes is equal to the number of tasks $N$, each mode of $\\\\phi | \\\\{D_i\\\\}_{i=1}^N$ can specialize to one task. In this case you would have something very similar to a stochastic version of the mixtures of LoRA idea. How do you prevent this from happening?\\n\\nWe pre-specify the number of clusters, much smaller than $N$, as an inductive bias. 
If the number of clusters becomes equal to $N$, we agree that the model would overfit and not extract any task-agnostic information. But this would not be a reasonable hyperparameter to set as firstly it would be slower to run; and secondly \\u2013 as per any mixture model \\u2013 the number of mixture components should be much less than the number of data (=tasks in our case).\\n\\nOverall there is a tradeoff. With a uni-modal posterior for $\\\\phi$ the hierarchical model extracts fully task-agnostic information, but it may underfit by forcing dissimilar tasks to be too similar (e.g., LiFT K=1 or ABML). With a multi-modal posterior and $K=N$ there may be no extraction and transfer of task-agnostic knowledge. By learning a multi-modal posterior with $1<K<N$ we group similar tasks into common clusters and share knowledge between them, while requiring no knowledge transfer between dissimilar tasks in different clusters that would lead to negative transfer/underfitting. \\n\\nThis high-level notion of clustering similar tasks and enforcing similarity between tasks in the same cluster was also used in other related work that we compare against such as LoRA-Mixture and MAML-mixtures (Tab 1). \\n\\n\\n> 7. After test-time adaptation, is the model output stochastic or deterministic? \\n\\nWe use the last sample $\\\\theta_*^a$ from the SGLD run Eq.(18) in the inference. \\n\\n\\n> 8. Why don't you provide confidence intervals for the LiFT results?\\n\\nWe have also collected 4 more SGLD samples $\\\\theta_*^a$ from Eq.(18) in addition to the last one. Using these 5 samples we have taken the MC average during the test-time predictive distribution computation. This gives us some confidence intervals. For the CLS-45, K=5 LiFT, we have **$63.81 \\\\pm 0.40$** (64.12 reported in the paper using the last SGLD sample). \\n\\n\\n> 9. 
What is the difference between warm-up samples and burn-in samples for the SGLD Gibbs sampling?\\n\\nThe warm-up stage means that, before starting to run SGLD steps, we perform regular $\\\\theta$ training (without $\\\\phi$ updates). After the warm-up stage, we have the burn-in stage in which we start running the SGLD steps Eq.(9,10,11), but do not collect the samples (and no posterior GMM maintenance). After the burn-in stage, we start collecting posterior samples and build/update the GMM.\"}" ] }
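As a concrete illustration of the warm-up/burn-in/collection schedule and the asynchronous $J$ update (Eq. 9-11) described in the responses above, here is a minimal pure-Python sketch; the scalar "gradients" and all names are hypothetical stand-ins, not the actual implementation:

```python
import random

random.seed(0)
N = 4                                   # number of tasks
WARMUP, BURNIN, TOTAL = 20, 40, 100     # schedule boundaries (iterations)

grads = [0.0] * N                       # last stored per-task gradient (scalar stand-in)
J = 0.0                                 # running sum of the per-task gradients
samples = []                            # posterior samples kept after burn-in

for step in range(TOTAL):
    if step < WARMUP:
        continue                        # warm-up: plain training, no SGLD / no J updates
    i = step % N                        # Gibbs-style visit of task i
    g_new = random.gauss(0.0, 1.0)      # stand-in for grad_phi log p(theta_i^a | phi)
    J = J - grads[i] + g_new            # cavity sum for task i, plus its fresh gradient
    grads[i] = g_new
    if step >= BURNIN:
        samples.append(g_new)           # collection stage: keep samples (GMM update goes here)

# invariant: J always equals the sum of the most recent per-task gradients
assert abs(J - sum(grads)) < 1e-9
assert len(samples) == TOTAL - BURNIN
```

The swap `J - grads[i] + g_new` is what makes the $\phi$ update O(1) per iteration instead of O(N): the full sum over tasks is never recomputed, only task $i$'s contribution is refreshed.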
7nWKBRQuLT
GeVLM: 3D Object Grounding with Geometry-enhanced Vision Language Model
[ "Weijia Ai", "Chao Zhang", "Guangzhi Sun" ]
Understanding 3D scenes with point cloud data in tasks such as object referencing, question-answering, and captioning poses significant challenges to vision language models (VLMs), due to the complexity of integrating both linguistic and spatial information. While existing methods have mapped point cloud features into LLM space to enable 3D scene comprehension, they often overlook viewpoint information and the relative spatial distance between objects, which can lead to confusion in interpreting spatial descriptions and grounding objects. This paper presents a geometry-enhanced vision LM (GeVLM) to address these challenges. Specifically, we propose viewpoint-consistent position encoding (VCPE) and distance-aware cross-entropy (DACE) loss, which enhance the model's ability to interpret relative spatial relationships agnostic to camera viewpoint and incorporate distance information in the label space. We additionally introduce the DetailedScanRefer dataset, which provides identifiers and spatial annotation for each object mentioned in the referencing description to further emphasize spatial relationships. GeVLM demonstrates significant improvements over the Chat-3D v2 baseline, particularly with 4.0\% and 2.7\% absolute increases in Acc@0.25 and Acc@0.50 respectively on the ScanRefer benchmark.
[ "3D object grounding", "visual large language model", "viewpoint consistency" ]
https://openreview.net/pdf?id=7nWKBRQuLT
https://openreview.net/forum?id=7nWKBRQuLT
ICLR.cc/2025/Conference
2025
{ "note_id": [ "uSuaDQ9Min", "tvM7nW5aFc", "q9bVfzALOa", "fDGeIJ4eqX", "bjXddklDQb", "YvS3RUllTm", "UnLJ13nHO5", "S5L2UKTh1U", "HvC9f1wLnF", "HuTEErZhBF", "ELZHbxuDSh", "DPoiMLYwWw", "BWLbuN6n6d", "AD8sfCBKUs", "9v3jfPMTKb", "2gtryPk6sm", "29LJrfZgvq", "0fhf8dIRg1" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732181327177, 1732181964491, 1730550111214, 1732183208896, 1734286622129, 1732181159730, 1732183088505, 1732183162504, 1730531094321, 1730392828485, 1732183037659, 1732181589919, 1733195870286, 1729311826084, 1732520014727, 1732181879800, 1732181101763, 1732181254263 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6271/Authors" ], [ "ICLR.cc/2025/Conference/Submission6271/Authors" ], [ "ICLR.cc/2025/Conference/Submission6271/Reviewer_wme5" ], [ "ICLR.cc/2025/Conference/Submission6271/Authors" ], [ "ICLR.cc/2025/Conference/Submission6271/Authors" ], [ "ICLR.cc/2025/Conference/Submission6271/Authors" ], [ "ICLR.cc/2025/Conference/Submission6271/Authors" ], [ "ICLR.cc/2025/Conference/Submission6271/Authors" ], [ "ICLR.cc/2025/Conference/Submission6271/Reviewer_C7XU" ], [ "ICLR.cc/2025/Conference/Submission6271/Reviewer_1oXE" ], [ "ICLR.cc/2025/Conference/Submission6271/Authors" ], [ "ICLR.cc/2025/Conference/Submission6271/Authors" ], [ "ICLR.cc/2025/Conference/Submission6271/Reviewer_wme5" ], [ "ICLR.cc/2025/Conference/Submission6271/Reviewer_YvFj" ], [ "ICLR.cc/2025/Conference/Submission6271/Reviewer_1oXE" ], [ "ICLR.cc/2025/Conference/Submission6271/Authors" ], [ "ICLR.cc/2025/Conference/Submission6271/Authors" ], [ "ICLR.cc/2025/Conference/Submission6271/Authors" ] ], "structured_content_str": [ "{\"title\": 
\"Response to Reviewer wme5 Part 4/4\", \"comment\": \">Additionally, it appears that DetailedScanRefer does not contribute to performance improvement; clarification on this would be helpful.\\n\\nDetailedScanRefer was introduced to address the limited annotation of anchor objects in ScanRefer by providing explicit annotations that highlight these objects. This encourages the model to process spatial context rather than solely focusing on target identification. These additional details enrich the model\\u2019s comprehension of scene relationships and improve alignment with referring expressions.\\n\\nAs shown in Table 4 (comparing row 5 to row 6), the impact of DetailedScanRefer is most pronounced when used in combination with DACE. DACE reinforces spatial awareness by penalizing the model for failing to accurately recognize both the target and anchor objects.\\n\\nThis highlights that the combination of DetailedScanRefer and DACE significantly enhances the model\\u2019s effectiveness by promoting more precise spatial reasoning and anchor-object recognition, which are critical for tasks involving complex referring expressions.\\n\\n\\n>I would also like to see whether DACE remains effective without DetailedScanRefer.\\n\\nWe have added an ablation for **VCPE + DACE** in the table below in for clarity:\\n\\n| VCPE | Detailed | DACE | Unique (Acc\\\\@0.25) | Unique (Acc\\\\@0.50) | Multiple (Acc\\\\@0.25) | Multiple (Acc\\\\@0.50) | Overall (Acc\\\\@0.25) | Overall (Acc\\\\@0.50) |\\n|---------|----------|-----------|-------------------|-------------------|---------------------|---------------------|-------------------|-------------------|\\n| Rotate | -- | -- | 79.6 | 74.7 | 36.2 | 32.6 | 44.2 | 40.4 |\\n| Rotate | -- | $T$=0.05 | 79.5 | 73.7 | 37.9 | 33.7 | 45.6 | 41.1 |\\n\\nAs shown in the table, incorporating DACE without DetailedScanRefer leads to a noticeable performance improvement across several metrics, including increases in **Multiple Acc\\\\@0.25**, 
**Multiple Acc\\\\@0.50**, and overall metrics. This demonstrates the effectiveness of DACE, even when DetailedScanRefer is not utilized.\\n\\n\\n>No quantitative metric has been provided to assess the quality of the dataset generated by GPT-4 and Mask3D.\\n\\nWe have provided the quantitative metric regarding the quality of the dataset in appendix **A.4 DATASET QUALITY EVALUATION**.\\n\\nAdditionally, we have added a table summarizing key statistics and metrics for Mask3D's performance across both the training and validation splits. This includes counts for IoU \\u2265 0.25 and IoU \\u2265 0.50, as well as maximum IoU rates, to further illustrate the quality and comprehensiveness of the generated proposals.\\n\\n| Metric | Train Split Count | Validation Split Count |\\n|-----------------------------------------|-------------------|-----------------------|\\n| **Total Count (Original ScanRefer Dataset)** | 36,665 | 9,508 |\\n| **IoU \\u2265 0.25 Count** | 36,187 | 8,924 |\\n| **IoU \\u2265 0.50 Count** | 35,061 | 8,168 |\\n| **Max IoU\\\\@0.25** | 98.70% | 93.86% |\\n| **Max IoU\\\\@0.50** | 95.63% | 85.91% |\\n\\nDuring the training process, we utilize only 32,338 annotations that meet the strict criterion of IoU\\u22650.75 with ground truth objects. This high threshold ensures that only highly accurate object proposals are retained, emphasizing the precision and relevance of the dataset for effective downstream tasks.\"}", "{\"title\": \"Response to Reviewer C7XU Part 3/3\", \"comment\": \"**Strategies for Enhancing Dataset Robustness**\\n\\nTo ensure the correctness of the DetailedScanRefer dataset, we have implemented several stringent data validation steps:\\n\\n1. **Alignment with Ground-Truth IDs**: We ensured basic correctness by aligning the annotated objects with the ground-truth (GT) object IDs from the original ScanRefer dataset. Specifically, we removed samples where the first object ID provided by GPT-4o did not align with the target object ID. 
This step ensures that at least the target object is correctly recognized, enhancing the reliability of the annotations.\\n\\n2. **Data Filtering**: We filtered out entries containing invalid IDs, NaN values, or IDs exceeding valid ranges. This process eliminates erroneous data points, thereby enhancing overall data quality.\\n\\n3. **Annotation Reliability Assessment**: We assessed annotation reliability using the GPT-4 API's rating system, as demonstrated in Table 7 of Appendix A. This assessment provides a quantitative measure of annotation confidence and accuracy.\\n\\nTo ensure the robustness of the DetailedScanRefer dataset, we have implemented the following strategies:\\n\\n1. **Use Photos Instead of Rendered Images**: Instead of relying on rendered images, we utilize real-world photos, offering significantly higher quality. The comparison between rendered images and real photos can be found in Appendix 1 in our paper. When using rendered images, our data cleaning process yielded 9,659 samples. However, by switching to real photos and applying the same data cleaning process, we ended up with 16,151 data points, demonstrating the superior effectiveness and richness of using real-world photos.\\n\\n2. **Careful Prompt Design**: We meticulously crafted prompts to ensure that annotations were generated only when the GPT-4o model confidently recognized objects in the descriptions. This careful design minimizes ambiguity and enhances the quality of the generated annotations.\\n\\nBy implementing these strategies, we have significantly enhanced the robustness of the DetailedScanRefer dataset and minimized potential errors in object ID assignments by GPT-4o. Our approach not only improves the model's grounding performance by effectively leveraging anchor descriptions but also ensures the dataset's reliability for future research.\\n\\nWe believe that these additional explanations and evidence address your concerns regarding the dataset's utility and robustness. 
We remain committed to further refining our methods and exploring new strategies to maximize the dataset's effectiveness in advancing grounding performance.\\n\\nThank you once again for your insightful and valuable feedback. \\n\\n\\n**References**\\n[1] H. Durrant-Whyte and T. Bailey, \\\"Simultaneous localization and mapping: part I,\\\" in IEEE Robotics & Automation Magazine, vol. 13, no. 2, pp. 99-110, June 2006, doi: 10.1109/MRA.2006.1638022.\\n\\n[2] Chen, D.Z., Chang, A.X., Nie\\u00dfner, M. (2020). ScanRefer: 3D Object Localization in RGB-D Scans Using Natural Language. In: Vedaldi, A., Bischof, H., Brox, T., Frahm, JM. (eds) Computer Vision \\u2013 ECCV 2020. ECCV 2020. Lecture Notes in Computer Science, vol 12365. Springer, Cham. https://doi.org/10.1007/978-3-030-58565-5_13\"}", "{\"summary\": \"This work begins by emphasizing the importance of viewpoint in grounding and introduces a geometry-enhanced vision-language model. It includes the VCPE and DACE modules, which address ambiguity caused by varying viewpoints and the positional information neglect in cross-entropy loss, respectively. Additionally, this work utilizes GPT-4 to generate the DetailedScanRefer dataset to support training, enabling the model to achieve substantial improvements over the baseline.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The motivation of this paper is clear, enhancing LLM-based 3D visual grounding models from the perspectives of viewpoint and relative spatial relationships, which aligns well with human intuition.\\n2. This paper introduces the integration of viewpoint information into the LLM-based paradigm and designs a loss function that better aligns with the task. 
Additionally, it presents a fine-grained dataset, rich in content, which makes a commendable contribution to the community.\\n3. The method proposed in this paper achieves significant improvements over the baseline.\", \"weaknesses\": \"1. The experimental section lacks a sufficiently comprehensive comparison. In addition to the baseline and other LLM-based models, comparisons should also be made with models specifically designed for this task to better demonstrate the overall performance of the proposed model in this context.\\n2. Since this method uses viewpoint information as model input, it is important to clarify whether other comparison models also include or utilize this information to ensure fairness. This is particularly relevant for works like ConcreteNet (\\u201cFour Ways to Improve Verbo-visual Fusion for Dense 3D Visual Grounding,\\u201d ECCV 2024), which already explores viewpoint information. A comparison with this model's use of viewpoint information would be valuable.\\n3. From the ablation study shown in Table 4, it is observed that most of the performance gains are concentrated in the DACE module. It remains unclear whether similar improvements could be achieved using only \\\"world + DetailedScanRefer + DACE.\\\" The necessity of incorporating viewpoint information is not well substantiated. Additionally, it appears that DetailedScanRefer does not contribute to performance improvement; clarification on this would be helpful. I would also like to see whether DACE remains effective without DetailedScanRefer.\\n4. No quantitative metric has been provided to assess the quality of the dataset generated by GPT-4 and Mask3D.\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \">The way to integrate view-point information seems too incremental and has limited novelty. 
It simply utilize position embedding then use self-attention.\\n\\nWhile we utilize standard modules such as *position embeddings* and *self-attention* mechanisms, the key innovation lies in the position encoding itself. Unlike traditional positional encodings, VCPE incorporates rotation transformations to condition positional information based on the observer's viewpoint.\\n\\nThis representation has not been explored in prior work and offers a theoretically sound method for capturing spatial relationships from various perspectives. By transforming object coordinates according to the observer's viewpoint, our approach ensures robustness to translations, allowing the model to better understand spatial relationships.\\n\\n>The gain of VCPE and the designed DetailedScanRefer dataset in the ablation study are too small to prove the effectiveness of the method.\\n\\nWe respectfully disagree with the assessment that the gains are too small to demonstrate the effectiveness of our approach. As shown in our ablation study (see Table below), incorporating VCPE and DACE consistently yields meaningful improvements across multiple metrics.\\n\\n- **Overall Performance Gains**: Our best model achieves an Overall Acc\\\\@0.25 of 46.9% and Acc\\\\@0.50 of 42.3%, marking significant improvements over the baseline Chat-3D v2* (42.9% and 39.6%, respectively).\\n\\n- **Unique and Multiple Targets**: The improvements are consistent across both the Unique and Multiple target subsets. Notably, in the Multiple Targets subset, we observe an increase in Acc\\\\@0.25 from 34.7% to 39.0%, which represents a substantial gain for this challenging task.\\n\\nIn the context of 3D grounding, even modest percentage improvements are highly significant due to the inherent complexity of the task. 
The consistent performance gains across various settings demonstrate the effectiveness and practical value of our method.\\n\\n>The paper lacks qualitative experimental analysis to prove that VCPE and DACE have learned the expected position information.\\n\\nWe would like to highlight that we have already provided several qualitative examples in the paper. Specifically, **Figure 3** and **Figure 6** demonstrate how VCPE and DACE contribute to learning and conveying the expected positional information. Additionally, we have dedicated a subsection in **Section 5.2** to qualitative analysis, which was originally titled 'Qualitative Examples' and has now been revised to 'Qualitative Analysis' for clarity.\\n\\nWe recognize the importance of qualitative analysis in supporting our findings, and we will ensure that these sections are prominently emphasized in the revised version of the paper for greater clarity and visibility.\", \"title\": \"Response to Reviewer YvFj\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"title\": \"Response to Reviewer wme5 Part 2/4\", \"comment\": [\">Since this method uses viewpoint information as model input, it is important to clarify whether other comparison models also include or utilize this information to ensure fairness. This is particularly relevant for works like ConcreteNet (\\u201cFour Ways to Improve Verbo-visual Fusion for Dense 3D Visual Grounding,\\u201d ECCV 2024), which already explores viewpoint information. A comparison with this model's use of viewpoint information would be valuable.\", \"**Novel Use of Viewpoint Information**: To the best of our knowledge, current unified 3D LLMs capable of performing grounding tasks, such as 3D-LLM and Chat-3D v2, do not incorporate viewpoint-related information, specifically camera poses. 
Our approach introduces a novel aspect by leveraging rotated coordinates to effectively address viewpoint-dependent tasks, particularly in 3D grounding task.\", \"**Comparison with ConcreteNet**: We appreciate the mention of ConcreteNet, which incorporates viewpoint information using a different approach - a Global Camera Token that encodes camera-related features, supervised by actual camera positions. Interestingly, an ablation study in ConcreteNet showed that incorporating rotation information into this token resulted in a performance drop, which is counterintuitive given the expectation that rotation awareness would enhance understanding of viewpoint-dependent tasks.\", \"**Efficient Inference Through Rotated Coordinates**: ConcreteNet employs multiple rotated views of the scene to make predictions, refining mask outputs based on aggregated predictions from these views. In contrast, our approach directly utilizes rotated coordinates, bypassing the need for translation vectors or ensembling strategies. This results in a more computationally efficient inference process.\", \"**Key Differences and Our Approach's Rationale**: Unlike ConcreteNet, which emphasizes *translation* vectors in its viewpoint-related features, our approach focuses on *rotation* matrices. As shown in Table 5 of the paper, incorporating rotation matrices resulted in improved performance compared to using camera coordinates. We attribute this improvement to the alignment provided by rotation matrices, as they effectively account for orientation changes in a way that translation vectors alone cannot. While *translation* adjusts only the scene\\u2019s position along the axes, *rotation* fundamentally transforms how view-dependent descriptions are interpreted, leading to a more robust understanding of spatial relationships.\"]}", "{\"title\": \"Response to Reviewer 1oXE Part 2/3\", \"comment\": \">The improvement on SQA, a benchmark focusing on viewpoint, is weak. 
Since this paper mainly focuses on improving the view understanding of the LLM, the results are not compelling enough.\\n\\nWe would like to clarify the following points:\\n\\n- **Lack of Explicit Viewpoint Annotations in ScanQA**: \\nThe ScanQA dataset does not provide explicit viewpoint annotations or record camera poses during data collection. Annotators used an interactive 3D scene viewer to freely navigate and observe scenes from multiple angles without documenting specific viewpoints.\\nIn datasets like **ScanRefer**, each description is linked to a specific camera viewpoint. **SQA3D** explicitly states the viewpoint in the initial sentence of each description. It is infeasible to reconstruct or infer the annotators' viewpoints in ScanQA, unlike our approach with the **Scan2Cap** dataset.\\n\\n- **Limited Viewpoint-Related Content**:\\nThe dataset's documentation does not specify how many questions are viewpoint-related. Many questions focus on viewpoint-invariant attributes such as object color or quantity (e.g., \\\"What color is the plastic clothes hanger?\\\" or \\\"How many armchairs are there?\\\"). Incorporating viewpoint information in these cases does not provide additional benefits.\\n\\n- **Dataset Constraints Affecting Results**:\\nThe modest improvements observed are not indicative of our method's effectiveness but reflect the constraints imposed by the dataset itself. Our viewpoint-aware approach demonstrates benefits on datasets where viewpoint information is available.\\n\\n\\nWe hope this clarifies the performance results on ScanQA and underscores the effectiveness of our approach when appropriate viewpoint information is available.\\n\\n>In line 418, 'the task prioritizes object semantics over spatial location, further diminishing the effectiveness of the DACE loss.' 
the paper does not give any evidence to support this claim.\\n\\n- As noted in the caption of **Table 2** in our paper, the Multiple Targets (MT) task involves scenarios where a scene contains multiple instances of the target object. Unlike single-target tasks that require distinguishing a specific object based on spatial relationships to surrounding objects, the MT task focuses on identifying all instances of a target object within the scene. This shifts the emphasis toward object semantics derived from the language description (e.g., object type and attributes) rather than spatial location or proximity. Therefore, the task inherently prioritizes object semantics over spatial information, which explains why spatial differentiation mechanisms like the DACE loss may be less effective.\\n- The DACE loss is defined in **Equations (4)** and **(5)** of our paper (please refer to these equations for detailed formulation). The DACE loss is designed to enhance fine-grained spatial differentiation by assigning higher importance to closer objects and penalizing distant ones. In the context of the MT task, where the goal is to recognize all relevant objects regardless of their spatial relationships, the spatial emphasis introduced by the DACE loss does not align with the task objectives. This misalignment can lead to diminished performance, as observed in our results in Table 2, because the loss function adds unnecessary spatial constraints to a task that primarily relies on object semantics.\\n\\n>Although the method surpasses the baseline, it also uses extra annotations. No ablations can be used to decide which aspect contributes to the performance gain. 
For example, the setting in which only VCPE + DACE is used and no detailed ScanRefer annotations are added.\\n\\nWe have added a comparison row for **VCPE + DACE** to the ablation table in the paper, reproduced below for clarity:\\n\\n| VCPE | Detailed | DACE | Unique (Acc\\\\@0.25) | Unique (Acc\\\\@0.50) | Multiple (Acc\\\\@0.25) | Multiple (Acc\\\\@0.50) | Overall (Acc\\\\@0.25) | Overall (Acc\\\\@0.50) |\\n|---------|----------|-----------|-------------------|-------------------|---------------------|---------------------|-------------------|-------------------|\\n| Rotate | -- | -- | 79.6 | 74.7 | 36.2 | 32.6 | 44.2 | 40.4 |\\n| Rotate | -- | $T$=0.05 | 79.5 | 73.7 | 37.9 | 33.7 | 45.6 | 41.1 |\\n\\nAs shown in the table, incorporating DACE without DetailedScanRefer results in a noticeable performance improvement across multiple metrics, including an increase in **Multiple Acc\\\\@0.25**, **Multiple Acc\\\\@0.50**, and overall metrics, demonstrating the effectiveness of DACE even in the absence of DetailedScanRefer.\"}", "{\"title\": \"Response to Reviewer 1oXE Part 3/3\", \"comment\": \"**Questions:**\\n\\n>Question 1: Is the view information only used during training? Since we can not access the ground-truth view information, this hinders the wide application of the proposed method.\\n\\n- The view information, represented by the rotation matrix of the camera pose, is used during both training and inference.\\n\\n- In real-world applications, many modern systems support real-time camera calibration and pose estimation through techniques such as Simultaneous Localization and Mapping (SLAM) [1]. By integrating these technologies, our approach ensures consistent viewpoint information, even in dynamic environments where exact camera poses may fluctuate.\\n\\n- Furthermore, slight inaccuracies in camera pose estimation do not significantly affect the understanding of relative object positions. 
In particular, inaccuracies in translation have minimal impact when rotation matrices are used to focus on the orientation. Our approach leverages rotation matrices to maintain consistency in relative object positions across different viewpoints. This ensures that VCPE remains robust to minor variations or inaccuracies in camera translations, while preserving critical spatial relationships through rotational alignment.\\n\\n\\n>Question 2: Some metrics, such as the ROUGE-L and EM on the ScanQA dataset, are not reported.\\n\\nThank you for bringing this to our attention. We have updated **Table 3** in the revised paper to include these metrics.\\n\\n> Question 3: Why are \\\"near\\\" and \\\"far\\\" considered viewpoint-related questions with potential ambiguities in line 519? The distance between objects will remain constant from the given viewpoint.\\n\\nThe terms \\\"near\\\" and \\\"far\\\" can be relative to the camera position. For example, in a description like \\\"This chair is at the far end of the room by the fireplace,\\\" the phrase \\\"far end\\\" becomes inherently ambiguous, as its meaning depends on the observer's perspective. The chair referred to will vary depending on the camera's position within the room.\\n\\n>Question 4: Why is 'Chat-3D v2' marked with * in Tables 1, 3, and 4 but not in Table 2? Does this denote reproduced results?\\n\\nThank you for pointing this out. We had missed the * in Table 2; it has been added in the revised version of the paper.\"}", "{\"summary\": \"This paper proposes to enhance 3D LLM by incorporating information about 3D viewpoints and relative spatial distances. Besides, they create the DetailedScanRefer dataset with grounding annotations for each object described. The experimental results show the improvement over baseline methods.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The motivation for incorporating viewpoint and spatial distance is reasonable.\\n2. 
The DetailedScanRefer dataset with fine-grained grounding annotations is valuable for future research in 3D object grounding.\\n3. The experiments demonstrate the effectiveness of the proposed method.\", \"weaknesses\": \"1. The VCPE module's reliance on camera viewpoints as an additional input may pose challenges for real-world applications due to the difficulty of acquiring such data.\\n2. The spatial distance-aware loss function appears to overlook semantic similarities between objects. Objects that are spatially close but semantically different from the target should incur a greater penalty than those that are semantically similar but spatially distant. A more effective approach might involve incorporating both spatial and semantic distances into the cross-entropy loss.\\n3. The utility of the newly created dataset for enhancing grounding performance is questionable, as indicated by the ablation results in Table 4. Additional evidence supporting the dataset's importance would be beneficial, along with strategies to enhance its robustness, especially considering potential errors in object ID assignments by GPT-4o.\", \"questions\": \"Please refer to the weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper points out that many 3D LLMs often do not consider the viewpoint information and the relative spatial distance between objects. It introduces a method for 3D scene understanding using LLM that mainly focuses on incorporating 3D viewpoint information, improving positional encoding, implementing distance-aware cross-entropy loss, and enhancing the dataset's quality. The model trained using the proposed method improves upon the baseline by clear margins.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. This paper notices an interesting aspect that other 3D LLMs may overlook. 
The corresponding method is specifically tailored to tasks and successfully tackles the identified problems, as shown by the ablation studies.\\n2. The proposed method is orthogonal to the existing methods and can be used to improve other methods.\", \"weaknesses\": \"1. The improvement on the Scan2Cap dataset is weak, although the author incorporates extra information for captioning, and the proposed method may have a negative impact. Some recent baselines, for example, LEO[1] and LL3DA[2], are not compared. In line 453, the caption ability is mentioned, but no results are provided.\\n2. The improvement on SQA, a benchmark focusing on viewpoint, is weak. Since this paper mainly focuses on improving the view understanding of the LLM, the results are not compelling enough.\\n3. In line 418, 'the task prioritizes object semantics over spatial location, further diminishing the effectiveness of the DACE loss.' The paper does not give any evidence to support this claim.\\n4. Although the method surpasses the baseline, it also uses extra annotations. No ablations can be used to decide which aspect contributes to the performance gain. For example, the setting in which only VCPE + DACE is used and no detailed ScanRefer annotations are added.\\n\\n[1] https://arxiv.org/abs/2311.12871\\n[2] https://arxiv.org/abs/2311.18651\", \"questions\": \"1. Is the view information only used during training? Since we can not access the ground-truth view information, this hinders the wide application of the proposed method.\\n2. Some metrics, such as the ROUGE-L and EM on the ScanQA dataset, are not reported.\\n3. Why are \\\"near\\\" and \\\"far\\\" considered viewpoint-related questions with potential ambiguities in line 519? The distance between objects will remain constant from the given viewpoint.\\n4. Why is 'Chat-3D v2' marked with * in Tables 1, 3, and 4 but not in Table 2? 
Does this denote reproduced results?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer 1oXE Part 1/3\", \"comment\": \"We thank the reviewer for the constructive suggestions, and we would like to address the concerns as follows.\\n\\n\\n**Weaknesses:**\\n\\n> Weakness 1: The improvement on the Scan2Cap dataset is weak, although the author incorporates extra information for captioning, and the proposed method may have a negative impact. \\n\\nWe acknowledge the concern regarding the seemingly modest improvement on the Scan2Cap dataset. However, we'd like to clarify that our modifications have introduced a more challenging, viewpoint-aware captioning task, which significantly affects evaluation metrics and comparability with existing models.\\n\\n1. **Viewpoint-Aware Augmentation**: We augmented the Scan2Cap dataset by reintroducing camera poses from the original ScanRefer dataset, making it viewpoint-aware. Each prompt is now associated with a specific viewpoint and corresponds to a single unique reference caption. This adds spatial context to the captioning task, requiring models to generate descriptions accurate from particular perspectives within the scene.\\n\\n2. **Stricter Evaluation Protocol**: Our augmentation leads to a more stringent evaluation process. Unlike the original dataset, where evaluation metrics are averaged over multiple reference captions per prompt, we evaluate the model's output against a single, specific reference caption tied to a viewpoint. This makes direct comparison with models trained on the original Scan2Cap dataset inappropriate, as the tasks are fundamentally different in difficulty.\\n\\n3. **Competitive Performance Despite Increased Difficulty**: Despite the heightened challenge, our model demonstrates competitive performance, highlighting the effectiveness of incorporating viewpoint information. 
We believe this advancement aligns more closely with real-world applications where an observer's perspective is crucial. \\n\\n\\nWe have provided detailed explanations of our dataset augmentation, evaluation implications, and results in **Appendix B** in our paper.\\n\\n\\n>Some recent baselines, for example, LEO[1] and LL3DA[2], are not compared. \\n\\nNote that upon reviewing these works, we found that both LEO and LL3DA are not able to perform 3D grounding tasks.\\n\\n#### ScanQA Metrics\\n\\n| System | B1 | B2 | B3 | B4 | M | C | R | EM |\\n|------------|--------|--------|--------|--------|--------|--------|--------|--------|\\n| LL3DA | -- | -- | -- | 13.5 | 15.9 | 76.8 | 37.3 | -- |\\n| LEO | -- | -- | -- | 13.2 | 20.0 | 101.4 | 49.2 | 24.5 |\\n| GeVLM | 42.4 | 28.7 | 21.3 | 15.4 | 18.1 | 90.5 | 41.8 | 21.7 |\\n\\n\\n#### SQA3D Metrics\\n\\n| System | What | Is | How | Can | Which | Others | Avg |\\n|------------|--------|--------|--------|--------|--------|--------|--------|\\n| LEO | -- | -- | -- | -- | -- | -- | 50.0 |\\n| GeVLM | 44.1 | 68.6 | 52.3 | 62.7 | 45.6 | 55.8 | 53.5 |\\n\\n\\nOur model outperforms LL3DA on the ScanQA task, but LL3DA is not applicable to the SQA3D task. Regarding LEO, we outperform it on the SQA3D task but observe slightly lower performance on the ScanQA task for some metrics. We have updated our paper to include comparisons with these models on the ScanQA and SQA3D tasks in Table 3.\\n\\n>In line 453, the caption ability is mentioned, but no results are provided.\\n\\nAs stated in line 362, we have provided results for Scan2Cap in Appendix B. 
We included further clarification in Section 5.2 at the suggested place.\"}", "{\"title\": \"Response to Reviewer C7XU Part 1/3\", \"comment\": \"We sincerely thank the reviewer for their insightful feedback and constructive suggestions, and we will address the identified weaknesses in detail below.\\n\\n>The VCPE module's reliance on camera viewpoints as an additional input may pose challenges for real-world applications due to the difficulty of acquiring such data.\\n\\nWe would like to clarify the practicality of using viewpoint information as follows:\\n\\n- **Integration with Real-Time Camera Calibration Technologies**: Modern systems support real-time camera calibration and pose estimation using techniques such as Simultaneous Localization and Mapping (SLAM) [1]. By integrating these technologies, our approach ensures consistent and reliable viewpoint information, even in dynamic real-world scenarios where exact poses may vary. \\n- **Robustness to Pose Estimation Inaccuracies**: Slight inaccuracies in camera pose estimation do not significantly affect the understanding of relative positions (e.g. left-right relationships). Specifically, translation inaccuracies have minimal impact when rotation matrices are used exclusively. Our approach leverages rotation matrices to ensure that our Viewpoint-Consistent Position Encoding (VCPE) is robust to minor pose inaccuracies, preserving critical spatial relationships through rotational alignment.\\n\\n> The spatial distance-aware loss function appears to overlook semantic similarities between objects. Objects that are spatially close but semantically different from the target should incur a greater penalty than those that are semantically similar but spatially distant. A more effective approach might involve incorporating both spatial and semantic distances into the cross-entropy loss.\\n\\nThe standard cross-entropy loss inherently penalizes semantic discrepancies. 
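To ground this exchange, here is a toy sketch of a distance-aware cross-entropy in the spirit being discussed: the ground-truth object keeps full weight, while non-target candidates receive only a small, distance-decaying soft weight. The exponential weighting and all names here are our own illustrative assumptions, not the paper's exact Equations (4) and (5).

```python
import math

def distance_aware_ce(logits, target_idx, dists, temperature=0.05):
    """Toy distance-aware cross-entropy: the target keeps full weight,
    while non-target candidates get a small soft weight that decays with
    their distance to the target. The weighting form is an assumption."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]  # stable softmax
    z = sum(exps)
    probs = [e / z for e in exps]
    weights = [math.exp(-d / temperature) for d in dists]
    weights[target_idx] = 1.0  # ground-truth object dominates
    total = sum(weights)
    soft = [w / total for w in weights]
    return -sum(t * math.log(p) for t, p in zip(soft, probs))

# Confidently predicting the correct object (index 0) is penalized far
# less than confidently predicting a distant wrong one.
good = distance_aware_ce([5.0, 0.0, 0.0], 0, [0.0, 0.5, 3.0])
bad = distance_aware_ce([0.0, 0.0, 5.0], 0, [0.0, 0.5, 3.0])
```

In this toy form, semantically wrong predictions remain heavily penalized through the softmax term; the distance term only redistributes a small amount of target mass to spatially close candidates.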
When the model predicts an object with *incorrect* semantics (i.e., belonging to a different category than the target), the cross-entropy loss assigns a higher penalty. This ensures that semantically incorrect predictions, whether spatially close or distant, are appropriately penalized for not matching the target's category.\\n\\nHowever, previous models like Chat-3D v2 have overlooked the explicit consideration of spatial distance during training. Our proposed Distance-Aware Cross-Entropy (DACE) loss bridges this gap by incorporating spatial proximity into the loss function, enhancing the model's ability to account for both semantic and spatial alignment.\"}", "{\"comment\": \"Thanks for the detailed responses and the authors' efforts. However, based on the experimental results, it appears that GeVLM still exhibits a significant performance gap compared to the current state-of-the-art (SOTA), particularly in terms of Acc@0.25, where the disparity is especially pronounced. As a paper explicitly titled \\u201c3D Object Grounding\\u201d, such a performance gap is difficult for me to accept.\\n\\nAdditionally, the experiments with \\u201cWorld + DetailedScanRefer + DACE\\u201d further raise questions about the contributions of this work. In the ScanRefer dataset, \\u201cMultiple\\u201d category data accounts for approximately 80%, and GeVLM clearly struggles to handle this critical aspect effectively. This limitation also explains why GeVLM underperforms in the Overall metric in ScanRefer.\\n\\nIn conclusion, my core concern has not been resolved, and I will maintain my original score of 5.\"}", "{\"summary\": \"The paper aims to solve a challenging 3D object grounding task. The paper proposes GeVLM to grasp view-point information and the relative spatial distance between objects. 
Besides, to facilitate the research community, the paper introduces a new dataset named DetailedScanRefer, which provides identifiers and spatial annotation for each object mentioned in the referencing description to further emphasize spatial relationships.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. Focusing on positional relationships is a very worthy research problem for 3D tasks.\\n2. Experiments validate the performance of the proposed method by comparing against some advanced algorithms.\", \"weaknesses\": \"1. The way to integrate view-point information seems too incremental and has limited novelty. It simply utilizes position embeddings and then uses self-attention.\\n2. The gain of VCPE and the designed DetailedScanRefer dataset in the ablation study are too small to prove the effectiveness of the method.\\n3. The paper lacks qualitative experimental analysis to prove that VCPE and DACE have learned the expected position information.\", \"questions\": \"See weaknesses\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}
Additional evidence supporting the dataset's importance would be beneficial, along with strategies to enhance its robustness, especially considering potential errors in object ID assignments by GPT-4o.\\n\\nWe would like to provide additional evidence supporting the importance of our dataset and outline the strategies we've employed to enhance its robustness, especially concerning potential errors in object ID assignments by GPT-4o.\\n\\n**Enhancing Grounding Performance through Effective Use of Anchor Descriptions**\\n\\nIn the ScanRefer dataset [2], each sample description typically consists of two parts: the first sentence refers to the target object, while the subsequent sentences describe surrounding \\\"anchor\\\" objects. Statistical analysis indicates that including these anchor descriptions has only a minor impact on grounding accuracy, as evidenced by the results below:\\n\\n| Method | Unique (Acc\\\\@0.25 / Acc\\\\@0.5) | Multiple (Acc\\\\@0.25 / Acc\\\\@0.5) | Overall (Acc\\\\@0.25 / Acc\\\\@0.5) |\\n|---------------------------|-----------------------------|-------------------------------|------------------------------|\\n| Ours (first sentences) | 73.52 / 46.60 | **33.71 / 21.20** | 41.44 / 26.12 |\\n| Ours (whole descriptions) | **76.33 / 53.51** | 32.73 / 21.11 | **41.19 / 27.40** |\\n\\nThese results, derived from the original ScanRefer paper [2], suggest that processing the entire description does not significantly improve performance, particularly in the \\\"multiple\\\" category, which accounts for the majority of the dataset (7,663 out of 9,538 samples in the validation split). This category involves scenes with multiple objects of the same type, where distinguishing between them relies heavily on contextual information from anchor objects.\\n\\nTo address this challenge, our approach **encourages the model to effectively utilize anchor descriptions**. 
Instead of limiting the task to predicting only the object ID of the target, we modified it to require the model to output all object IDs mentioned in the sentence, including both target and anchor objects. This adjustment forces the model to focus on the entire description, improving its ability to disambiguate between similar objects in complex scenes. Additionally, this method enables us to generate more ground-truth data, as multiple objects are now grounded per description.\"}", "{\"title\": \"Response to Reviewer wme5 Part 1/4\", \"comment\": \"We sincerely thank the reviewer for their insightful feedback and constructive suggestions, and we will address the identified weaknesses in detail below.\\n\\n>The experimental section lacks a sufficiently comprehensive comparison. In addition to the baseline and other LLM-based models, comparisons should also be made with models specifically designed for this task to better demonstrate the overall performance of the proposed model in this context.\\n\\nWe have included a comprehensive performance comparison table in **Appendix C**, which summarizes the performance of various models on the ScanRefer benchmark since 2022. The table now covers expert models, general backbones, and other unified 3D LLMs, offering a detailed comparison with our proposed model, GeVLM.\\n\\nIn the main paper, we have updated **Table 1** to include comparisons with 3DJCG and D3Net. 
These models were selected to ensure a fair comparison among unified 3D-Language Models (3D-LLMs) capable of handling multiple tasks.\\n\\n### Table: Performance comparison of other models on ScanRefer.\\n\\n| Method | Unique (Acc\\\\@0.25 / Acc\\\\@0.5) | Multiple (Acc\\\\@0.25 / Acc\\\\@0.5) | Overall (Acc\\\\@0.25 / Acc\\\\@0.5) |\\n|---------------------------|-----------------------------|-------------------------------|------------------------------|\\n| **ScanRefer** | 76.33 / 53.51 | 32.73 / 21.11 | 41.19 / 27.40 |\\n| **MVT** | 77.67 / 66.45 | 31.92 / 23.30 | 39.43 / 33.26 |\\n| **3D-SPS** | 84.12 / 66.72 | 40.32 / 29.82 | 48.82 / 36.98 |\\n| **ViL3DRel** | 81.58 / 68.62 | 40.30 / 30.71 | 47.94 / 37.73 |\\n| **BUTD-DETR** | 84.20 / 66.30 | 46.60 / 35.10 | 52.20 / 39.80 |\\n| **HAM** | 79.24 / 67.86 | 41.46 / 34.03 | 48.79 / 40.60 |\\n| **3DRP-Net** | 83.13 / 67.74 | 42.14 / 31.95 | 50.10 / 38.90 |\\n| **EDA** | 85.76 / 68.57 | 49.13* / 37.64 | 54.59 / 42.26 |\\n| **M3DRef-CLIP** | 85.30 / 77.20 | 43.80 / 36.80 | 51.90 / 44.70 |\\n| **ConcreteNet** | 86.40* / 82.05* | 42.41 / 38.39 | 50.61 / 46.53* |\\n| **DORa** | - / - | - / - | 52.80* / 44.80 |\\n| **3D-VisTA** (need task fine tuning) | 81.60 / 75.10 | 43.70 / 39.10* | 50.60 / 45.80 |\\n| **3D-VLP** (need task fine tuning) | 84.23 / 64.61 | 43.51 / 33.41 | 51.41 / 39.46 |\\n| 3DJCG (grounding + captioning) | - / 64.34 | - / 30.82 | - / 37.33 |\\n| D3Net (grounding + captioning) | - / 72.04 | - / 30.05 | - / 37.87 |\\n| GeVLM (Ours) | 82.00 / 75.70 | 39.00 / 34.70 | 46.90 / 42.30 |\\n\\nMethods highlighted in **bold** are expert models designed specifically for the 3D grounding task, limiting their capabilities to this single task. 
In contrast, models like 3D-VLP and 3D-VisTA serve as general backbones for a variety of 3D tasks but require task-specific fine-tuning to perform competitively on individual tasks.\\n\\n- Key observations from the table:\\n - **Competitive Accuracy**: GeVLM surpasses all unified 3D-LLMs and demonstrates strong performance among expert models, particularly in the Acc\\\\@0.5 metric. This highlights the effectiveness of our DACE method in aligning predicted and reference locations with minimized distance-based penalties.\\n - **No Task-Specific Fine-Tuning**: Unlike the expert models listed, GeVLM achieves its results without requiring task-specific fine-tuning. This underscores the robustness and generalization capabilities of our model compared to specialized models that are heavily optimized for specific tasks.\\n - **Broad Applicability**: While all listed models are either expert systems tailored to a specific task or require task-specific fine-tuning, GeVLM is versatile, addressing a diverse range of tasks. This flexibility emphasizes the broader utility of our approach, distinguishing it from single-purpose models, and making its results even more significant.\"}", "{\"title\": \"Response to Reviewer wme5 Part 3/4\", \"comment\": \">From the ablation study shown in Table 4, it is observed that most of the performance gains are concentrated in the DACE module. It remains unclear whether similar improvements could be achieved using only \\\"world + DetailedScanRefer + DACE.\\\" The necessity of incorporating viewpoint information is not well substantiated.\\n\\nWe conducted experiments comparing the \\\"World + DetailedScanRefer + DACE\\\" configuration to our proposed model GeVLM, which incorporates rotated scene coordinates (utilizing viewpoint information), DetailedScanRefer, and DACE. 
The results across various tasks and metrics are summarized as follows:\\n\\n| Metric | World + DetailedScanRefer + DACE | GeVLM |\\n|--------------------------------------|----------------------------------|----------------------|\\n| **ScanRefer (Acc\\\\@0.25)** | **47.28** | 46.94 |\\n| **ScanRefer (Acc\\\\@0.50)** | **42.35** | 42.29 |\\n| **ScanRefer (Unique Acc\\\\@0.25)** | 81.41 | **82.04** |\\n| **ScanRefer (Unique Acc\\\\@0.50)** | 75.33 | **75.72** |\\n| **ScanRefer (Multiple Acc\\\\@0.25)** | **39.53** | 38.97 |\\n| **ScanRefer (Multiple Acc\\\\@0.50)** | **34.87** | 34.70 |\\n| **Multi3dRefer (F1\\\\@0.25)** | 49.18 | **49.95** |\\n| **Multi3dRefer (F1\\\\@0.50)** | 45.33 | **46.14** |\\n| **ScanQA (CIDEr)** | 87.73 | **90.53** |\\n| **ScanQA (Bleu-4)** | 13.79 | **15.45** |\\n| **SQA3D (EM)** | **52.53** | 52.24 |\\n\\nWhile the \\\"World + DetailedScanRefer + DACE\\\" configuration shows slightly better performance on the ScanRefer Multiple and Overall metrics (Acc\\\\@0.25 and Acc\\\\@0.50), our GeVLM model excels in the Unique category, achieving higher Unique Acc\\\\@0.25 and Unique Acc\\\\@0.50. Additionally, GeVLM outperforms in tasks such as Multi3dRefer and ScanQA.\\n\\nThe use of rotated scene coordinates aligns the spatial representation with the camera's viewpoint. This alignment proves advantageous, especially for tasks like ScanQA and SQA3D, which do not provide explicit camera poses and typically rely on world coordinates. We have clarified this point in the experimental section. By rotating the coordinates, we standardize the spatial representation across tasks, enabling the model to generalize more effectively and leverage learned spatial relationships.\\n\\nConsidering the overall performance across multiple tasks, we find that incorporating viewpoint information through the rotated coordinate system offers significant benefits. 
Thus, we continue to employ rotated coordinates in our GeVLM model to enhance its generalization capabilities and performance across diverse 3D language tasks.\"}" ] }
7nOl5W6xU4
CityAnchor: City-scale 3D Visual Grounding with Multi-modality LLMs
[ "Jinpeng Li", "Haiping Wang", "Jiabin chen", "Yuan Liu", "Zhiyang Dou", "Yuexin Ma", "Sibei Yang", "Yuan Li", "Wenping Wang", "Zhen Dong", "Bisheng Yang" ]
In this paper, we present a 3D visual grounding method called CityAnchor for localizing an urban object in a city-scale point cloud. Recent developments in multiview reconstruction enable us to reconstruct city-scale point clouds but how to conduct visual grounding on such a large-scale urban point cloud remains an open problem. Previous 3D visual grounding system mainly concentrates on localizing an object in an image or a small-scale point cloud, which is not accurate and efficient enough to scale up to a city-scale point cloud. We address this problem with a multi-modality LLM which consists of two stages, a coarse localization and a fine-grained matching. Given the text descriptions, the coarse localization stage locates possible regions on a projected 2D map of the point cloud while the fine-grained matching stage accurately determines the most matched object in these possible regions. We conduct experiments on the CityRefer dataset and a new synthetic dataset annotated by us, both of which demonstrate our method can produce accurate 3D visual grounding on a city-scale 3D point cloud.
[ "3D Visual Grounding", "Large language model", "multi-modality language model" ]
Accept (Poster)
https://openreview.net/pdf?id=7nOl5W6xU4
https://openreview.net/forum?id=7nOl5W6xU4
ICLR.cc/2025/Conference
2025
{ "note_id": [ "znpqB2tyAk", "zJCwoV6P5q", "y0yVdJpd77", "teQPlp5jRt", "p5fbFO2VSB", "oeA7GSZQ9b", "lZySq8JPIJ", "iJWLHnxUoM", "gaX2s1qj3J", "fknCMPlj46", "dKO4HfSJKo", "dHAXZrDyPJ", "cBXIJ6eqXk", "aZqJboxKJo", "Txf691h1sR", "PncAb1ZXh5", "P7ogTadSO9", "P1QuRiZNLJ", "OeG9WwDZUI", "ONQPvKpBvL", "L4nAmjZr9Q", "JX2fwLIxpJ", "IaXnqbnBVS", "DEmqluayHH", "AVOHgV3w7q", "5WSkLN8qdt" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_review", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1733223160527, 1732167340677, 1732169363871, 1730720926214, 1732707925439, 1732165141444, 1730897798573, 1733168047461, 1732168177270, 1733144431253, 1730720953370, 1737523541290, 1730305763605, 1734869823059, 1733210357382, 1732166312732, 1732169926044, 1732547988912, 1732170275789, 1733210242818, 1733212548697, 1732167276930, 1732706608694, 1733090428575, 1732706444315, 1732473022253 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2923/Authors" ], [ "ICLR.cc/2025/Conference/Submission2923/Authors" ], [ "ICLR.cc/2025/Conference/Submission2923/Authors" ], [ "ICLR.cc/2025/Conference/Submission2923/Reviewer_YNjM" ], [ "ICLR.cc/2025/Conference/Submission2923/Authors" ], [ "ICLR.cc/2025/Conference/Submission2923/Authors" ], [ "ICLR.cc/2025/Conference/Submission2923/Reviewer_6qAx" ], [ "ICLR.cc/2025/Conference/Submission2923/Reviewer_P5JV" ], [ "ICLR.cc/2025/Conference/Submission2923/Authors" ], [ "ICLR.cc/2025/Conference/Submission2923/Authors" ], [ "ICLR.cc/2025/Conference/Submission2923/Reviewer_8f4q" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ 
"ICLR.cc/2025/Conference/Submission2923/Reviewer_P5JV" ], [ "ICLR.cc/2025/Conference/Submission2923/Area_Chair_sQoy" ], [ "ICLR.cc/2025/Conference/Submission2923/Authors" ], [ "ICLR.cc/2025/Conference/Submission2923/Authors" ], [ "ICLR.cc/2025/Conference/Submission2923/Authors" ], [ "ICLR.cc/2025/Conference/Submission2923/Authors" ], [ "ICLR.cc/2025/Conference/Submission2923/Authors" ], [ "ICLR.cc/2025/Conference/Submission2923/Authors" ], [ "ICLR.cc/2025/Conference/Submission2923/Reviewer_6qAx" ], [ "ICLR.cc/2025/Conference/Submission2923/Authors" ], [ "ICLR.cc/2025/Conference/Submission2923/Authors" ], [ "ICLR.cc/2025/Conference/Submission2923/Reviewer_YNjM" ], [ "ICLR.cc/2025/Conference/Submission2923/Authors" ], [ "ICLR.cc/2025/Conference/Submission2923/Reviewer_P5JV" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer 6qAx\", \"comment\": \"Thank you for the discussion. We really appreciate your efforts in reviewing our work and the insightful comments that help us improve the paper!\"}", "{\"title\": \"Responses to Reviewer 8f4q (Part 2)\", \"comment\": \"**Q3:** Also, there are little vague sentences and grammatical errors in the paper. I recommend that the author will revise the paper.\\n\\n**A3:** Thanks for your comments. We have carefully checked and corrected the vague sentences and grammatical errors in the paper, and the PDF has been updated.\"}", "{\"title\": \"Responses to Reviewer P5JV (Part 1)\", \"comment\": \"**We sincerely appreciate the reviewer's insightful comments and time dedicated to evaluating our work. We address all concerns below.**\\n\\n**Q1:** Limitations of 2D Mapping: Projecting 3D space onto a 2D map often results in a loss of spatial information, particularly in cases of occlusions or overlapping instances. 
This pipeline, especially the CLM component, operates under the significant assumption that all objects are nearly flat, which inherently limits its applicability to more complex environments, such as indoor spaces or densely populated urban areas like NYC.\\n\\n**A1:** Thanks for pointing out the limitations of 2D mapping. We agree that the proposed method using 2D projection works in most cities similar to our two datasets but not in extremely complex 3D cities like NYC. Existing 3D visual grounding methods only focus on small-scale scenes like a room, while our method CityAnchor extends the baseline methods to real city-scale visual grounding, which is a substantial step toward intelligent geospatial analysis. We agree that generalizing such visual grounding to complex 3D cities like NYC could be an important and interesting future topic. Though the current CLM may not be applicable to such complex 3D cities, incorporating an LLM like our FMM or a more generalized CLM is still a feasible and promising starting point for these complex 3D cities. We have added these discussions in the revision of **Sec.5 (Limitations)**.\\n\\n**Q2:** Dependence on Point Segmentation Model: The pipeline requires users to first segment the scene point cloud into objects using a pretrained 3D segmentation model. As a result, it heavily depends on the granularity and accuracy of this pretrained model, setting an upper performance limit for the proposed approach.\\n\\n**A2:** Thanks for pointing out CityAnchor's dependence on a pretrained 3D segmentation model. Below, we provide a detailed response to address this concern.\\n\\n**a) CityAnchor has the flexibility in selecting 3D segmentation models.**\\nCityAnchor follows CityRefer in segmenting the scene point cloud into objects using a pretrained 3D segmentation model but can be combined with different instance segmentation methods. 
We do not rely on any specific type of instance segmentation algorithm.\\n\\n**b) CityAnchor has the potential to generalize to unknown objects.**\\nIn the revision, we additionally conduct a generalization ability experiment to assess CityAnchor's ability to recognize unknown objects in Figure 8 of the Appendix. On the CityRefer dataset, CityAnchor is trained with only four object categories: Building, Car, Parking, and Ground, and we introduce three objects of unknown object categories (Road Intersection, Woodland, and River) that CityAnchor has not seen before. We show that CityAnchor is still able to determine a reasonable matching score between the text descriptions and the given 3D objects due to the strong generalization ability of LLMs, even though these objects may not be well segmented by the 3D segmentation model.\\n\\n**Q3:** Justification for the Proposed Dataset: While introducing a new dataset is valuable, the necessity of the CityAnchor dataset for this task or the proposed method requires further clarification. It is essential to address whether the method\\u2019s performance would significantly degrade without this dataset, whether it enhances diversity or robustness, and if it offers any unique attributes that the CityRefer dataset lacks. Providing these details would strengthen the contribution of the dataset to the paper.\\n\\n**A3:** Thanks for your comments. \\nWe utilize the CityAnchor dataset to illustrate the effective design of our method because **1)** the CityAnchor dataset has a wider range of object categories, covering nine categories including Building, Vegetation, Aircraft, Truck, Vehicle, LightPole, Fence, StreetSign, and Bike, which demonstrates the effectiveness of the grounding algorithm on more categories. **2)** the CityAnchor dataset is based on synthetic data and thus has accurate instance labels, which helps us evaluate the visual grounding more accurately without other distracting factors. 
Thus, we have provided this dataset for the evaluation of our visual grounding performance.\"}", "{\"summary\": \"The paper addresses the challenge of object localization in city-scale 3D point clouds with a method called CityAnchor, which leverages a multi-modal large language model. Using a two-stage approach of coarse localization and fine-grained matching, CityAnchor efficiently and accurately identifies target objects in large urban scenes. Experimental results show that CityAnchor significantly outperforms existing methods in accuracy and speed, with promising applications in urban planning and geospatial analysis.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"S1: The method\\u2019s coarse-to-fine design is well-structured for efficient large-scale 3D visual grounding.\", \"s2\": \"Experimental results show substantial and objective improvements over existing methods.\", \"s3\": \"The proposed model demonstrates clear practical potential for applications in urban planning and geospatial analysis.\", \"weaknesses\": \"W1: The method seems heavily reliant on large language models, and the need for manual adjustment of the Region of Interest (RoI) threshold in the coarse localization stage suggests a lack of generalizability.\", \"w2\": \"While the coarse-to-fine localization design is effective, it builds on existing multi-stage approaches without introducing fundamentally new concepts in 3D visual grounding, which suggests that the method's strengths lack a degree of innovation.\", \"other\": \"The layout could be improved; for instance, all teaser images should be embedded as PDFs rather than PNGs to ensure the text within images is selectable.\", \"questions\": \"It\\u2019s unclear how the method addresses dynamic objects in urban environments, such as moving vehicles or pedestrians, which could affect the reliability of grounding results.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", 
\"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Gentle reminder for Reviewer 6qAx\", \"comment\": \"Dear Reviewer 6qAx,\\n\\nAs the author-reviewer discussion period is coming to a close, we would like to provide a gentle reminder that we have posted a response to your valuable comments. We sincerely appreciate your encouraging feedback and are eager to address any additional questions or concerns you may have.\\n\\nIf there is any additional information or clarification we might provide to support the discussion, we would be grateful if you could let us know.\\n\\nThank you once again for your encouraging comments and time dedicated to evaluating our work.\"}", "{\"title\": \"Overall Response from Authors\", \"comment\": \"We sincerely appreciate all reviewers for their detailed feedback and constructive suggestions. We are glad that the reviewers recognize the effectiveness of CityAnchor in addressing the 3D city-scale visual grounding problem with positive comments like the well-structured coarse-to-fine design (6qAx, 8f4q, YNjM, P5JV), the impressive experimental results demonstrating significant performance improvements (6qAx, 8f4q, YNjM, P5JV) compared to the existing baseline methods, and the introduction of the CityAnchor dataset can serve as a valuable resource for future research (P5JV).\\n\\nAfter carefully improving the quality of our submission, we present here a revised main paper and supplementary materials, and the pdf has been updated. 
The modifications in the main paper have been highlighted in red, and the added experiments and necessary analysis supplemented in the appendix have been highlighted in blue.\", \"changes_in_the_main_paper_include\": \"$\\\\bullet$ Detailed meaning of $c$ and encoding process for FMM in Sec.3.3.1.\\n\\n$\\\\bullet$ Improved writing: StreetSgin $\\\\rightarrow$ StreetSign in Sec.4.1.1.\\n\\n$\\\\bullet$ Specific examples added for \\\"Novel Objects\\\" and \\\"Novel Descriptions\\\" settings in Sec.4.1.3.\\n\\n$\\\\bullet$ Clarification of fixed RoI threshold (0.3) and positive and negative sample ratio (1:3) in Sec.4.1.4.\\n\\n$\\\\bullet$ More in-depth discussion for limitations and future work in Sec.5.\", \"changes_in_the_revised_supplementary_material_include\": \"$\\\\bullet$ Additional Results (Figure A.2) and discussion on CityRefer dataset with RoI map in Sec.A.1.\\n\\n$\\\\bullet$ Results (Figure A.7) and analysis on diverse text prompts in Sec.A.4.\\n\\n$\\\\bullet$ Generality experiment (Figure A.8 and Figure A.9) and discussion for unknown objects in Sec.A.5.\"}", "{\"summary\": \"This paper proposes a multi-modal LLM for city-scale 3D visual grounding named CityAnchor. CityAnchor adopts a coarse-to-fine searching strategy. First, a 3D segmentation model generates the potential objects from point clouds. Next, in the coarse localization stage, a LLaVA generates the <RoI> token and a SAM generates the RoI heatmap indicating the candidates of the target object. At last, in the fine-grained matching stage, another LLaVA selects the best candidate as the target object. Besides, this work also presents a new dataset for 3D visual grounding. Experiments show that CityAnchor outperforms the previous methods by a large margin.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper is well-written and easy to follow.\\n2. The experiment results are impressive.\\n3. 
The proposed method is well motivated and technically sound.\", \"weaknesses\": \"1. As also mentioned in Sec.5, the efficiency of the proposed method is a major concern. It involves two LLMs and multiple forward passes in the fine-grained matching stage. Are there any possible solutions to improve the efficiency?\\n2. How to choose the positive and the negative samples during the training of FMM?\\n3. The objects are generated by a pretrained 3D segmentation model which can only recognize a closed set of targets. So I expect to see the generality of CityAnchor to unknown objects.\\n4. Some important details are missing:\\n- Line 157, what is $T_c$?\\n- Line 138 & 209, how to determine whether an object has a landmark name?\\n- Line 206, what is $c$ in $E^s_x$?\\n- Line 232, which encoder is the concatenated feature vector fed into?\", \"questions\": \"Please address the questions in the weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank the authors for their thorough explanation and efforts in the rebuttal. The majority of my concerns have been addressed; I'll raise my score accordingly.\"}", "{\"title\": \"Responses to Reviewer YNjM\", \"comment\": \"**We sincerely appreciate the reviewer's insightful comments and time dedicated to evaluating our work. 
Our responses are listed below.**\\n\\n**Q1:** The method seems heavily reliant on large language models, and the need for manual adjustment of the Region of Interest (RoI) threshold in the coarse localization stage suggests a lack of generalizability.\\n\\n**A1:** Thanks for your comments; our responses are in the following.\\n\\n**a) Reasons for using LLMs.**\\n\\nLLMs have proven effective across a variety of domains due to their powerful text understanding and reasoning capabilities, and a promising direction to improve visual grounding is to design a multi-modality large language model to process both the text prompts and city-scale 3D point clouds. While previous city-scale grounding methods are trained from scratch using text prompts and 3D point clouds, one of our contributions is to introduce powerful LLMs into city-scale visual grounding, which outperforms them by 36\\\\% ~ 51\\\\% on Acc@0.25 and 31\\\\% ~ 48\\\\% on Acc@0.50 on two city-scale datasets as shown in Table 1 of the main paper. We believe that our method is aligned with the ongoing trend in the research community to incorporate LLMs into multi-modal applications, where the complementary strengths of language modeling and 3D understanding are combined to tackle complex problems in city-scale scenarios.\\n\\n**b) We do not manually tune the threshold in CLM.**\\n\\nIn all the experiments conducted on the CityRefer and CityAnchor datasets (e.g., Tables 1, 2, and 3 of the main paper), we employed a fixed RoI threshold (0.3) without adjusting the threshold. \\nAs shown in Table 1 of the main paper, when the RoI threshold is fixed to 0.3, CityAnchor already performs well and achieves at least 46\\\\% and 35\\\\% on Acc@0.50 on the CityRefer and CityAnchor datasets, respectively.\\nTable 4 uses the results of different thresholds to analyze the efficiency and effectiveness of our method, but these are not our main results. 
\\nWe have clarified this in **Sec.4.1.4 (Implementation details)** of the revised paper.\\n\\n**Q2:** While the coarse-to-fine localization design is effective, it builds on existing multi-stage approaches without introducing fundamentally new concepts in 3D visual grounding, which suggests that the method's strengths lack a degree of innovation.\\n\\n**A2:** Thanks for the feedback. \\nWe agree that the coarse-to-fine strategy is widely adopted in various areas of computer vision, but how to utilize such a strategy in city-scale geospatial analysis is unexplored. The previous method CityRefer randomly selects objects for the visual grounding task, which is slow and inaccurate. Thus, we introduce the coarse-to-fine strategy to improve both accuracy and efficiency. \\n\\nDesigning such a coarse searching module for the city-scale point cloud is also non-trivial. We effectively project the point clouds onto the ground and formulate it as a more manageable 2D coarse-searching problem. All of these techniques are unexplored and a part of our innovation.\\n\\n**Q3:** The layout could be improved; for instance, all teaser images should be embedded as PDFs rather than PNGs to ensure the text within images is selectable.\\n\\n**A3:** Thanks for your suggestion. We have adjusted the paper to use PDFs instead of PNGs for the teaser and experiment figures.\\n\\n**Q4:** It\\u2019s unclear how the method addresses dynamic objects in urban environments, such as moving vehicles or pedestrians, which could affect the reliability of grounding results.\\n\\n**A4:** Thanks for your comments. In our current implementation, we remove dynamic objects, such as moving vehicles, pedestrians, and ongoing construction, following the baseline method CityRefer. 
By removing these dynamic objects, we can reduce the risk of grounding errors caused by the unpredictable motion or temporary presence of such objects.\\n\\nWe agree with the reviewer that handling dynamic objects, such as moving vehicles, pedestrians, and active construction sites, is important in the data preprocessing of our method. For future work, it could be an interesting topic to conduct city-scale visual grounding for dynamic 3D point clouds. We have added these discussions in the revision of **Sec.5 (Limitations)**.\"}", "{\"title\": \"Gentle reminder for Reviewer P5JV\", \"comment\": \"As the author-reviewer discussion period approaches its end, we want to kindly remind you of our responses to your follow-up questions and thoughtful suggestions. We sincerely appreciate your considerate feedback and are eager to address any additional questions or concerns you may have.\\n\\nIf there is any additional information or clarification we could provide to support the discussion, we would be delighted if you could let us know.\\n\\nThank you once again for your considerate feedback and the time you have devoted to reviewing our work.\"}", "{\"summary\": \"In the paper, the authors introduced CityAnchor, a 3D visual grounding method tailored for city-scale scenes. CityAnchor is based on a two-stage multi-modal LLM. In the coarse stage, CityAnchor predicts candidate objects that match the query text descriptions on projected 2D maps derived from urban point clouds, allowing us to efficiently filter out redundant regions and concentrate on likely objects. Next, it performs fine-grained matching between text descriptions and the candidate objects to establish the final grounding results by encoding both text descriptions and object features into an LLM. 
To do that, the authors created a new dataset and evaluated the proposed method on both the CityAnchor and CityRefer datasets to demonstrate the effectiveness of the proposed method on city-scale point clouds.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"First of all, it is interesting that a highly intuitive idea is proposed to address two major issues in existing methods concerning how to improve the multi-modality feature extraction in large-scale visual grounding and how to efficiently localize the object in a large-scale point cloud.\\n\\nAlthough it lacks technical novelty, it effectively implements the necessary modules to address two main problems. More specifically, a coarse localization first finds possible regions on a projected 2D map of the point cloud while the fine-grained matching accurately determines the region most similar to the given text description.\\n\\nIn the experimental section, the proposed method achieved SoTA results in city-level localization tasks, demonstrating its effectiveness. The ablation study is highly analytical for each module and feature representation.\", \"weaknesses\": \"One concern is the validity of the coarse localization module. As shown in Fig. 4, I am curious whether CLM operates as intended. It appears to activate too many candidates, as if it were merely distinguishing between urban and non-urban areas. This raises doubts about whether it significantly aids the efficient operation of the overall framework.\\n\\nExperimental analysis and description are not specific and descriptive enough. For example, a detailed explanation based on specific examples is needed to clarify the novel objects and novel descriptions, and whether the reason they are not in the training data is due to an out-of-distribution (OOD) scenario.\", \"questions\": \"I mentioned all comments including reasons and suggestions in the above sections. 
I recommend that the authors address all the concerns and improve the completeness of the paper. If the rebuttal period resolves the above-mentioned concerns, I will gladly raise my score. Also, there are little vague sentences and grammatical errors in the paper. I recommend that the author will revise the paper.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"The paper presents CityAnchor, a novel 3D visual grounding method designed for city-scale scenes, utilizing a two-stage multi-modality large language model (LLM). The first stage focuses on coarse localization of potential object regions based on text descriptions, while the second stage involves fine-grained matching to determine the best object matches. Specifically,\\n- Coarse Localization Module (CLM) takes a city-scale colored point cloud, projects it onto a 2D map, and uses the text description to regress possible regions of the target on this 2D map.\\n- Fine-grained Matching Module (FMM) performs fine-grained comparisons between each candidate object identified by the CLM and the text description by predicting the similarity between the text and each candidate object, and selecting the object with the highest similarity score.\\nAdditionally, the paper introduces the CityAnchor dataset, a synthesized benchmark for 3D visual grounding. The method demonstrates significant improvements in accuracy and efficiency over existing techniques on the CityRefer and CityAnchor datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper introduces CityAnchor, a novel approach to 3D visual grounding specifically tailored for city-scale scenes. 
Its use of a two-stage multi-modality large language model (LLM) for both coarse localization and fine-grained matching is effective and an advance in the field. The proposed approach effectively integrates spatial context and object features, showcasing its ability to address the complexities of urban environments and setting it apart from existing methods that may not adequately handle such challenges.\", \"The paper provides a comprehensive evaluation of the model's performance, demonstrating significant improvements in accuracy compared to existing methods. Additionally, the introduction of the CityAnchor dataset as a synthesized benchmark for 3D visual grounding further enhances the quality of the research, providing a valuable resource for future studies.\", \"The paper is well-structured and easy to follow, clearly stating the objectives, methodology, and results of the research.\"], \"weaknesses\": [\"Limitations of 2D Mapping: Projecting 3D space onto a 2D map often results in a loss of spatial information, particularly in cases of occlusions or overlapping instances. This pipeline, especially the CLM component, operates under the significant assumption that all objects are nearly flat, which inherently limits its applicability to more complex environments, such as indoor spaces or densely populated urban areas like NYC.\", \"Dependence on Point Segmentation Model: The pipeline requires users to first segment the scene point cloud into objects using a pretrained 3D segmentation model. As a result, it heavily depends on the granularity and accuracy of this pretrained model, setting an upper performance limit for the proposed approach.\", \"Justification for the Proposed Dataset: While introducing a new dataset is valuable, the necessity of the CityAnchor dataset for this task or the proposed method requires further clarification. 
It is essential to address whether the method\u2019s performance would significantly degrade without this dataset, whether it enhances diversity or robustness, and if it offers any unique attributes that the CityRefer dataset lacks. Providing these details would strengthen the contribution of the dataset to the paper.\", \"Inference time and hyperparameter tuning: These two limitations were briefly discussed in the paper.\"], \"questions\": [\"Effectiveness of CLM: In Figure 4, it appears that CLM is not highly effective, as it predominantly highlights houses and roads. Could the authors provide additional results or offer an explanation for this behavior?\", \"Prompt Engineering Requirements: It seems that users must invest effort in engineering their prompts to achieve accurate results, which may pose usability challenges. The authors could consider conducting a user study to gather prompts from a diverse set of users to assess the method\u2019s robustness when different descriptions target the same object.\", \"Progressive Prompting Approach: It would be intuitive to structure prompts in a coarse-to-fine manner\u2014beginning with a broad query that identifies multiple potential candidates and gradually introducing constraints to precisely locate the target.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper introduces an approach that localizes an object in a city-scale point cloud. The proposed idea has two key stages: one roughly locates possible positions on the 2D map of the point cloud using an LLM, and the next accurately localizes these potential candidates. The authors conduct experiments with the CityRefer dataset and the newly annotated synthetic dataset, and demonstrate superior accuracy on these two datasets. 
Given the consensus from reviewers for the paper's acceptance, AC confirms that the proposed approach introduced interesting ideas in this field and proposed a challenging dataset for the task.\", \"additional_comments_on_reviewer_discussion\": \"Overall, all reviewers were inclined toward paper acceptance after the rebuttal phase. AC confirms that the discussion was constructive, and the authors provided solid feedback. Finally, the authors updated the main paper and supplementary materials correctly.\\n\\nMore specifically, reviewer 6qAx asks a question regarding the efficiency of the proposed method, and the authors mention possible ways to improve the speed by using RoI enhancement and landmark information guidance. The authors also provided answers regarding the question about positive and negative sampling ratios, and the authors explained the CityAnchor settings that use known object classes. The other comments are mostly about equations and network design. The reviewer 8f4q questioned the validity of the coarse localization module (CLM) in the proposed approach, and the authors provided additional results and noted the limitations of the proposed approach. The reviewer 8f4q requested a more informative discussion regarding the analysis of the experimental results. Although reviewer 8f4q did not confirm the authors' feedback, AC confirms that the authors correctly addressed the concerns. The reviewer YNjM provides a short review regarding the motivation for using LLMs, threshold setting in CLM, inserted image format, and moving objects. The authors provided feedback on the comments, and the reviewer YNjM raised the score in the end. The reviewer P5JV provided a thorough review. The questions were about the limitation of the 2D mapping, dependence of the point segmentation model, justification of the proposed dataset, inference time, RoI threshold, and prompt engineering requirements. 
The reviewer P5JV also suggested the constructive direction of structuring the prompts in a coarse-to-fine manner. There was another discussion about identifying unknown objects. The reviewer P5JV finally raised the score, clarifying that all major concerns had been resolved.\"}", "{\"title\": \"Response to Reviewer YNjM\", \"comment\": \"Thank you for the discussion. We sincerely appreciate the time and effort you dedicated to reviewing our work and helping us improve the paper!\"}", "{\"title\": \"Responses to Reviewer 6qAx\", \"comment\": \"**We sincerely appreciate the reviewer's invaluable comments and time dedicated to evaluating our work. We provide our responses to each question below.**\\n\\n**Q1:** As also mentioned in Sec.5, the efficiency of the proposed method is a major concern. It involves two LLMs and multiple forward passes in the fine-grained matching stage. Are there any possible solutions to improve the efficiency?\\n\\n**A1:** Thanks for your insightful comments. Our method achieves better efficiency than baselines due to the coarse-to-fine strategy. We agree that there is still room for improvement. We provide two possible solutions in the following. We have added this discussion in the revision of **Sec.5 (Limitations)**.\\n\\n**a) RoI Enhancement.** In the first stage of CityAnchor, the Coarse Localization Module (CLM) is designed to regress the regions of interest (RoI) of the target object for filtering out irrelevant objects. We can train the CLM using more diverse datasets to enhance the accuracy of the RoIs, enabling them to comprehend complex query texts, while effectively excluding irrelevant objects and purposefully retaining only the relevant objects.\\n\\n**b) Landmark Information Guidance.** We observe that landmarks are frequently mentioned in query text, such as \\u201cThe big oval green ground that is nearby the Perry Barr Greyhound Stadium\\u201d. 
Considering that Retrieval-Augmented Generation (RAG) has emerged as a promising solution to mitigate issues of limited expertise in highly specialized queries, we can incorporate landmark information for each object into CityAnchor as external knowledge using RAG, utilizing landmark information for efficient grounding. Specifically, we plan to select only the objects near the landmark to compare with the query text and avoid too many candidate objects being involved in fine-grained comparisons. \\n\\n**Q2:** How to choose the positive and the negative samples during the training of FMM?\\n\\n**A2:** Thanks for your comments. We select positive and negative samples in a ratio of 1:3 for FMM training, and the negative samples are randomly chosen among the candidate objects after excluding the target object. We have included this clarification in **Sec.4.1.4 (Implementation details)**.\\n\\n**Q3:** The objects are generated by a pretrained 3D segmentation model which can only recognize a closed set of targets. So I expect to see the generality of CityAnchor to unknown objects.\\n\\n**A3:** Thanks for your comments. \\nWe follow the experimental setting of CityRefer and rely on a pre-trained 3D segmentation model to extract objects.\\n\\nWe show that CityAnchor has the potential to generalize to unknown objects. In the revision, we additionally conduct a generality experiment to assess CityAnchor's ability to recognize unknown objects in Figure 8 of the Appendix. On the CityRefer dataset, CityAnchor is trained with only four object categories: Building, Car, Parking, and Ground, and we introduce three objects of unknown object categories (Road Intersection, Woodland, and River) that CityAnchor has not seen before. 
We show that CityAnchor is still able to determine a reasonable matching score between the text descriptions and the given 3D objects due to the strong generalization ability of LLMs.\\n\\n**Q4:** Some important details are missing:\\n\\nLine 157, what is $T_c$?\\n\\nLine 138 \\\\& 209, how to determine whether an object has a landmark name?\\n\\nLine 206, what is $c$ in $E^s_x$?\\n\\nLine 232, which encoder is the concatenated feature vector fed into?\\n\\n**A4:** Thanks for your comments. We have added these missing details in the revised version.\\n\\n**a) Line 157, what is $T_c$?**\\n\\n$T_c$ denotes the output texts (including the <RoI> token) in the Coarse Localization Module (CLM). We have added a representation of $T_c$ in Figure 2 of the revised paper.\\n\\n**b) Line 138 \\\\& 209, how to determine whether an object has a landmark name?**\\n\\nThe CityRefer dataset provides landmark names for some objects, with approximately 10\\\\% of the objects already assigned a landmark name, while the remaining objects lack such identifiers. We follow the CityRefer dataset and assign a landmark name to an object based on the geospatial information retrieved from OpenStreetMap.\\n\\n**c) Line 206, what is $c$ in $E^s_x$?**\\n\\n$c$ in $E^{s}_x\\\\in \\\\mathbb{R}^{c \\\\times d}$ represents the feature length of the 2D CLIP feature, which is 256.\\n\\n**d) Line 232, which encoder is the concatenated feature vector fed into?**\\n\\nThe concatenated feature vector is input into the transformer encoder of the Fine-grained Matching Module (FMM), which is fine-tuned from LLaVA. 
The FMM predicts the similarity between the query text and the objects based on the concatenated feature vector and then selects the object with the largest similarity as the final grounding result for the query text.\"}", "{\"title\": \"Responses to Reviewer P5JV (Part 2)\", \"comment\": \"**Q4:** Inference time and hyperparameter tuning: These two limitations were briefly discussed in the paper.\\n\\n**A4:** Thanks for your comments. We have quantitatively discussed the inference time and the hyperparameter tuning for the RoI threshold in as much detail as possible in the main paper. Below, we provide a detailed response to clarify these points.\\n\\n**a) Inference time.** We have discussed the inference time in Table 2 and Table 3 of the main paper. CityAnchor takes only $32.45s$ and $51.72s$ to localize an object in a scene of the CityRefer and CityAnchor dataset, which is $1.3\\\\times$ and $1.8\\\\times$ faster than the CityRefer method, respectively. This is mainly attributed to our coarse-to-fine strategy for filtering out most irrelevant objects. Additionally, utilizing the CLM in CityAnchor accelerates inference by $2.5 \\\\times$ compared to the configuration without CLM.\\n\\n**b) RoI threshold in hyperparameter tuning.**\\nAs shown in Table 4 of the main paper, we have discussed the RoI threshold in hyperparameter tuning. A low threshold results in numerous candidate objects being involved in fine-grained comparisons, thereby leading to long grounding times. Conversely, setting a stricter threshold results in the exclusion of correct target objects, leading to a decrease in grounding accuracy. 
To strike a balance between grounding accuracy and time consumption, we select $0.3$ as the RoI threshold for the CLM in the first stage.\\n\\nFrom the above analysis, our CityAnchor is faster and better than all the baseline methods, because the CLM effectively filters out erroneous objects.\\n\\n**Q5:** Effectiveness of CLM: In Figure 4, it appears that CLM is not highly effective, as it predominantly highlights houses and roads. Could the authors provide additional results or offer an explanation for this behavior?\\n\\n**A5:** Thanks for your comments. The rasterized map shown in Figure 4 is mainly dominated by small objects, which shows the limited effectiveness of CLM in this case.\\nAs illustrated in Figure 2 of the Appendix, we provide additional qualitative results of our method on the CityRefer dataset in the \\\"Novel Objects\\\" setting, which show that CLM can effectively filter out erroneous objects.\\n\\nAs illustrated in Figure 6 of the Appendix, we provide heat map visualizations for both simple and complex query texts, which show that CLM performs reasonably even on complex texts. \\nWe have shown in Table 3 of the main paper that CLM significantly reduces computation time while improving grounding accuracy, with an Acc@0.50 of 46.86\\\\% and only $32.45s$ to localize an object.\\n\\nThe coarse localization module (CLM) is designed to filter out irrelevant objects, thereby enhancing the model efficiency in city-scale visual grounding tasks. However, we must acknowledge that the filtering effectiveness can vary depending on the size and attributes of target objects. Large objects (e.g., factories, athletic fields, parking lots) can be easily identified and excluded through the heat map output by CLM. In contrast, small objects (e.g., cars, residential buildings) are more challenging to distinguish, requiring further detailed comparison in the fine-grained matching module (FMM). 
\\n\\nWe have revised the paper to include a discussion of this phenomenon in **Sec.5 (Limitations)** and plan to address these issues in future work.\"}", "{\"title\": \"Responses to Reviewer P5JV\", \"comment\": \"Thanks for the valuable comments and suggestions. We evaluate CityAnchor's ability to recognize unknown objects for the entire pipeline, including CLM and FMM. For better visualization, we have drawn a target box on the CLM results for Figure 4, Figure A.1, Figure A.2 and Figure A.9. The PDF has been updated.\\n\\n**CLM has a moderate ability and inherent limitations to identify unknown objects of novel categories.** Image-based open-vocabulary segmentation is a challenging task, especially for large-scale maps rasterized from city-scale point clouds. We agree that more effort may be needed to generalize city-scale grounding to the open-vocabulary level. Though CLM may not always precisely select only the target object into the RoI map in visual grounding for unknown objects of novel categories, it can include a broad range of objects potentially matching the textual descriptions. This ensures that unknown objects can be considered as candidate objects for fine-grained matching in FMM.\\n\\nAs illustrated in Figure 9 of the Appendix, we add qualitative experiments on RoI detection and visual grounding and provide both the RoI map and grounding results for three representative unknown objects. The final grounding results demonstrate that CityAnchor exhibits certain generalization capabilities, enabling it to perform 3D visual grounding across a broader range of objects.\"}", "{\"title\": \"Responses to Reviewer P5JV (Part 3)\", \"comment\": \"**Q6:** Prompt Engineering Requirements: It seems that users must invest effort in engineering their prompts to achieve accurate results, which may pose usability challenges. 
The authors could consider conducting a user study to gather prompts from a diverse set of users to assess the method\\u2019s robustness when different descriptions target the same object.\\n\\n**A6:** Thanks for your comments. We have performed experiments to analyze how specific parts of text prompts influence grounding performance, which serves as a reference for prompt engineering for different categories of objects. Additionally, we assess the method\\u2019s robustness when diverse text prompts from different users target the same object. \\n\\n**a) Analysis for specific parts of text prompts**\\n\\nAs shown in Figure 6 of the main paper, we visualize the grounding results of systematically removing text descriptions related to color, shape, and neighborhood contextual information. Color description plays an important role in visual grounding, and the lack of color information often leads to incorrect results. In contrast, while shape descriptions are less critical for cars, they become more significant for buildings due to the larger variability of shapes. In a scene with many similar objects (e.g., red cars), color and shape descriptions are inadequate to distinguish these objects, and integrating neighborhood contextual information is vital for achieving accurate grounding.\\n\\n**b) Analysis for diverse text prompts from different users**\\n\\nAs illustrated in Figure 7 of the Appendix, we provide qualitative results for diverse text prompts toward the same object. In addition to thorough object feature extraction, the text description plays a crucial role in visual grounding. 
For the successful grounding cases, text descriptions related to color and category are often indispensable.\\n\\n**Q7:** Progressive Prompting Approach: It would be intuitive to structure prompts in a coarse-to-fine manner\\u2014beginning with a broad query that identifies multiple potential candidates and gradually introducing constraints to precisely locate the target.\\n\\n**A7:** We greatly appreciate the reviewer\\u2019s insightful suggestion of a progressive prompting approach, which holds great potential for enhancing the usability and flexibility of city-scale grounding. \\n\\nWhile the current CityAnchor employs a two-stage pipeline for coarse localization and fine-grained matching, integrating a progressive prompting approach offers a promising area for future work. This process could begin with a broad query to identify multiple potential candidates, followed by progressively refining constraints (such as category, spatial relation, and attribute cues) to precisely locate the target object.\\n\\nFor example, consider a city scene where the user queries, \\u201cThe tall and white-roofed building near the One Stop Building (Landmark)\\u201d. In a progressive prompting query, we might have the following steps:\\n\\n**(1) Initial Prompt (Broad Query):** \\u201cThe building in this city scene\\u201d. The system identifies all the building candidate objects in the city-scale point cloud.\\n\\n**(2) Refined Prompt (Spatial Constraint):** \\u201cThe building near the One Stop Building\\u201d. The system filters out most of the building candidate objects based on their proximity to the \\u201cOne Stop Building\\u201d, progressively localizing the target object.\\n\\n**(3) Final Prompt (Attribute Matching):** \\u201cThe tall and white-roofed building\\u201d. 
Additional attributes (e.g., roof color or specific textural features) could further confirm the target object among the filtered building candidate objects.\\n\\nProgressive prompting aligns naturally with human reasoning and interaction patterns, making it an inspiring way to enhance CityAnchor\\u2019s adaptability in real-world scenarios.\"}", "{\"title\": \"Response to Reviewer P5JV\", \"comment\": \"Thank you for the discussion. We sincerely appreciate your time and effort in reviewing our work and helping us improve our paper!\"}", "{\"comment\": \"Thanks to the authors for the detailed explanations; most of my concerns have been addressed. In spite of the potential efficiency problem caused by the usage of LLMs, I still think this work has made enough contribution to the community. For this reason, I would keep my rating unchanged.\"}", "{\"title\": \"Responses to Reviewer 8f4q (Part 1)\", \"comment\": \"**We sincerely appreciate the reviewer's encouraging comments and time dedicated to evaluating our work. We provide our responses to each question below.**\\n\\n**Q1:** One concern is the validity of the coarse localization module. As shown in Fig.4, I am curious whether CLM operates as intended. It appears to activate too many candidates, as if it were merely distinguishing between urban and non-urban areas. This raises doubts about whether it significantly aids in the efficient operation of the overall framework.\\n\\n**A1:** Thanks for your comments. The rasterized map shown in Figure 4 is mainly dominated by small objects, which shows the limited effectiveness of CLM in this case. As illustrated in Figure 2 of the Appendix, we provide additional qualitative results of our method on the CityRefer dataset in the \\\"Novel Objects\\\" setting, which show that CLM can effectively filter out erroneous objects. 
As illustrated in Figure 6 of the Appendix, we provide heat map visualizations for both simple and complex query texts, which show that CLM performs reasonably even on complex texts. We have shown in Table 3 of the main paper that CLM significantly reduces computation time while improving grounding accuracy, with an Acc@0.50 of 46.86\\\\% and only 32.45s to localize an object.\\n\\nThe coarse localization module (CLM) is designed to filter out irrelevant objects, thereby enhancing the model efficiency in city-scale visual grounding tasks. However, we must acknowledge that the filtering effectiveness can vary depending on the size and attributes of target objects. Large objects (e.g., factories, athletic fields, parking lots) can be easily identified and excluded through the heat map output by CLM. In contrast, small objects (e.g., cars, residential buildings) are more challenging to distinguish, requiring further detailed comparison in the fine-grained matching module (FMM). \\n\\nWe have revised the paper to include a discussion of this phenomenon in **Sec.5 (Limitations)** and plan to address these issues in future work.\\n\\n**Q2:** Experiment analysis and description are not specific and descriptive. For example, a detailed explanation based on specific examples is needed to clarify the novel objects and novel descriptions, whether the reason they are not in the training data is due to an out-of-distribution (OOD) scenario.\\n\\n**A2:** We appreciate the reviewer\\u2019s insightful feedback and acknowledge the need for more detailed experimental analysis and clearer descriptions regarding novel objects, novel descriptions, and out-of-distribution (OOD) scenarios. In the revised paper, we have added more specific examples and explanations to address these concerns. 
\\n\\n**a) Novel Descriptions (ND).**\\n\\nThe \\\"Novel Descriptions\\\" setting indicates that the evaluation objects are seen in the training set, but the query text descriptions are different in the test set. This variance in text description may lead to different grounding results, even for the same object. \\n\\nTo clarify the \\\"ND\\\" setting, we provide a specific example in **Sec.4.1.3 (Metrics and experimental setting)** of the revised version. For example, a building object may be described in the training set as \\u201cA building with white roof near the road\\u201d, but in the test set, the same object may be described as \\u201cAlong with the street, there is a white-roofed building\\u201d.\\n\\n**b) Novel Objects (NO).**\\n\\nThe \\\"Novel Objects\\\" setting means the objects in the test set are unseen in the training set. This variance in both object and text description is likely to lead to different grounding results, even for objects belonging to the same semantic category (e.g., Building or Car).\\n\\nTo clarify the \\\"NO\\\" setting, we provide a specific example in **Sec.4.1.3 (Metrics and experimental setting)** of the revised version. For example, a blue car object may appear in the test set, while the same blue car object and its corresponding text description are not present during the training process.\\n\\n**c) Out-of-distribution (OOD) scenarios.**\\n\\nTo evaluate CityAnchor's ability to recognize \\\"OOD\\\" objects of novel categories, we conducted a generality experiment as illustrated in Figure 8 of the Appendix. In the CityRefer dataset, CityAnchor was trained with only four object categories: Building, Car, Parking, and Ground, and we introduced unknown object categories (such as Road Intersection, Woodland, and River) that CityAnchor has not seen before. 
We evaluated CityAnchor\\u2019s ability to predict the similarity between these unknown objects and both the correct text descriptions (Positive Sample) and incorrect text descriptions (Negative Sample). The grounding results for unknown objects demonstrate that CityAnchor exhibits strong generalization capabilities, enabling it to perform 3D visual grounding across a broader range of objects.\"}", "{\"title\": \"Gentle reminder for Reviewer YNjM\", \"comment\": \"Dear Reviewer YNjM,\\n\\nAs the author-reviewer discussion period is coming to an end, we wanted to kindly remind you of our responses to your valuable comments. We sincerely appreciate your considerate feedback and are eager to address any additional questions or concerns you may have.\\n\\nIf there is any additional information or clarification we could provide to support the discussion, we would be grateful if you could let us know.\\n\\nThank you once again for your considerate feedback and the time you have devoted to reviewing our work.\"}", "{\"comment\": \"Thank you for your further explanation. I have raised my scores.\"}", "{\"title\": \"Gentle reminder for Reviewer 8f4q\", \"comment\": \"Dear Reviewer 8f4q,\\n\\nAs the author-reviewer discussion period is coming to an end, we wanted to kindly remind you of our responses to your valuable comments. We sincerely appreciate your thoughtful feedback and are eager to address any additional questions or concerns you might have.\\n\\nPlease let us know if there's any further information or clarification we can provide to facilitate the discussion process.\\n\\nThank you once again for your encouraging comments and time dedicated to evaluating our work.\"}", "{\"comment\": \"Thanks to the authors for the detailed and comprehensive response! I have some follow-up questions.\", \"re\": \"A2(b) The setting of this experiment is a bit confusing to me. I cannot tell if you are only testing the FMM, or the entire pipeline. 
My question is, it seems CLM relies heavily on an off-the-shelf point cloud segmentation model, so if CLM cannot identify the novel categories, there's not much FMM can do. The most straightforward way to prove the generalizability of your approach might be something like in Fig.4, but with prompts of novel objects/neighborhood/landmarks.\\n\\nAlso a suggestion for figures like Fig.4, Fig.A1, Fig.A2: it would be better if the authors could also draw a target box on the CLM results. Right now I'm having a hard time finding the query object on the map (i.e. cannot verify if CLM is able to correctly include the target in the candidates)\"}" ] }
7mlvOHL6qJ
LASeR: Towards Diversified and Generalizable Robot Design with Large Language Models
[ "Junru Song", "Yang Yang", "Huan Xiao", "Wei Peng", "Wen Yao", "Feifei Wang" ]
Recent advances in Large Language Models (LLMs) have stimulated a significant paradigm shift in evolutionary optimization, where hand-crafted search heuristics are gradually replaced with LLMs serving as intelligent search operators. However, these studies still bear some notable limitations, including a challenge to balance exploitation with exploration, often leading to inferior solution diversity, as well as poor generalizability of problem solving across different task settings. These unsolved issues render the prowess of LLMs in robot design automation largely untapped. In this work, we present LASeR -- Large Language Model-Aided Evolutionary Search for Robot Design Automation. Leveraging a novel reflection mechanism termed DiRect, we elicit more knowledgeable exploratory behaviors from LLMs based on past search trajectories, reshaping the exploration-exploitation tradeoff with dual improvements in optimization efficiency and solution diversity. Additionally, with evolution fully grounded in task-related background information, we unprecedentedly uncover the inter-task reasoning capabilities of LLMs, facilitating generalizable design processes that effectively inspire zero-shot robot proposals for new applications. Our simulated experiments on voxel-based soft robots showcase distinct advantages of LASeR over competitive baselines. Code at https://github.com/WoodySJR/LASeR.
[ "Robot Design Automation", "Large Language Model", "Voxel-Based Soft Robot" ]
Accept (Poster)
https://openreview.net/pdf?id=7mlvOHL6qJ
https://openreview.net/forum?id=7mlvOHL6qJ
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xje9Cs0Qf3", "xfxU3ybXJt", "x607BoBjKB", "wr1eXLvXJO", "wBGVvkjnIz", "w1Hmbt3DgA", "uSLS2EPLMN", "tIkDCflmIb", "pGY3LfXATI", "nDouaAzGU8", "mXvFK9GlX8", "mNH1ydTgA1", "mDhsUTpXpH", "lX265qCEsN", "jAQBy6xAfr", "heNXk8vnc3", "dgblG6LXam", "cejTq3vnjC", "b0GNFmb7vk", "ZrM5PibZvE", "YZwOzptxck", "WrcVN2zYiy", "Ttbcwaxoha", "T7veJfd06Y", "PVl9k39n8Q", "Olz7RjINoK", "O26gOo6ncB", "M9fYmqdhIW", "JJAwYVH1z9", "FkP5dnNnOa", "Eq2AriWtqo", "EUlMnIjnq2", "DNgXcpuVN1", "CrbwqMjzoK", "CPynV2jgZa", "CBGjlb6MRy", "C9MvqpW4j6", "BI8XquFqLw", "7zhPq4wqk1", "7xkWR2ZsLI", "6SchGTjVc3", "4d26lxWOmX", "2CG2LUGk8T", "27IbgDac69", "11lFUBcvAg" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732800428705, 1733292633384, 1732613657028, 1733149380607, 1732612405243, 1732614695832, 1733150933569, 1733150143800, 1732800618889, 1732615286511, 1733169412362, 1732800957211, 1732653704342, 1730496817832, 1732612000051, 1732868461312, 1733066317524, 1732868576822, 1733149231988, 1732654564248, 1732859781926, 1732800243712, 1732613863232, 1732614964164, 1735141319274, 1730689591055, 1732805375528, 1737524282095, 1732615092735, 
1732614036014, 1732613017226, 1732868345176, 1732654227412, 1732615224565, 1733109360253, 1732614485221, 1732655625475, 1733149653968, 1730724941997, 1732613334380, 1730650201888, 1732801491249, 1732656324399, 1732612813430, 1732801517597 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission13792/Authors" ], [ "ICLR.cc/2025/Conference/Submission13792/Authors" ], [ "ICLR.cc/2025/Conference/Submission13792/Authors" ], [ "ICLR.cc/2025/Conference/Submission13792/Authors" ], [ "ICLR.cc/2025/Conference/Submission13792/Authors" ], [ "ICLR.cc/2025/Conference/Submission13792/Authors" ], [ "ICLR.cc/2025/Conference/Submission13792/Authors" ], [ "ICLR.cc/2025/Conference/Submission13792/Authors" ], [ "ICLR.cc/2025/Conference/Submission13792/Authors" ], [ "ICLR.cc/2025/Conference/Submission13792/Authors" ], [ "ICLR.cc/2025/Conference/Submission13792/Reviewer_pDhF" ], [ "ICLR.cc/2025/Conference/Submission13792/Authors" ], [ "ICLR.cc/2025/Conference/Submission13792/Reviewer_drYB" ], [ "ICLR.cc/2025/Conference/Submission13792/Reviewer_drYB" ], [ "ICLR.cc/2025/Conference/Submission13792/Authors" ], [ "ICLR.cc/2025/Conference/Submission13792/Authors" ], [ "ICLR.cc/2025/Conference/Submission13792/Reviewer_5ABx" ], [ "ICLR.cc/2025/Conference/Submission13792/Authors" ], [ "ICLR.cc/2025/Conference/Submission13792/Authors" ], [ "ICLR.cc/2025/Conference/Submission13792/Reviewer_drYB" ], [ "ICLR.cc/2025/Conference/Submission13792/Authors" ], [ "ICLR.cc/2025/Conference/Submission13792/Authors" ], [ "ICLR.cc/2025/Conference/Submission13792/Authors" ], [ "ICLR.cc/2025/Conference/Submission13792/Authors" ], [ "ICLR.cc/2025/Conference/Submission13792/Area_Chair_Wn2J" ], [ "ICLR.cc/2025/Conference/Submission13792/Reviewer_D75R" ], [ "ICLR.cc/2025/Conference/Submission13792/Reviewer_drYB" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission13792/Authors" ], [ "ICLR.cc/2025/Conference/Submission13792/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission13792/Authors" ], [ "ICLR.cc/2025/Conference/Submission13792/Authors" ], [ "ICLR.cc/2025/Conference/Submission13792/Reviewer_drYB" ], [ "ICLR.cc/2025/Conference/Submission13792/Authors" ], [ "ICLR.cc/2025/Conference/Submission13792/Reviewer_D75R" ], [ "ICLR.cc/2025/Conference/Submission13792/Authors" ], [ "ICLR.cc/2025/Conference/Submission13792/Reviewer_drYB" ], [ "ICLR.cc/2025/Conference/Submission13792/Authors" ], [ "ICLR.cc/2025/Conference/Submission13792/Reviewer_pDhF" ], [ "ICLR.cc/2025/Conference/Submission13792/Authors" ], [ "ICLR.cc/2025/Conference/Submission13792/Reviewer_5ABx" ], [ "ICLR.cc/2025/Conference/Submission13792/Authors" ], [ "ICLR.cc/2025/Conference/Submission13792/Reviewer_drYB" ], [ "ICLR.cc/2025/Conference/Submission13792/Authors" ], [ "ICLR.cc/2025/Conference/Submission13792/Authors" ] ], "structured_content_str": [ "{\"title\": \"Further Response to Reviewer drYB by Authors (2/5)\", \"comment\": \"Thank you for the suggestion. We now provide both the **separate measures** (Table 6 and 7 in our latest revised paper) and their **weighted average** (Table 8) in our paper. These results also include two additional SOTA baselines and Catcher-v0 (a hard task). Our finding suggests that **LASeR has more of a advantage in discovering distinct high-performers** than achieving high averaged edit distance. We cannot actually state which approach is more favorable, as both benefit diversity in their own way. However, combining the results of optimization efficiency (i.e. the fitness curves), it is clear that **LASeR better balances exploration with exploitation**.\\n\\nMeanwhile, as we pointed out in Appendix L, we believe that the results in Table 7 are somewhat misleading, because edit distance in itself does not suffice as a valid diversity metric. Even if a group of robot designs has another group as its subset, the former might still have a lower edit distance (even much lower, due to the paradox in Figure 20). 
The comparative results in Table 7 might not be in our favor, but we choose to display them to **reveal an open problem regarding diversity measurement** and hopefully inspire future work to investigate further. \\n\\nWe additionally come up with yet another diversity metric -- **the total number of different voxels between all pairs of high-performing robots** (i.e. edit distance without being averaged). We believe this metric more naturally aggregates distinctiveness and number of distinct high performers **without needing pre-specified weighting coefficients**, thus benefiting from **better interpretability**. The results are reported in **Table 9**, which show that LASeR ranks first in Walker-v0 and second in the remaining tasks. The major competitor here is MorphVAE, which achieves an average rank of 2.5 across four tasks. LASeR, on the other hand, achieves 1.75. This means that, according to this newly proposed metric, **LASeR still achieves the highest overall diversity**.\"}", "{\"comment\": \"We greatly appreciate your positive feedback. Thank you again for your time and efforts!\"}", "{\"title\": \"Response to Reviewer D75R (4/4)\", \"comment\": \">**Question 4**: To further improve this paper, it is better to show the designed robots by LLM and add analysis of the differences between LLM-generated and GA-generated robot designs.\\n\\n**Response**:\\nWe greatly appreciate your suggestion. In response, we have included visualizations of the robot designs evolved by different algorithms in Appendix K of revised paper. Most notably, the robot designs obtained by LASeR exhibit **readily observable diversity**, which supports the quantitative results presented in Section 3.2.1. \\n_____\\n>**Question 5**: While the paper demonstrates inter-task knowledge transfer, how well does LASeR generalize to tasks that are significantly different from the ones used in the experiments? What are the limitations of this generalization? 
\\n\\n**Response**: \\nThank you for the thought-provoking question. In our paper we focused on transferring design experience between task instances that are intuitively similar. These experimental designs are largely based on the structural similarities between tasks as revealed in Wang et al. (2023). Here we demonstrate that **the prior knowledge of task relationships is not strictly necessary for successful inter-task generalization**. Specifically, we conducted additional zero-shot experiments where LLM was given elite samples of Walker-v0, but instructed to propose designs for Jumper-v0, which is a significantly different task (Appendix O of revised paper). Remarkably, the LLM was able to dig deeper into the underlying inter-task associations and identify rather **general, low-level design principles** (such as the importance of actuators for shape change and rigid voxels for structure stability) that are still relevant to the new task. According to the results presented in Appendix O, **the zero-shot designs still outperform randomly sampled ones**. These results suggest that LLMs possess substantial potential for generalizing design experience across seemingly different optimization problems, as long as they share some common ground and are not completely irrelevant to one another. \\n_____\\nThank you again for your time and effort spent reviewing this paper! Your suggestions have been really conducive to our revision. Please let us know if your questions and concerns have been fully addressed. \\n\\n**References**:\\n\\n[1]Bhatia, Jagdeep, et al. \\\"Evolution gym: A large-scale benchmark for evolving soft robots.\\\" Advances in Neural Information Processing Systems 34 (2021): 2201-2214.\\n\\n[2] Hao, Hao, Xiaoqun Zhang, and Aimin Zhou. \\\"Large Language Models as Surrogate Models in Evolutionary Algorithms: A Preliminary Study.\\\" arXiv preprint arXiv:2406.10675 (2024).\\n\\n[3] Lange, Robert, Yingtao Tian, and Yujin Tang. 
\\\"Large language models as evolution strategies.\\\" Proceedings of the Genetic and Evolutionary Computation Conference Companion. 2024.\\n\\n[4] Lim, Bryan, Manon Flageat, and Antoine Cully. \\\"Large Language Models as In-context AI Generators for Quality-Diversity.\\\" arXiv preprint arXiv:2404.15794 (2024).\\n\\n[5] Liu, Tennison, et al. \\\"Large Language Models to Enhance Bayesian Optimization.\\\" The Twelfth International Conference on Learning Representations. 2024. \\n\\n[6] Song, Junru, et al. \\\"MorphVAE: Advancing Morphological Design of Voxel-Based Soft Robots with Variational Autoencoders.\\\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 9. 2024.\\n\\n[7] Wang, Yuxing, et al. \\\"PreCo: Enhancing Generalization in Co-Design of Modular Soft Robots via Brain-Body Pre-Training.\\\" Conference on Robot Learning. PMLR, 2023.\\n\\n[8] Wei, Jason, et al. \\\"Chain-of-thought prompting elicits reasoning in large language models.\\\" Advances in neural information processing systems 35 (2022): 24824-24837.\\n\\n[9] Yang, Chengrun, et al. \\u201cLarge Language Models as Optimizers.\\u201d The Twelfth International Conference on Learning Representations. 2024. \\n\\n[10] Yao, Shunyu, et al. \\\"Tree of thoughts: Deliberate problem solving with large language models.\\\" Advances in Neural Information Processing Systems 36 (2024).\"}", "{\"title\": \"Further Response to Reviewer D75R (2/4)\", \"comment\": \"**Response to 2 and 3:**\\n\\nThank you very much for your suggestions. We agree that the paper will benefit from further clarification regarding the main purpose of introducing diversity. We also concur that a description of the robot design problem should precede the introduction of large language models to foster a more rational narrative logic. Following your advice, we have re-organized the first three paragraphs of the introduction. 
Since the deadline for paper uploading has passed, we include the revised version below.\"}", "{\"title\": \"Response to Reviewer pDhF (2/2)\", \"comment\": \">**Weakness 3**: The experiments in the paper are restricted to relatively simple voxel-based soft robots within predefined settings.\n\n**Response:** \nIn this paper, we adopt the commonly used setup from previous research on voxel-based soft robots (VSRs) (Song et al., 2024; Saito and Oka, 2024; Dong et al., 2024; Wang et al., 2023; Bhatia et al., 2021), specifically the 5x5 design space with 5 distinct materials. This configuration yields a combinatorially vast search space, encompassing approximately $2.98\\times10^{17}$ possible designs. This design space has proven **sufficiently expressive, allowing for the emergence of complex and diverse morphological structures**. We adhere to this setup also to facilitate a **direct comparison** with the majority of studies on VSRs. That being said, our approach is fully scalable to larger design spaces and could also be applied to other types of robots. We conducted an additional experiment in which we used LASeR to evolve 10x10 Walkers (Appendix I of the revised paper), and **the advantage of our approach remains evident**. We note that the current results are averaged across two independent runs. We planned to conduct three runs, but one of them remains unfinished; we will post the complete results once they are available. For your reference, we notice that our experiments on 10x10 Walker-v0 take on average **1.5 times longer** than the 5x5 case. Evaluating the potential of LLMs for larger-scale and more intricate robot design problems will be one of our focuses in future work. \n_____\n>**Weakness 4**: The core method does not involve learning or fine-tuning for LLMs besides PPO utilized for the fitness evaluation. 
\\n\\n**Response:** \\nIn this work, we opt for **the most cost-effective** method of leveraging LLMs for robot design automation, which is through in-context learning or instruction tuning. Our experimental results demonstrate that pre-trained LLMs, once appropriately prompted, can **already achieve excellent optimization performance**. Fine-tuning the LLM parameters would incur substantially higher costs, both in terms of computational resources and the need for a large, carefully curated dataset. Moreover, fine-tuning introduces the risk of issues such as overfitting and catastrophic forgetting, which might negatively impact performances. Nevertheless, we greatly appreciate your insightful comment. We recognize that this is a promising avenue for future exploration, and fine-tuning general-purpose LLMs for various combinatorial optimization tasks represents a valuable research direction. We have included a discussion of this prospect in Appendix P of our revised paper. \\n_____\\n\\n>**Question 2**: In Section 3.2.1, you mentioned that the fitness performance of robot designs would not change significantly after being modified by DiRect. Do you have quantitative results to support this conclusion? \\n\\n**Response**: \\nThank you for your value feedback. We have now included the quantitative results that led to our argument in Section 3.2.1. Concretely, we took Walker-v0 as an example and conducted a two-tailed Student\\u2019s $t$-test to compare the fitness of robot designs before and after DiRect modification. The resulting $p$-value was **0.19**, which exceeds the 0.05 significance threshold, indicating no significant change in fitness. Additionally, we replaced the DiRect mechanism with random mutations, in which case the $p$-value was **less than 0.001**, demonstrating a significant decrease in fitness compared to our diversity reflection mechanism. 
For further discussion and experimental results of LLMs serving as ***intelligent* mutation operators**, the reviewer is referred to Appendix F of the revised paper. \n_____\nThank you again for your time and effort spent reviewing this paper! Your suggestions have been really conducive to our revision. Please let us know if your questions and concerns have been fully addressed. \n\n**References**: \n\n[1] Bhatia, Jagdeep, et al. \"Evolution gym: A large-scale benchmark for evolving soft robots.\" Advances in Neural Information Processing Systems 34 (2021): 2201-2214.\n\n[2] Dong, Heng, Junyu Zhang, and Chongjie Zhang. \"Leveraging Hyperbolic Embeddings for Coarse-to-Fine Robot Design.\" The Twelfth International Conference on Learning Representations. 2024. \n\n[3] Saito, Takumi, and Mizuki Oka. \"Effective Design and Interpretation in Voxel-Based Soft Robotics: A Part Assembly Approach with Bayesian Optimization.\" Artificial Life Conference Proceedings 36. MIT Press, 2024.\n\n[4] Song, Junru, et al. \"MorphVAE: Advancing Morphological Design of Voxel-Based Soft Robots with Variational Autoencoders.\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 9. 2024.\n\n[5] Wang, Yuxing, et al. \"PreCo: Enhancing Generalization in Co-Design of Modular Soft Robots via Brain-Body Pre-Training.\" Conference on Robot Learning. PMLR, 2023.\"}", "{\"title\": \"Response to Reviewer drYB (2/5)\", \"comment\": \">**Weakness 2 (Weak notion of diversity and lacking examples)**\n\n>**Question 1**: How do you justify the proposed measures of diversity and the weighted averaging used? Can you provide other measures? For example, the proportion of bodies made up of different voxel types? \n\n>**Question 2**: Can you provide concrete examples of morphologies that emerged using your method compared with others? 
Is the diversity in these collections immediately observable just by looking at the bodies? \\n\\n**Response**: \\nThank you for the question, and we appreciate this opportunity to clarify our approach towards diversity measurement. First of all, we acknowledge that the number of distinct, high-performing robots is not a comprehensive measure of diversity, as it does not account for the extent of distinctiveness of these robots. Rather, this measure was designed as a ***correction* to prevalent diversity metrics**. Previous studies have predominantly employed two methods for quantifying diversity: (1) averaged measures of distinctiveness within a group of robots, such as per-voxel entropy (Song et al., 2024) or pairwise edit distance (Saito and Oka, 2024); (2) manual categorization of robot designs into distinct classes, followed by the computation of the Simpson index, which is analogous to an entropy of class distribution (Medvet et al., 2021). However, **the latter method becomes impractical** when dealing with more abstract morphologies that lack clear subpopulations. The former approach presents a ***paradox*** (as illustrated in Appendix L in revised paper): adding a new robot design to an existing collection can reduce diversity, even if the new design is distinct, provided that it falls within the existing distribution of this collection. This is **counter-intuitive** because the addition of an alternative should increase, rather than decrease, overall diversity. \\n\\nTo address the above issue, we incorporate the number of distinct robot designs into our measurement as a correction. Thus, our two measures -- **distinctiveness** (through edit distance) and **the number of distinct designs** -- complement each other, providing a **more comprehensive and reasonable** characterization of diversity. 
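As a concrete sketch (with hypothetical helper names; the exact aggregation reported in the paper may differ slightly), the two complementary measures can be combined as:

```python
def edit_distance(a, b):
    """Number of differing voxels between two flattened 5x5 designs."""
    return sum(1 for va, vb in zip(a, b) if va != vb)

def diversity(designs):
    """Distinctiveness (mean pairwise edit distance, weight 1.0) plus the
    number of distinct designs (weight 0.1, bringing it onto the same scale)."""
    pairs = [(a, b) for i, a in enumerate(designs) for b in designs[i + 1:]]
    mean_dist = sum(edit_distance(a, b) for a, b in pairs) / len(pairs)
    n_distinct = len({tuple(d) for d in designs})
    return 1.0 * mean_dist + 0.1 * n_distinct
```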
**To clarify, the edit distance measure amounts to counting the number of different voxels between a pair of robot designs, and this should be equivalent to \\u201cthe proportion of bodies made up of different voxel types\\u201d that you mentioned.** \\n\\nWhile our approach takes both distinctiveness and the number of robots into account, we acknowledge that the weights assigned to these quantities are somewhat expedient and primarily intended to bring them onto the same scale. Specifically, we selected weights of 1.0 and 0.1 based on preliminary experiments, where we found that the number of distinct high-performing designs obtained in a single run of experiment ranged **from several dozen to around two hundred**, while the edit distance is defined to range **between 0 and 25**. Given the lack of universally accepted metrics for morphological diversity, we hope our approach will inspire future work to propose more reasonable and comprehensive measurements. In response to your suggestion, we have also visualized the robot designs evolved by different methods to enable a more straightforward and intuitive comparison of diversity (Appendix K of revised paper).\"}", "{\"comment\": \"We deeply appreciate your recognition, which is very encouraging. Thank you again for your time and effort!\"}", "{\"title\": \"Further Response to Reviewer D75R (4/4)\", \"comment\": \"**We hope that the above responses have fully addressed your questions and concerns. Once again, please allow us to express our sincere gratitude for your time and effort dedicated into reviewing our paper.**\\n\\n**References:**\\n\\n[1] Achiam, Josh, et al. \\\"Gpt-4 technical report.\\\" arXiv preprint arXiv:2303.08774 (2023).\\n\\n[2] Bhatia, Jagdeep,et al. \\\"Evolution gym: A large-scale benchmark for evolving soft robots.\\\" Advances in Neural Information Processing Systems 34 (2021): 2201-2214.\\n\\n[3] Cheney, Nick, et al. 
\\\"Unshackling evolution: evolving soft robots with multiple materials and a powerful generative encoding.\\\" ACM SIGEVOlution 7.1 (2014): 11-23.\\n\\n[4] Brahmachary, Shuvayan, et al. \\\"Large Language Model-Based Evolutionary Optimizer: Reasoning with elitism.\\\" arXiv preprint arXiv:2403.02054 (2024).\\n\\n[5] Chocron, Olivier, and Philippe Bidaud. \\\"Evolutionary algorithms in kinematic design of robotic systems.\\\" Proceedings of the 1997 IEEE/RSJ International Conference on Intelligent Robot and Systems,1997.\\n\\n[6] Tilman, David, et al. 2017. Future threats to biodiversity and pathways to their prevention. Nature 546, 7656 (2017), 73\\u201381.\\n\\n[7] Hiller, Jonathan, and Hod Lipson. \\\"Automatic design and manufacture of soft robots.\\\" IEEE Transactions on Robotics 28.2 (2011): 457-466.\\n\\n[8] Hu, Jiaheng, et al. \\\"Modular robot design optimization with generative adversarial networks.\\\" 2022 International Conference on Robotics and Automation (ICRA). IEEE, 2022.\\n\\n[9] Hu, Jiaheng, Julian Whitman, and Howie Choset. \\\"GLSO: grammar-guided latent space optimization for sample-efficient robot design automation.\\\" Conference on Robot Learning. PMLR, 2023.\\n\\n[10] Huang, Beichen, et al. \\\"Exploring the True Potential: Evaluating the Black-box Optimization Capability of Large Language Models.\\\" arXiv preprint arXiv:2404.06290 (2024a).\\n\\n[11] Huang, Sen, et al. \\\"When Large Language Model Meets Optimization.\\\" arXiv preprint arXiv:2405.10098 (2024b).\\n\\n[12] Karine Miras, Eliseo Ferrante, and AE Eiben. 2020. Environmental influences on evolvable robots. PloS one 15, 5 (2020).\\n\\n[13] Lange, Robert, Yingtao Tian, and Yujin Tang. \\\"Large language models as evolution strategies.\\\" Proceedings of the Genetic and Evolutionary Computation Conference Companion. 2024.\\n\\n[14] Lehman, Joel, et al. \\\"Evolution through large models.\\\" Handbook of Evolutionary Machine Learning. Singapore: Springer Nature Singapore, 2023. 
331-366.\\n\\n[15] Liu, Shengcai, et al. \\\"Large language models as evolutionary optimizers.\\\" 2024 IEEE Congress on Evolutionary Computation (CEC). IEEE, 2024a.\\n\\n[16] Liu, Tennison, et al. \\\"Large Language Models to Enhance Bayesian Optimization.\\\" The Twelfth International Conference on Learning Representations, 2024b.\\n\\n[17] Medvet, Eric, et al. \\\"Biodiversity in evolved voxel-based soft robots.\\\" Proceedings of the Genetic and Evolutionary Computation Conference. 2021.\\n\\n[18] Morris, Clint, Michael Jurado, and Jason Zutty. \\\"Llm guided evolution-the automation of models advancing models.\\\" Proceedings of the Genetic and Evolutionary Computation Conference. 2024.\\n\\n[19] Qiu, Kevin, et al. \\\"RoboMorph: Evolving Robot Morphology using Large Language Models.\\\" arXiv preprint arXiv:2407.08626 (2024).\\n\\n[20] Romera-Paredes, Bernardino, et al. \\\"Mathematical discoveries from program search with large language models.\\\" Nature 625.7995 (2024): 468-475.\\n\\n[21] Sims, Karl. \\\"Evolving virtual creatures.\\\" Seminal Graphics Papers: Pushing the Boundaries, Volume 2. 2023. 699-706.\\n\\n[22] Leger, Chris. Darwin2K: An evolutionary approach to automated design for robotics. Vol. 574. Springer Science & Business Media, 2012.\\n\\n[23] Song, Junru, et al. \\\"MorphVAE: Advancing Morphological Design of Voxel-Based Soft Robots with Variational Autoencoders.\\\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 9. 2024a.\\n\\n[24] Team, Gemini, et al. \\\"Gemini: a family of highly capable multimodal models.\\\" arXiv preprint arXiv:2312.11805 (2023).\\n\\n[25] Team, InternLM. \\\"Internlm: A multilingual language model with progressively enhanced capabilities.\\\" 2023-01-06)[2023-09-27]. https://github. com/InternLM/InternLM (2023).\\n\\n[26] Touvron, Hugo, et al. \\\"Llama 2: Open foundation and fine-tuned chat models.\\\" arXiv preprint arXiv:2307.09288 (2023).\\n\\n[27] Tran, Thanh VT, and Truong Son Hy. 
\\\"Protein design by directed evolution guided by large language models.\\\" IEEE Transactions on Evolutionary Computation (2024).\\n\\n[28] Wang, Tingwu, et al. \\\"Neural graph evolution: Towards efficient automatic robot design.\\\" arXiv preprint arXiv:1906.05370 (2019).\\n\\n[29]Yang, Chengrun, et al. \\\"Large Language Models as Optimizers.\\\" The Twelfth International Conference on Learning Representations, 2024.\\n\\n[30] Ye, Haoran, et al. \\\"Large language models as hyper-heuristics for combinatorial optimization.\\\" arXiv preprint arXiv:2402.01145 (2024).\\n\\n[31] Zhang, Lechen. \\\"CUDA-Accelerated Soft Robot Neural Evolution with Large Language Model Supervision.\\\" arXiv preprint arXiv:2405.00698 (2024).\"}", "{\"title\": \"Further Response to Reviewer drYB by Authors (3/5)\", \"comment\": \"We implemented random editing by substituting the diversity reflection (DiRect) mechanism with random voxel mutation, which is supported by a built-in function of EvoGym. Specifically, we found that the number of voxels edited by DiRect in each design is about **2.61** on average. Thus, for random mutation, we set the mutation rate to be 0.1, i.e. each voxel will, with a probability of **0.1**, be randomly replaced by a different material. Given that a robot design consists of 25 voxels, this results in **2.5** voxels being edited on average, which we believe is reasonably close to DiRect editing.\\n\\nIn Figure 13, we observe that both LASeR w/ DiRect and LASeR w/ random editing can reach a comparable level of fitness during early stage of evolution. This once again reflects the bottleneck effect explained in the response to Question 1. However, LASeR w/ DiRect demonstrated **continuous improvements** after this, whereas LASeR w/ random editing **stagnated a lot**, represented by the long segments of flat fitness curve. 
These observations suggest that the proposed diversity reflection mechanism better facilitates fine-grained optimization of voxel combinations due to its \u201c***informed***\u201d nature. The \u201cnegligible\u201d performance difference is primarily due to how the reward function is defined, as well as where the true difficulty of a task lies (as explained in the response to Question 1). In Walker-v0, the reward function is defined as the distance travelled in a given time. However, we notice that even randomly generated robot morphologies can achieve a decent travelling distance with optimized control policies. It is, instead, **the further improvements beyond 10.6 that reflect how well a morphology is truly adapted to locomotion**. But we do agree that more repeated experiments are needed to verify the statistical significance of our superiority. We will post the results once these experiments are finished. \n\nThe dashed red line improves because random editing here is **only replacing the DiRect mechanism** rather than the whole search algorithm. That is, we are still using the LLM as the search operator that generates offspring solutions and using natural selection to keep the top-ranking solutions among them. Random editing plays the part of DiRect by introducing variability into offspring solutions when they overly resemble evaluated ones, except that it relies on random mutation rather than a reflection mechanism. We implemented this set of experiments mainly to show that **the proposed diversity reflection mechanism is indeed introducing variability in a more *intelligent* way**.\"}", "{\"title\": \"General Response\", \"comment\": \"Dear reviewers, area chairs and program chairs,\n\nWe sincerely thank you for the time and effort that you spent reviewing our paper. We very much appreciate the reviewers\u2019 recognition of our contributions, which we briefly summarize as follows. 
\\n\\n>**Novelty:** \\u201cThe proposed framework is novel, and the usage of LLMs in robot design and their evolution is under-researched\\u201d (Reviewer 5ABx); \\u201cThis paper demonstrates originality by developing a mechanism aimed at solving lack of diversity in LLM-aided robot design processes... The idea of inter-task reasoning also adds to originality\\u201d (Reviewer drYB). \\n\\n>**Presentation:** \\u201cThe paper is clear and well-structured\\u201d (Reviewer pDhF); \\u201c...clearly set out intended contributions, scientific hypotheses and experiments... Figures are high quality and aesthetically pleasant...covering a fairly broad portion of related literature\\u201d (Reviewer drYB); \\u201cThe paper is well written and easy to follow\\u201d (Reviewer drYB). \\n\\n>**Effectiveness and significance:** \\u201c...effectively leverage the reasoning and decision-making capabilities of LLMs to improve the inter-task transfer propagability\\u201d (Reviewer D75R). \\u201cThe conclusions the paper makes, and its applications are relevant to the robot learning community\\u201d (Reviewer 5ABx); \\u201cJoint design and control of robotic systems is an important problem... research and development in this area is crucial to advance the field\\u201d (Reviewer drYB). \\n\\n>**Experiments:** \\u201cExtensive experimental results are provided\\u201d (Reviewer pDhF); \\u201cThe performed ablation studies are very interesting and insightful\\u201d (Reviewer 5ABx). \\n\\nWe refer the reviewers to the **individual responses** where we addressed your questions in detail. We have also made revisions to our paper accordingly (in blue font). Please let us know if your concerns are fully addressed or if there is any difficulty accessing our revised paper. 
Once again, please allow us to express our sincere gratitude for your valuable suggestions and feedback, which have been really conducive to our revisions.\"}", "{\"comment\": \"I sincerely thank the authors for the detailed responses and additional experimental results, which solve my main concerns. I have raised my score accordingly.\"}", "{\"title\": \"Further Response to Reviewer drYB by Authors (4/5)\", \"comment\": \"Thank you for the constructive feedback. Following your advice, we plot 95% confidence intervals (i.e. mean\u00b11.96std) rather than one standard deviation in Figure 16. We also conduct hypothesis testing (specifically a two-tailed $t$-test) to establish statistical significance. Since in this work we have chosen **a sufficiently large number of robot evaluations** (i.e. 1000) to give ample opportunity to all algorithms to converge, it becomes more relevant to **compare the convergence speed** rather than entire fitness curves. To this end, we first average the eventual fitness values obtained by all repeated experiments (denoted as $f$) within a given task, and then record the number of evaluations that each experiment took to reach this average fitness (denoted as $n$). For those that did not reach $f$, $n$ is simply recorded as 1000. We then conduct a two-tailed $t$-test to compare the $n$\u2019s of different algorithms. For Carrier-v0, $f$ is 10.69, and $n$ is on average **719.2** and **979** for LASeR and LLM-Tuner, respectively (**$p$=0.029**). For Pusher-v0, $f$ is 12.95, and $n$ is on average **528** and **888.6**, with **$p$=0.054**. For Walker-v0, since none of the experiments of LLM-Tuner reach $f$=10.65, we instead compare the eventual fitness values achieved by LASeR and LLM-Tuner, which are on average **10.67** and **10.63**, with **$p$<0.001**. 
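For clarity, the statistic $n$ described above can be computed as in the following sketch (hypothetical function names; `fitness_curve[i]` is assumed to hold the best fitness after evaluation $i+1$):

```python
def evals_to_reach(fitness_curve, target, cap=1000):
    """Number of evaluations a run needed to first reach `target`;
    `cap` (the evaluation budget) if it never does."""
    for i, fit in enumerate(fitness_curve, start=1):
        if fit >= target:
            return i
    return cap

def convergence_stat(runs, cap=1000):
    """Average the eventual fitness f across repeated runs, then record
    n, the number of evaluations each run took to reach f."""
    f = sum(r[-1] for r in runs) / len(runs)
    return f, [evals_to_reach(r, f, cap) for r in runs]
```

The resulting lists of $n$ for two algorithms can then be compared with a standard two-tailed $t$-test.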
Since we are comparing against the best-performing baseline (which itself is **SOTA without much room left for improvement**), we believe the above analysis confirms the significant advantage of LASeR **in terms of optimization efficiency**. With regard to diversity, we notice that both LASeR and LLM-Tuner (as well as many other baselines) exhibit high variability in their diversity outcomes. So more than 5 repeated experiments would be needed to prove statistical significance. However, given that **LASeR nearly consistently achieves top-ranking diversity across different tasks** (as seen in Table 1 and 9), we believe this largely verifies the robustness of its advantage. Nevertheless, we will continue with additional repeated trials.\\n\\n>\\u201cHowever, we notice that both LASeR and LLM-Tuner exhibit relatively high variability in their diversity outcomes, suggesting that even more repetitions would be needed to establish statistical significance. Therefore, we will continue with additional repeated experiments.\\u201d \\n\\nOur above argument is with respect to diversity, rather than optimization efficiency. Sorry for causing the misunderstanding.\"}", "{\"comment\": \"Please clarify how random voxel editing was implemented.\\n\\nHow did the number of edited voxels compare to the number edited by the LLM operator? \\n\\nHow does Fig 13 represent superiority of an LLM operator over random mutations? \\n\\nThe performance difference seems negligible, and with a single trial and large differences in initialization, it is not so convincing. \\n\\nAlso, if the random editing made designs worse then why does performance of the dashed red line seem to improve over evolutionary time?\"}", "{\"summary\": \"This work proposes and evaluates an LLM-based evolutionary operator for robot design. 
The proposed method distinguishes itself from prior related work by incorporating an explicit mechanism for \\u201creflection\\u201d and \\u201cinter-task knowledge transfer\\u201d where the former intends to balance exploitation and exploration and the latter intends to exploit existing robot design datasets whilst leveraging LLMs ability for inter-task reasoning.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Originality\\n\\nThis paper demonstrates originality by identifying the need for and explicitly developing a mechanism aimed at solving lack of diversity in LLM-aided robot design processes. The idea of using existing robot design datasets and inter-task reasoning to transfer or modify it also adds originality to this work. \\n\\nQuality\\n\\nThe authors clearly set out their intended contributions and state their scientific hypotheses as well as the experiments they intend to test these hypotheses. The figures throughout the paper are high quality and aesthetically pleasant. The authors do a good job of covering a fairly broad portion of the related literature and providing motivations for their own work. \\n\\nClarity\\n\\nThe paper is well-written and easy to follow. The methods and results figures are easy to interpret. The main method figure is done particularly well and makes it easy to understand the relatively involved, multi-step process that is the proposed algorithm. The tables throughout the paper are also well-labeled without superfluous text. \\n\\nSignificance\\n\\nJoint design and control of robotic systems is an important problem and provides a canonical example of a combinatorially explosive design space automated methods aspire to solve. 
The lack of diversity or tendency towards local optima in the morphological design process is a known limitation of evolutionary robotics broadly, so research and development in this area is crucial to advance the field.\", \"weaknesses\": \"Small design space\n\nA 5x5, two-dimensional design space makes it difficult to discover interesting structures or behaviors, rendering the implications for design (let alone robotics) somewhat unclear. The search space is much smaller than most work over the past three decades, which has been in 2D but at much higher resolution with hundreds of independent motors (https://www.roboticsproceedings.org/rss20/p100.html) or in 3D with hundreds or thousands of voxels (https://www.creativemachineslab.com/soft-robot-evolution.html). The paper would be much more compelling if LLMs were operating over a 3D space where variety in gait patterns more readily appears and the control complexity increases substantially. \n\nWeak notions of diversity and lacking examples\n\nThe paper proposes to measure diversity in terms of the voxel space edit distance of robots and the number of distinct, high-performing robots. The latter is likely a poor measure of diversity as two robots can be highly similar while remaining distinct in terms of the precise voxel layout, and the former is difficult to interpret. Moreover, the paper provides no examples of the morphologies (and diversity) discovered by their algorithm. The overall diversity measure is the weighted average of these two metrics with weights of 1.0 and 0.1. There is no rationale for the selection of these weights outside of an anecdote that weighting the latter by 0.1 makes the two \u201croughly on the same scale and given equal importance\u201d. 
\\n\\nDiversity reflection incomplete ablation \\n\\nFollowing up on the above point, the paper reports an ablation study to test the effectiveness of their diversity reflection mechanism; however, the ablation does not elucidate whether the LLM is actually providing intelligent mutations that encourage diversity. An additional ablation study should be run wherein random mutations are made to existing designs (in parallel to LLM guided mutations). This would help demonstrate whether the diversity reflection mechanism represents an intelligent operator. \\n\\nEarly convergence\\n\\nAlso related to diversity, the proposed algorithm appears to converge very early relative to some other baselines in most cases. This appears to be an indication that the algorithm may be stuck in a local optima, or is it closer to a global optima? If allowed to run for longer would the other methods arrive at a similarly high performing result? If it is indeed discovering something that is close to globally optimal then the fast convergence should indicate that this task (read design space) is too easy. \\n\\nIt also appears that when using the diversity reflection component the algorithm converges faster, which seems somewhat counter-intuitive as one would expect greater exploration to produce longer convergence times. The fact that it does not leads back to a prior question as to whether the LLM is truly modifying the design in intelligent ways that ultimately produce meaningful diversity in the population. \\n\\nMarginal gains in performance\\n\\nWhen all is said and done the proposed method produces marginal gains in performance relative to baselines. In the ablation study with and without diversity reflection performance also does not change substantially whereas the diversity does improve significantly. The diversity metric itself remains questionable (see above). 
\\n\\nThe knowledge transfer mechanism also does not appear to itself demonstrate meaningful improvements relative to the LASeR without knowledge transfer. \\n\\nReproducibility\\n\\nThe paper states that all experiments are conducted three times and the results are averaged. Since the performance gains are relatively small, additional trials are necessary to demonstrate statistical significance of the results. \\n\\nMissing related work\\n\\nThis paper fails to discuss, compare and contrast their work with other similar methods that use LLMs to design robots, for example: https://link.springer.com/chapter/10.1007/978-981-99-3814-8_11\\n\\nChoices about the control policy and its training also bears at the very least some discussion and comparison to other evolutionary robotics approaches that employ other methods, such as gradient based optimization (https://www.roboticsproceedings.org/rss20/p100.html), for the control problem.\", \"questions\": \"1. How do you justify the proposed measures of diversity and the weighted averaging used? Can you provide other measures? For example, the proportion of bodies made up of different voxel types?\\n\\n2. Can you provide concrete examples of morphologies that emerged using your method compared with others? Is the diversity in these collections immediately observable just by looking at the bodies? \\n\\n3. Can you run these experiments again several times over to provide more meaningful performance measures, confidence intervals and statistical hypothesis testing? \\n\\n4. How do you explain the early convergence of your method when a primary claim relates to encouraging population level diversity and design exploration? \\n\\n5. Can you perform additional ablation studies of the diversity reflection aspect? For example, randomly edit voxels and compare results to LLM-based editing? Can you catalog examples of LLM edits that encourage diversity through an evolutionary lineage? \\n\\n6. 
Can you run experiments with a larger design space? Perhaps 9x9?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer pDhF (1/2)\", \"comment\": \"Dear reviewer,\\n\\nThank you for your constructive feedback. Below is our detailed response to the questions you raised. Please let us know if you have any trouble accessing our revised paper. \\n\\n>**Weakness 1**: The experiments of the paper did not mention the time taken of the LLM-based methods for the evolutionary algorithm, which should be considered into the evaluation of robot design efficiency. \\n\\n>**Question 3**: How does the computation time of LASeR compare to the baselines, including LLM-Tuner and the ones without using LLMs? \\n\\n**Response:** \\nThank you for pointing out this important issue. According to the data released on LLM Leaderboard (https://artificialanalysis.ai/leaderboards/models), for GPT-4o-mini, the median rate of output token generation is 99.8 tokens per second, and the latency (i.e. time to first token) is reported as 0.5 seconds. Given that LASeR makes an average of 130 API calls per generation, with each call involving approximately 180 output tokens (here we assume the worst case where each newly generated robot design triggers DiRect), this results in an overhead of around 5 minutes per generation, or 5 hours in total. \\n\\nThe latency issue could be mitigated with locally deployed LLMs, which are less affected by network delays and request queuing. However, we believe it is more pertinent to compare the overall running time of different methods, with optimization efficiency taken into account. Specifically, by checking the log messages of our programs, we find that, for Carrier-v0, in order to reach the same level of fitness, LASeR requires **7 hours**, in contrast to the most competitive baseline, LLM-Tuner, which takes **15 hours**. 
For Pusher-v0, the difference is greater: LASeR requires **11 hours**, whereas LLM-Tuner takes **46 hours**. On Walker-v0, LASeR is even capable of **reaching a fitness unattainable by baselines**. Hence, the rapid convergence enabled by LLMs outweighs the additional computation overhead, **rendering the latter perfectly worthwhile**. We have also included this analysis of computational efficiency in Appendix Q of revised paper. \\n_____\\n>**Weakness 2**: The similarity threshold seems to be an important hyperparameter for the proposed method, while it is not discussed in the paper. \\n\\n>**Question 1**: What considerations were taken when selecting similarity thresholds? How do you balance the diversity and performance of the generated designs? \\n\\n**Response:** \\nThe similarity threshold is indeed a crucial hyper-parameter that modulates the performance of LASeR. The choice of this threshold reflects the extent of diversity that one expects to see in the evolved solutions, and therefore **should be driven by the user\\u2019s specific preferences**. For instance, setting the threshold to 20 (as we did in our experiments) means that if a newly generated robot design shares more than 20 identical voxels with any existing solutions, it will undergo modifications by DiRect. \\n\\nThere are some **general principles** for choosing this parameter. These principles are supported by our additional experiments with several different values of threshold (as shown in Appendix N of revised paper). High similarity thresholds, like threshold=23, are generally not recommended, as they would hinder the beneficial exploration enabled by LLMs. Conversely, excessively low thresholds (such as threshold=15) might increase diversity but also risk overly aggressive exploration that compromises functionality and, in turn, harms optimization efficiency. 
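As an illustration (with hypothetical function names), the trigger condition governed by this threshold can be sketched as:

```python
def shared_voxels(a, b):
    """Number of identical voxels between two flattened 5x5 designs."""
    return sum(1 for va, vb in zip(a, b) if va == vb)

def triggers_direct(new_design, evaluated, threshold=20):
    """A newly generated design undergoes DiRect modification if it shares
    more than `threshold` voxels with any previously evaluated design."""
    return any(shared_voxels(new_design, d) > threshold for d in evaluated)
```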
We believe this is due to the poor extrapolation performance of LLMs when required to propose robot designs that differ substantially from the given examples. Any moderate value in between should give rise to desirable performance. In fact, our findings suggest that a threshold of 18 leads to further performance gains beyond the value of 20 chosen in our study. However, we note that lower thresholds also more frequently trigger DiRect, which means more LLM API calls. Hence, **the threshold choice also involves a trade-off between evolutionary performance** (including both optimization efficiency and diversity) and **computational costs**, and **should be considered case by case**. We believe adaptive threshold scheduling, based on problem specifics and evolutionary outcomes, could be a promising direction for future research. \\n_____\"}", "{\"title\": \"We look forward to your feedback!\", \"comment\": [\"Dear Reviewer D75R,\", \"Thank you so much for your thoughtful suggestions. We have made revisions in our re-uploaded paper according to your feedback, and the changes are outlined as follows. We believe these adjustments would more effectively clarify and highlight our contributions.\", \"Included demonstrations of interpretable LLM decision-making processes in Appendix F, J and O;\", \"Included finer-grained ablation studies on our prompt, as well as the intuitions behind the prompt design, in Appendix M;\", \"Included comparisons with two more state-of-the-art baselines in Appendix G;\", \"Included evaluations on a larger design space in Appendix J, as well as on a more complex task in Appendix E;\", \"Visualized robot designs evolved by different algorithms in Appendix K to enable easier comparison;\", \"Evaluated the capability of our approach to transfer experience across different task instances in Appendix O.\", \"We hope we have addressed all of your concerns regarding the value of our work. 
With the discussion period deadline approaching, we eagerly look forward to any further questions or comments you may have. We sincerely hope that our responses and revisions will assist you in re-evaluating our paper. Once again, we deeply appreciate the time and effort you have dedicated to reviewing our work. Happy Thanksgiving!\"]}", "{\"comment\": \"I thank the authors for their responses to my raised questions and concerns.\\nOverall, after the extensive discussion here I am still convinced that this paper is a valuable contribution and presents an interesting first approach to using LLMs in a more meaningful way in the robot design space, and I remain positive on the recommendation to accept the paper.\"}", "{\"title\": \"We look forward to your feedback!\", \"comment\": [\"Dear Reviewer 5ABx,\", \"We deeply appreciate your favorable comments on our work, which have been very encouraging. We have revised our paper according to your thoughtful suggestions, and the modifications are listed as follows.\", \"Included evaluations on a larger design space in Appendix J, as well as on a more complex task in Appendix E;\", \"Included results from additional repeated experiments of LASeR and LLM-Tuner (the most competitive baseline) in Appendix H. These results are accompanied by significance tests to demonstrate the robust superiority of our approach;\", \"Expanded our discussion of limitations and several open problems for future research in Appendix P;\", \"Cited the highly relevant and inspiring article in Related Work (Section 2.2).\", \"We hope that with our responses and revisions, we have fully addressed all your questions and concerns. We also eagerly look forward to any further questions or comments you may have. Once again, we deeply appreciate the time and effort you have dedicated to reviewing our work. 
Happy Thanksgiving!\"]}", "{\"title\": \"Further Response to Reviewer D75R (1/4)\", \"comment\": \"Dear Reviewer D75R,\\n\\nWe greatly appreciate your insightful feedback, which is very helpful to our revision. Below is our response to the concerns you raised. \\n\\n**Response to 1:** \\n\\nWe share your view that *Evolution Through Large Models* (ELM) is one of the most representative and, to our knowledge, earliest works on LLM-based evolutionary optimization. However, we did not include ELM as one of our baselines because it was originally proposed for designing Sodaracers through **code optimization**, which necessitates a program interface that maps between robot designs and Python code. Such a mapping becomes **ambiguous and difficult to define** in the context of voxel-based soft robots (VSRs). Consequently, to further verify the effectiveness of our proposed method, we performed additional comparisons with another evolutionary approach that uses LLMs as mutation operators, **OPRO** (Yang et al., 2024), which is more recent and operates within the original solution space as we do. The results (presented in Appendix G in our re-uploaded paper) still demonstrate a clear advantage of LASeR. Nevertheless, to acknowledge the significance of this pioneering work, we have cited ELM (Lehman et al., 2023) in Section 2.1, and it will be one of our focuses in future work to devise an appropriate code-level solution space for VSRs, both to examine the potential impact of different solution representations and to allow for a more direct comparison with ELM.\"}", "{\"comment\": \"Acknowledging that diversity is difficult to quantify is helpful; thank you for adding it to the paper. However, the arbitrary weighting coefficients render the employed metric difficult to interpret. I'd suggest reporting diversity in terms of edit distance, # of distinct high performers, and the weighted average separately so the reader can compare different methods. 
Demonstrating that LASeR outperforms on all three would be convincing, and where LASeR underperforms there may be insight into the limitations of the existing metrics.\"}", "{\"title\": \"Response by Authors\", \"comment\": \"We greatly appreciate your recognition of our work, which is very encouraging. Thank you again for your time and effort spent reviewing this paper.\"}", "{\"title\": \"Further Response to Reviewer drYB by Authors (1/5)\", \"comment\": \"Dear reviewer,\\nWe very much appreciate your active participation in the discussion, and the questions you raised are very instructive. Our responses are as follows. \\n\\nWe appreciate this opportunity to clarify that the absolute difference in fitness largely depends on **how the reward function of the task instance is defined**, as well as **where the true difficulty of a certain task lies**. From our observation, locomotion tasks in EvoGym, regardless of the scale of design spaces, generally exhibit these characteristics: (a) Even randomly initialized populations could achieve **decent fitness levels**, due to the powerful RL algorithms and the relatively low requirement for a robot design to pick up at least some form of walking gait (it remains to be verified whether this will hold in even larger design spaces, such as 100x100); (b) However, most of the challenge lies in whether, once a design algorithm reaches a certain level of fitness, it will be able to **break through the bottleneck** and achieve further performance gains (this bottleneck effect is also demonstrated in Song et al. (2024)). The latter aspect demands high proficiency from a search algorithm to conduct rather **fine-grained optimization w.r.t. voxel placements**. This explains why on Walker-v0 (both 5x5 in Figures 13, 15, and 16 and 10x10 in Figure 17), different algorithms typically reach very similar fitness levels in early stages of evolution. But **it is instead the nuanced differences thereafter that truly reflect optimization efficiency**. 
On the contrary, we note that object manipulation tasks (such as Pusher-v0, Carrier-v0 and Catcher-v0) have the opposite nature: they typically require **more coarse-grained** (but not necessarily easy-to-find) functional substructures to achieve optimal task performance, and hence **the early-stage convergence speed deserves more attention**. To further establish the statistical significance of our approach and render our evaluations more convincing, we will conduct more repeated experiments and post the results once they are available.\"}", "{\"title\": \"Response to Reviewer 5ABx (1/2)\", \"comment\": \"Dear reviewer,\\n\\nWe sincerely appreciate your recognition of the significance of our research problem and the contributions of our work. Please let us know if you have any trouble accessing our revised paper. \\n\\n**Response to Weakness 1**: \\n\\nThank you for the comment, which makes perfect sense. In this work, we chose the relatively simple experimental setup, namely a 5x5 body size with 5 different material types, mainly to facilitate **an easier comparison** with the majority of related studies in this field, as it is commonly adopted in previous Voxel-based Soft Robot (VSR) research (Song et al., 2024; Saito and Oka, 2024; Dong et al., 2024; Wang et al., 2023). We believe the primary reason why they stick with this configuration is that it results in a combinatorially vast design space with approximately $2.98\\times10^{17}$ possible solutions, already representing a challenging optimization problem. Meanwhile, it is also **expressive enough to generate complex and diverse morphological structures**. That being said, we find that our approach is readily adaptable to larger design spaces. Specifically, we performed additional evaluations on 10x10 Walker-v0, and observed that LASeR still **holds a notable advantage** (Appendix I of revised paper). We look forward to seeing our approach applied to broader scenarios in future work. 
Furthermore, we conducted additional experiments on Catcher-v0 (Appendix E of revised paper), one of the most challenging tasks in EvoGym, and observed that our approach **continues to demonstrate significant advantages**. These promising results reveal the potential of our approach to scale to even larger and more complex design problems.\"}", "{\"title\": \"Response to Reviewer drYB (3/5)\", \"comment\": \">**Weakness 3 (Diversity reflection incomplete ablation)**\\n\\n>**Question 5**: Can you perform additional ablation studies of the diversity reflection aspect? For example, randomly edit voxels and compare results to LLM-based editing? Can you catalog examples of LLM edits that encourage diversity through an evolutionary lineage? \\n\\n**Response**:\\nWe appreciate your comment, which is really thought-provoking. We agree that comparing our diversity reflection mechanism with random editing would provide a more compelling evaluation. In response, we re-implemented our experiments with DiRect replaced by random voxel mutations. We found that for DiRect, the fitnesses of robot designs before and after DiRect modification show **no significant difference** (**$p$=0.19**). In contrast, the fitnesses of randomly mutated robot designs are **significantly lower** than their pre-editing counterparts (**$p$<0.001**). The evolution with random editing also suffers from **reduced optimization efficiency** (Appendix F in revised paper), as the \\u201cuninformed\\u201d exploratory behavior often disrupts essential functional structures. LLM-aided diversity reflection, on the other hand, builds on successful designs discovered along the evolutionary trajectory, and hence **promotes exploration without compromising optimization efficiency**. We believe this also addresses the question you raised in the second paragraph of Weakness 4 (early convergence). 
Specifically, the \\u201c***informed***\\u201d rather than random mutations of the LLM are the key reason for the faster convergence, as they promote more thorough exploration of the design space while keeping the functionality of robot designs largely intact. \\n\\nTo further address your concerns and demonstrate that the LLM is indeed providing intelligent mutations, we present **some examples of diversity reflection** throughout the evolutionary process (Appendix F in revised paper). These examples include both the pre- and post-editing morphologies, along with explanations provided by the LLM for its decision making. We find that the LLM is able to **identify critical substructures** within robot designs and **modify only the voxel placements that do not affect functionality** yet promote diversity. We believe these results provide strong evidence that the diversity reflection mechanism is reliably functioning as an ***intelligent* mutation operator**. \\n_____\\n>**Weakness 4 (Early convergence)**\\n\\n>**Question 4**: How do you explain the early convergence of your method when a primary claim relates to encouraging population level diversity and design exploration? \\n\\n**Response**: \\nThank you for your feedback. To address your concern about early convergence, we re-implemented LLM-Tuner, the most competitive baseline, for 2000 robot evaluations -- double the number used in our original experiments -- across three independent repetitions. The averaged results are presented in Appendix D of revised paper. Notably, **LLM-Tuner does not end up with higher fitness levels** than those achieved by LASeR, largely confirming that our algorithm **is not stuck in local optima**. 
Meanwhile, the evidently slower convergence of LLM-Tuner (especially pronounced in Walker-v0 and Pusher-v0) indicates that our fast convergence is more likely due to the **effectiveness afforded by LLM-aided evolution and diversity reflection**, rather than an artifact of task difficulty.\"}", "{\"metareview\": \"The paper introduces LASeR (Large Language Model-Aided Evolutionary Search for Robot Design Automation), which advances robot design optimization by using a novel reflection mechanism (DiRect) to balance exploration and exploitation, improve solution diversity, and enable inter-task reasoning for generalizable and zero-shot robot proposals. The method demonstrates superior performance in voxel-based soft robot experiments.\\n\\nThe reviewers acknowledged the paper's significant contributions, highlighting (1) the relevance and importance of the addressed problem, (2) the novelty and interest of the proposed idea, (3) extensive experimental validation, and (4) the clarity and structure of the presentation.\\n\\nDuring the Author-Reviewer Discussion phase, the authors provided thorough and well-reasoned responses, successfully addressing most concerns and convincing some reviewers to raise their scores. The AC encourages the authors to carefully consider both pre- and post-rebuttal comments to address any remaining concerns in a future revision.\", \"additional_comments_on_reviewer_discussion\": \"During the Reviewer Discussion phase, Reviewer D75R remained negative but did not engage to provide compelling arguments against the paper. After thoroughly reviewing the reviews, rebuttal, and discussion, the AC concludes that the authors have adequately addressed Reviewer D75R\\u2019s concerns and provided reasonable justifications. 
Therefore, the AC recommends accepting the paper.\"}", "{\"summary\": \"In this paper, the authors introduce LASER (Large Language Model-Aided Evolutionary Search for Robot Design Automation), a novel approach that leverages Large Language Models (LLMs) to enhance the efficiency and diversity of robot design automation. In LASER, an LLM is integrated into the bi-level optimization framework, and the LLM is prompted to be the mutation operator for generating new robot morphologies.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The authors proposed a reflection mechanism for automated robot design, DiRect, to encourage more knowledgeable exploratory behaviors from LLMs based on past search trajectories. Besides, they effectively leverage the reasoning and decision-making capabilities of LLMs to improve inter-task transferability.\", \"weaknesses\": \"1. The use of LLM as an evolutionary operator (powered by prompt engineering) is interesting; similar ideas such as \\\"Evolution through Large Models (ELM)\\\" and [1-2] have been proposed. The paper shows a possible pipeline of integrating LLM into the co-design process of VSR, but does not provide a deeper analysis of \\\"why the LLM works well\\\". The black-box nature of LLMs can make it challenging to understand the reasoning behind the generated designs. Adding more explanations of the LLM's decision-making process would be beneficial.\\n\\n2. The paper mentions experimenting with different temperatures but does not provide a detailed sensitivity analysis of different prompts. In my opinion, the explanation of the intuition of your designed prompts is more important than the proposed pipeline. \\n\\n3. This paper is missing a comparison with some important baseline algorithms.\\n\\n4. 
The test tasks chosen for this paper are too simple to demonstrate the superiority and validity of large language models.\\n\\nReferences\\n\\n[1] Lange, Robert, Yingtao Tian, and Yujin Tang. \\\"Large language models as evolution strategies.\\\" Proceedings of the Genetic and Evolutionary Computation Conference Companion. 2024.\\n\\n[2] Hemberg, Erik, Stephen Moskal, and Una-May O\\u2019Reilly. \\\"Evolving code with a large language model.\\\" Genetic Programming and Evolvable Machines 25.2 (2024): 21.\", \"questions\": \"1. The testing tasks such as Walker-v0 and Carrier-v0 used in the paper are too simple; can you test your method on more complex tasks (\\\"Climber-v0\\\", \\\"Catcher-v0\\\", \\\"Thrower-v0\\\" and \\\"Lifter-v0\\\"), which I think are more suitable to answer the question \\\"Does LLM really have an advantage?\\\"\\n\\n2. Can large language models really align a robot's design with its task performance? Is it enough to just use prompt engineering for more complex tasks? Can it be used as a surrogate model to predict the performance of a robot design? Can the authors give some explanations?\\n\\n3. Can the current framework scale to more complex and larger robot designs (10x10 design space for walking)? If not, what are the potential bottlenecks? In a larger design space (10 x 10), does LLM still work well? For some tasks, random combinations of voxels generated by LLM or evolutionary operators don't always work well.\\n\\n4. To further improve this paper, it would be better to show the robots designed by the LLM and add an analysis of the differences between LLM-generated robot designs and GA-generated robot designs.\\n\\n5. While the paper demonstrates inter-task knowledge transfer, how well does LASER generalize to tasks that are significantly different from the ones used in the experiments? What are the limitations of this generalization?\\n\\n6. 
The authors need to compare their approach with those that also use LLM as a mutation operator, such as openELM (Evolution through Large Models (ELM)), and more recent brain-body co-design methods (which do not use LLM) that also use the EvoGym platform, to show the effectiveness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"No.\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I appreciate the authors\\u2019 effort and believe that with the promised repeated trials this paper should be published. I am raising my score to reflect this.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Response to Reviewer drYB (4/5)\", \"comment\": \">**Weakness 5 (Marginal gains in performance)**\\n\\n>**Weakness 6 (Reproducibility)**\\n\\n>**Question 3**: Can you run these experiments again several times over to provide more meaningful performance measures, confidence intervals and statistical hypothesis testing? \\n\\n**Response**: \\nWe appreciate your valuable suggestion, and performed two more sets of repeated experiments. However, due to limited computation resources, we were only able to run these experiments on LASeR and LLM-Tuner (the most competitive baseline). We plan to continue with the remaining baselines and include the complete results once they are available.\\n\\nAs shown in Appendix H of revised paper, with a total of 5 repeated experiments, the advantage of LASeR over LLM-Tuner **remains evident**, in terms of both optimization efficiency and diversity. Moreover, we would like to clarify that we have chosen a **sufficiently large number of robot evaluations** to hopefully allow all baseline algorithms to converge, enabling fair comparison. This explains why some baselines have ended up with rather similar evolutionary outcomes to LASeR. 
However, in the context of robot design automation, besides the final fitness level, **the speed at which high-performing designs are approached is an equally important aspect for evaluating design algorithms**. This is due to the heavy computational burden involved in training control policies and the manufacturing costs of physical robots when deployed in real-world applications. In this regard, LASeR achieves **considerable performance gains**. This is particularly notable in Carrier-v0 and Pusher-v0, where LASeR requires **2x** and **3x** fewer evaluations, respectively, to reach the same level of fitness as the best-performing baseline. The generally non-overlapping confidence intervals of fitness curves in Appendix H of revised paper further support the **statistical significance of this superiority**. \\n\\nHowever, we notice that both LASeR and LLM-Tuner exhibit relatively high variability in their diversity outcomes, suggesting that even more repetitions would be needed to establish statistical significance. Therefore, we will continue with additional repeated experiments.\"}", "{\"title\": \"Response to Reviewer 5ABx (2/2)\", \"comment\": \"**Response to Weakness 2**:\\n \\nTo further demonstrate the effectiveness of our approach, we performed two additional sets of repeated experiments. However, we apologize for only being able to conduct these experiments on LASeR and LLM-Tuner (the most competitive baseline), due to limited computational resources. We plan to continue with the remaining baselines and include the complete results once they are available.\\n\\nBoth the averaged results and standard deviations of the aforementioned repeated experiments are presented in Appendix H. With a total of five repeated runs, **the superiority of LASeR remains obvious**. The generally non-overlapping confidence intervals of fitness curves clearly indicate the **robust superiority** of our method in terms of optimization efficiency. 
In response to your concern regarding the closeness of final means, we would like to clarify that, for the sake of fair comparison, we **deliberately chose a sufficiently large number of robot evaluations** to hopefully allow all baseline algorithms to converge. Despite this, our approach still achieves a fitness level that is **unattainable** by baseline methods in Walker-v0, and requires **2x** and **3x** fewer evaluations to reach optimal designs in Carrier-v0 and Pusher-v0, compared with the most competitive baseline. \\n\\nHowever, we did notice that both LASeR and LLM-Tuner exhibit relatively high variability in their diversity outcomes, and hence an even larger sample size would be needed to establish statistical significance. To this end, we will conduct additional repeated experiments. Thank you very much for your constructive feedback. \\n_____\\n**Response to Weakness 3**:\\n\\nWe appreciate your valuable suggestion. In response, we have expanded the discussion of our limitations and outlined several open problems for future research in Appendix P. \\n_____\\n**Response to Weakness 4**:\\n \\nThank you for highlighting this relevant article, which presents an intriguing prospect of robotic design processes through interactive human-AI collaboration. We are pleased to be among the first to explore the potential of LLMs in automatically generating robot shapes, which we hope will contribute to the democratization of domain knowledge and enable non-specialists to develop effective robotic systems. We have cited this article in the revised version of our paper. \\n_____\\nWe hope that we have fully addressed all your concerns. Once again, we sincerely thank you for your insightful feedback, which is greatly conducive to our revision. \\n\\n**References**:\\n\\n[1] Dong, Heng, Junyu Zhang, and Chongjie Zhang. \\\"Leveraging Hyperbolic Embeddings for Coarse-to-Fine Robot Design.\\\" The Twelfth International Conference on Learning Representations. 2024. 
\\n\\n[2] Saito, Takumi, and Mizuki Oka. \\\"Effective Design and Interpretation in Voxel-Based Soft Robotics: A Part Assembly Approach with Bayesian Optimization.\\\" Artificial Life Conference Proceedings 36. Vol. 2024. No. 1. MIT Press, 2024.\\n\\n[3] Song, Junru, et al. \\\"MorphVAE: Advancing Morphological Design of Voxel-Based Soft Robots with Variational Autoencoders.\\\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 9. 2024.\\n\\n[4] Wang, Yuxing, et al. \\\"PreCo: Enhancing Generalization in Co-Design of Modular Soft Robots via Brain-Body Pre-Training.\\\" Conference on Robot Learning. PMLR, 2023.\"}", "{\"title\": \"Response to Reviewer D75R (2/4)\", \"comment\": \">**Weakness 2**: The paper does not provide a detailed sensitivity analysis of different prompts. In my opinion, the explanation of the intuition of your designed prompts is more important than the proposed pipeline.\\n\\n**Response**: \\nThank you for raising this point. Our prompt design consists of three major components: task-related metadata, elite design-fitness pairs, and target fitness. The **task-related metadata** primarily includes descriptions of task objectives and the simulation environment. This component is largely derived from the official documentation of EvoGym (Bhatia et al., 2021), with **minimal modifications**. This metadata, which is often overlooked in previous works on LLM-aided robot design, serves two main purposes: to ground the evolutionary process in the specific context of the problem, and to facilitate the transfer of knowledge between different tasks. The second component consists of **elite design-fitness pairs** previously evaluated, where the designs are sorted according to their fitness in ascending order. 
This sorting is intended to leverage the **pattern-completion capabilities** of LLMs, a technique shown to be effective in prior research (Lange et al., 2024; Yang et al., 2024). The third component, the **target fitness** (referred to as the \\u201cjust-ask query\\u201d in Lim et al. (2024)), is introduced as a means of aligning the LLM\\u2019s outputs with our desired results. We would like to note that the phrasing of these components is **intentionally left simple and intuitive**, and no special prompt engineering techniques were applied. As such, our experimental results possess a certain level of **robustness** and do not hinge on the specifics of prompt designs. However, it would be a promising direction to integrate various prompting techniques, such as chain-of-thought (Wei et al., 2022) and tree-of-thought (Yao et al., 2024), into our framework for better performance. \\n\\nTo further highlight the significance of the individual components in our prompt and to complement the intuitive explanations provided, we conducted additional ablation studies. We found that removing each of the individual components led to performance drops. The just-ask query proves to be the most essential, while the simulation description and ordering play less important roles. The detailed results are reported in Appendix M of revised paper. \\n_____\\n>**Weakness 3**: The paper is missing a comparison with some important baseline algorithms. \\n\\n>**Question 6**: The authors need to compare their approach with those that also use LLM as a mutation operator, and more recent brain-body co-design methods that do not use LLM but also use the EvoGym platform.\\n\\n**Response**: \\nWe appreciate your feedback and have incorporated two additional baselines into our comparative analysis. The first is OPRO (Yang et al., 2024), a recent LLM-aided evolutionary strategy, which we adapted for VSR design. 
The second is MorphVAE (Song et al., 2024), a state-of-the-art co-design method that does not utilize LLM but also uses the EvoGym platform. As shown in Appendix G of revised paper, our approach **still outperforms these baselines**, further demonstrating its effectiveness.\"}", "{\"title\": \"We look forward to your feedback!\", \"comment\": [\"Dear Reviewer pDhF,\", \"We very much appreciate your thoughtful suggestions and have made modifications accordingly in our re-uploaded paper. The updates that we have made in response to your feedback are listed as follows, which we believe would better illuminate our contributions.\", \"Included an analysis of computational efficiency in Appendix Q;\", \"Included a further discussion on the principles of choosing the similarity threshold, together with additional experimental results with varying thresholds, in Appendix N;\", \"Added experimental results on Walker-v0 with a 10x10 body size in Appendix I, in order to demonstrate the scalability of our approach to larger design spaces;\", \"Explained the prospect of fine-tuning LLMs for general-purpose combinatorial optimization as a promising future direction in Appendix P;\", \"Included quantitative results to prove the role of LLMs as intelligent mutation operators in Appendix F.\", \"We hope that we have fully addressed all your concerns regarding the value of our work. With the deadline of the discussion period approaching, we eagerly look forward to any additional questions or comments you may have. We genuinely hope that our responses and revisions could help you re-evaluate our paper. Once again, we greatly appreciate your time and effort dedicated to reviewing our paper. Happy Thanksgiving!\"]}", "{\"comment\": \"There appears to be little to no appreciable difference between LASeR and the provided baseline in the larger design space; if there is one, the effect size is once again very small. 
That said, it is good to know that the method scales to larger design spaces. It will be interesting to see how the additional trials and statistics turn out. Thank you for adding this to the paper.\"}", "{\"title\": \"Response to Reviewer drYB (5/5)\", \"comment\": \">**Weakness 7 (Missing related work)**\\n\\n**Response**: \\nThank you for pointing out this issue. While we have discussed this particular paper (Lehman et al., 2023) in our related work section, we chose not to compare against it because it was originally proposed for **designing Sodaracers via code optimization**, which necessitates a **program interface** that maps between robot designs and Python code. This mapping becomes difficult to define in the context of voxel-based soft robots. To further demonstrate the effectiveness of our approach, we conducted additional comparisons with another state-of-the-art LLM-based evolutionary algorithm, OPRO (Yang et al., 2024), which operates in the original solution space as we do. The results, presented in Appendix G, still show **a clear advantage of LASeR**. \\n\\nRegarding the choice of control policy and its training, we have adhered to the **standard practices** used in previous EvoGym-based studies, specifically the proximal policy optimization (PPO) algorithm with MLPs as control networks. While gradient-based optimization would be an interesting direction to explore, it is unfortunately inapplicable to EvoGym as it **does not support differentiable simulation**. We have clarified our rationale for choosing this control algorithm in our revised paper, and will explore a broader range of control approaches in future work, examining their potential impact on our proposed evolutionary framework. \\n_____\\nThank you again for your time and effort spent reviewing this paper! Your suggestions have been really conducive to our revision. Please let us know if your questions and concerns have been fully addressed. 
\\n\\n**References**: \\n\\n[1] Bhatia, Jagdeep, et al. \\\"Evolution gym: A large-scale benchmark for evolving soft robots.\\\" Advances in Neural Information Processing Systems 34 (2021): 2201-2214.\\n\\n[2] Cheney, Nick, et al. \\\"Unshackling evolution: evolving soft robots with multiple materials and a powerful generative encoding.\\\" ACM SIGEVOlution 7.1 (2014): 11-23.\\n\\n[3] Cochevelou, Fran\\u00e7ois, David Bonner, and Martin-Pierre Schmidt. \\\"Differentiable soft-robot generation.\\\" Proceedings of the Genetic and Evolutionary Computation Conference. 2023.\\n\\n[4] Dong, Heng, Junyu Zhang, and Chongjie Zhang. \\\"Leveraging Hyperbolic Embeddings for Coarse-to-Fine Robot Design.\\\" The Twelfth International Conference on Learning Representations. 2024. \\n\\n[5] Lehman, Joel, et al. \\\"Evolution through large models.\\\" In Handbook of Evolutionary Machine Learning, pp. 331\\u2013366. Springer, 2023.\\n\\n[6] Medvet, Eric, et al. \\\"Biodiversity in evolved voxel-based soft robots.\\\" Proceedings of the Genetic and Evolutionary Computation Conference. 2021.\\n\\n[7] Strgar, Luke, et al. \\\"Evolution and learning in differentiable robots.\\\" arXiv preprint arXiv:2405.14712 (2024).\\n\\n[8] Saito, Takumi, and Mizuki Oka. \\\"Effective Design and Interpretation in Voxel-Based Soft Robotics: A Part Assembly Approach with Bayesian Optimization.\\\" Artificial Life Conference Proceedings 36. Vol. 2024. No. 1. One Rogers Street, Cambridge, MA 02142-1209, USA journals-info@ mit. edu: MIT Press, 2024.\\n\\n[9] Song, Junru, et al. \\\"MorphVAE: Advancing Morphological Design of Voxel-Based Soft Robots with Variational Autoencoders.\\\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 9. 2024.\\n\\n[10] Wang, Yuxing, et al. \\\"PreCo: Enhancing Generalization in Co-Design of Modular Soft Robots via Brain-Body Pre-Training.\\\" Conference on Robot Learning. PMLR, 2023.\\n\\n[11] Yang, Chengrun, et al. 
\u201cLarge Language Models as Optimizers.\u201d The Twelfth International Conference on Learning Representations. 2024.\"}", "{\"title\": \"Thanks for feedback\", \"comment\": \"Happy Thanksgiving, and thank you for the feedback. I really appreciate the efforts made by the authors and have decided to raise my score to 5. However, my concerns about this work have not been fully addressed.\\n\\n1. In my opinion, the most suitable baseline for this paper is ELM, which also uses LLM-based mutation operators (not with the reflection mechanism), but the authors didn't use it.\\n\\n2. The main purpose of introducing diversity must be clarified. Is the diversity introduced to improve the final co-design quality or something else? Sometimes, to achieve a high co-design performance, we do not need too much diversity.\\n\\n3. The focus of this paper should be on using LLMs to solve robot design problems, so in the first part of the paper, I think a suitable logic would be to first describe the problems with robot design, what are the problems with current solutions, and how large models happen to be well suited to solving these problems due to their own characteristics, such as exploring in a wide range of design spaces, but also some limitations. In order to address these limitations, LASeR is proposed in this paper.\"}", "{\"title\": \"Response to Reviewer drYB (1/5)\", \"comment\": \"Dear reviewer,\\n\\nThank you for your constructive feedback. The following is our detailed response to the questions you raised. Please let us know if you have any trouble accessing our revised paper. \\n\\n>**Weakness 1 (Small design space)**\\n\\n>**Question 6**: Can you run experiments with a larger design space? Perhaps 9x9? \\n\\n**Response**:\\nThank you for the question. Although our approach is conceptually adaptable to larger design spaces, such as those consisting of hundreds or thousands of functional units, several challenges would arise when scaling to such environments. 
For one, the increased degrees of freedom would result in **a surge in control complexity**. To address this, existing works typically employ either periodic **open-loop actuation** patterns for locomotion on relatively simple terrains (Cheney et al., 2014), or **differentiable simulations** to facilitate more sample-efficient control learning (Strgar et al., 2024; Cochevelou et al., 2023). The former is **limited in its applicability to more complex tasks**, such as those involving varying terrains or object manipulation, while the latter introduces **considerable computational and memory overhead** (Cochevelou et al., 2023). \\n\\nThese challenges are likely why most existing voxel-based soft robot (VSR) studies focus on smaller design spaces, such as 5x5 configurations (Song et al., 2024; Saito and Oka, 2024; Dong et al., 2024; Wang et al., 2023; Bhatia et al., 2021). On one hand, a 5x5 body size with five material types yields a design space of $2.98\\times10^{17}$ possible solutions, which is **sufficiently expressive for generating complex and diverse morphological structures**. On the other hand, the infinite degrees of freedom inherent in soft materials make it **nearly impossible to substantially scale up robot sizes without sacrificing the ability to evaluate VSRs in more demanding task settings**. Nonetheless, we believe this dilemma can be mitigated with advances in more efficient simulation engines and control algorithms. \\n\\nFor now, we have tried a 10x10 Walker environment (Appendix I in the revised paper), where **LASeR still holds an advantage**. We are excited to evaluate our approach in even larger design spaces in our future work. We note that the current results are averaged across two independent runs. We planned to conduct three runs, but one of them remains unfinished. We will post the complete results once they are available. 
For your reference, we notice that our experiments on 10x10 Walker-v0 take on average **1.5 times longer** than the 5x5 case.\"}", "{\"comment\": \">...the advantage of LASeR over LLM-Tuner remains evident, in terms of both optimization efficiency and diversity.\\n\\n>The generally non-overlapping confidence intervals of fitness curves in Appendix H of revised paper further support the statistical significance of this superiority.\\n\\nThe advantage is not evident and the results do not support statistical significance. \\n\\nIt would be helpful if the authors can plot a 95%+ confidence interval instead of a standard deviation, conduct a statistical test, and report the p-val.\\n\\n>However, we notice that both LASeR and LLM-Tuner exhibit relatively high variability in their diversity outcomes, suggesting that even more repetitions would be needed to establish statistical significance. Therefore, we will continue with additional repeated experiments.\\n\\nThis is the opposite of what you just said!\"}", "{\"title\": \"Further Response to Reviewer D75R (3/4)\", \"comment\": \"**Revised introduction: (the first three paragraphs)**\\n\\n*Robot design automation represents an established and persistent challenge in modern robotics, aiming to autonomously evolve robot morphology with minimal human intervention (Hu et al., 2022; 2023; Song et al., 2024a). Existing approaches predominantly rely on traditional evolutionary algorithms and are primarily focused on rigid robots (Chocron & Bidaud, 1997; Leger, 2012; Wang et al., 2019; Sims, 2023; Hu et al., 2023). Recently, modular soft robots have attracted considerable attention due to their remarkable versatility, expressiveness, and biomimetic properties (Hiller & Lipson, 2011; Bhatia et al., 2021; Medvet et al., 2021). However, these advantages also introduce significant challenges to robot design automation. 
Specifically, modular soft robots often involve combinatorially vast design spaces and intricate interaction dynamics, rendering existing approaches prone to local optima. This calls for more efficient search algorithms that can navigate the vast design space while ensuring progressive improvement in functionality (Cheney et al., 2014; Song et al., 2024). Furthermore, most current approaches rely heavily on problem-specific mathematical formulations and manually designed search heuristics, leaving under-explored the potential for a more accessible design process driven by natural language instructions.* \\n\\n*In recent years, large language models (LLMs) have demonstrated impressive reasoning, decision-making, and generalization capabilities (Achiam et al., 2023; Touvron et al., 2023; Team et al., 2023; Team, 2023), sparking a flurry of research interest in their application to optimization problems. Earlier efforts embarked on employing LLMs to aid traditional search heuristics, such as selecting parent solutions for mutation and crossover (Liu et al., 2024a; Ye et al., 2024), or serving as surrogate models and candidate samplers in Bayesian Optimization (Liu et al., 2024b). More recent studies have explored the use of LLMs as \u201cintelligent search operators\u201d. By receiving previously found solutions through prompts, LLMs effectively leverage their in-context learning and pattern-completion capabilities to iteratively propose improved candidate solutions (Brahmachary et al., 2024; Huang et al., 2024b; Yang et al., 2024; Morris et al., 2024; Romera-Paredes et al., 2024; Lange et al., 2024). These LLM-aided evolutionary frameworks have also shown great promise in reducing reliance on handcrafted search heuristics, facilitating convenient problem specification in natural language and rendering evolutionary processes more interpretable. 
To date, they have showcased proficiency in both classic optimization problems (Liu et al., 2024a; Brahmachary et al., 2024; Huang et al., 2024a) and real-world applications (Morris et al., 2024; Romera-Paredes et al., 2024; Lange et al., 2024; Tran & Hy, 2024).*\\n\\n*With the promise of higher efficiency and interpretability for evolutionary computation, LLMs have also made their way into the realm of robot design automation. To the best of our knowledge, the only relevant studies are Zhang (2024), Qiu et al. (2024) and Lehman et al. (2023). While Zhang (2024) explores the use of LLMs to tune hyperparameters of traditional evolutionary algorithms, the latter two pioneer the use of LLMs as search operators for robot design. Nonetheless, these studies still bear some major limitations. Firstly, as highlighted by several related works, LLMs often struggle to balance exploration and exploitation, leading to inferior solution diversity (Huang et al., 2024b; Tran & Hy, 2024). This issue remains largely unaddressed in the aforementioned studies. **Enhancing diversity in evolved solutions is especially relevant in robot design automation as it is critical for adapting robot ecosystems to hazardous and dynamic environments (David et al., 2017; Medvet et al., 2021). Achieving both diversity and quality has long been a dilemma in evolutionary robotics (Karine et al., 2020; Medvet et al., 2021), and it remains to be investigated whether the reasoning capabilities of LLMs could be exploited to promote more intelligent exploratory behaviors in the design space.** Secondly, current LLM-aided evolutionary approaches generally lack strong connections to the specific context of real-world problems, resulting in suboptimal performance and solutions that cannot generalize well. 
As it is common to have access to task-related metadata and a repository of pre-designed robots when designing for new applications, it is highly pertinent to ground evolutionary search in such contextual information, so as to enable inter-task experience transfer and foster more generalizable design processes.*\"}", "{\"summary\": \"The paper presents LASeR, a framework leveraging Large Language Models (LLMs) to optimize robot design with evolutionary algorithms. The proposed approach addresses the limitations of existing LLM-driven optimization techniques, such as limited solution diversity and generalizability across tasks. The LLM is employed as the intelligent search operator and diversity reflector, instead of a tool for hyperparameter tuning. By introducing a Diversity Reflection Mechanism (DiRect), LASeR refines the exploration-exploitation tradeoff, enhancing diversity and performance on the robot design automation tasks, compared to baselines. Through task-grounded prompts, LASeR also enables effective knowledge transfer across different robot design tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is clear and well-structured.\", \"The idea of applying LLMs in generating offspring for robot design evolutionary algorithms is interesting.\", \"Extensive experimental results on EvoGym are provided to validate that the proposed method outperforms baselines in both design efficiency/performance and diversity.\"], \"weaknesses\": [\"The experiments of the paper did not mention the time taken by the LLM-based methods for the evolutionary algorithm, which should be considered in the evaluation of robot design efficiency.\", \"The similarity threshold seems to be an important hyperparameter for the proposed method, while it is not discussed in the paper.\", \"The experiments in the paper are restricted to relatively simple voxel-based soft robots within predefined settings.\", \"The core method does not 
involve learning or fine-tuning for LLMs besides PPO utilized for the fitness evaluation.\"], \"questions\": [\"What considerations were taken when selecting similarity thresholds? How do you balance the diversity and performance of the generated designs?\", \"In Section 3.2.1, you mentioned that the fitness performance of robot designs would not change significantly after being modified by DiRect. Do you have quantitative results to support this conclusion?\", \"How does the computation time of LASeR compare to the baselines, including LLM-Tuner and the ones without using LLMs?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer D75R (3/4)\", \"comment\": \">**Weakness 4**: The test tasks chosen for this paper are too simple to demonstrate the superiority and validity of LLMs.\\n\\n>**Question 1**: The testing tasks such as Walker-v0 and Carrier-v0 used in the paper are too simple. Can you test your method on more complex tasks, which I think are more suitable to answer the question \u201cDoes LLM really have an advantage?\u201d\\n\\n**Response**: \\nThank you for your valuable feedback. In response, we evaluated LASeR, along with LLM-Tuner (the most competitive baseline), on an additional task, namely **Catcher-v0**, which is among the most challenging ones in the EvoGym task suite. The result, which can be found in Appendix E of the paper, shows that our approach **continues to demonstrate notable advantages**, even in more complex task settings. \\n_____\\n>**Question 3**: Can the current framework scale to more complex and larger robot designs (10x10 design space for walking)? If not, what are the potential bottlenecks? In larger design spaces (10x10), does LLM still work well? For some tasks, random combinations of voxels generated by LLM or evolutionary operators do not always work well. 
\\n\\n**Response**: \\nWe appreciate your insightful comment. To evaluate the scalability of our approach to larger design spaces, we tested both LASeR and LLM-Tuner (the most competitive baseline) on 10x10 Walker-v0, with results provided in Appendix I of the paper. Our findings demonstrate that LASeR **continues to outperform the baseline** in terms of optimization efficiency, even in this larger design space. We attribute this success to the unique capabilities of LLMs, which **do not rely on random mutations** (as seen in genetic algorithms and other heuristics). Instead, LLMs leverage their reasoning capabilities to identify favorable voxel assembly patterns (such as effective use of actuators) within high-performing designs. Based on these insights, they carry out more ***informed* mutations and recombinations** to generate offspring solutions (please refer to Appendix J for the LLM\u2019s explanation of its decision process). It is worth noting that **the 10x10 configuration results in a design space that is** $2.65\\times 10^{52}$ **times larger than the 5x5 case**, due to combinatorial explosion. Therefore, the promising results indicate a remarkable potential of our approach to scale to even larger and more complex robot design problems. \\n\\nWe note that the current results (as reported in Appendix I) are averaged across two independent runs. We planned to conduct three runs, but one of them remains unfinished. We will post the complete results once they are available. For your reference, we notice that our experiments on 10x10 Walker-v0 take on average **1.5 times longer** than the 5x5 case.\"}", "{\"summary\": \"The paper investigates the use of large language models in designing and evolving robots. To this end, the paper uses an LLM to reflect and propose novel 'soft' robot designs in simulation in an evolutionary design loop. 
The paper compares its proposed LLM-evolution loop against different baselines and presents in-depth ablations of different effects LLM parameters have on the design loop.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"To the best of my knowledge the proposed framework is novel, the usage of LLMs in the problem of robot designs and their evolutions is under-researched\", \"The conclusions the paper makes, and its application are relevant to the robot learning community\", \"The paper compares its proposed approach versus several baselines\", \"The performed ablation studies are very interesting and insightful. I appreciate them.\"], \"weaknesses\": [\"Weaknesses:\", \"The environments in which the method is tested are relatively simple. However, I appreciate the hardness of the overall problem; designing and evolving robot hardware is not easy.\", \"A critical remark is that while the mean shows (in plots and tables) that the proposed method works, I think it is likely not statistically significant due to the standard deviation and the closeness of the final means.\", \"I think the paper could overall more critically discuss its limitations and open problems.\", \"The paper should probably cite and discuss this preliminary work discussing the potential of LLMs for the robot design process: Stella, Francesco, Cosimo Della Santina, and Josie Hughes. \\\"How can LLMs transform the robotic design process?.\\\"\\u00a0Nature machine intelligence\\u00a05, no. 6 (2023): 561-564.\"], \"questions\": \"I have no questions, overall I think the paper is in a good state and interesting to the ML/robotics community.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Further Response to Reviewer drYB by Authors (5/5)\", \"comment\": \"Thank you for the question. 
We have no intention to gloss over previous works on robot design, as they unarguably inspired generations of researchers (including us) and led to today\u2019s state of the art. In our related work section, we have discussed the history of robot design automation, from Sims\u2019 pioneering work that dates back to the 1990s all the way to more recent works that resort to combinations of evolutionary algorithms and deep probabilistic generative models, and to today\u2019s work that leverages LLMs. Now, to pay due respect to all these works and their contributions, we have moved Related Work to Section 2, right after Introduction.\\n\\n**Robot design automation** is an established research problem in Robotics that aims to automatically evolve robot morphology according to specific task objectives with minimal human intervention. We believe the primary reason why this is considered an important research field is the recognized significance of morphology to intelligent behavior (known as ***morphological intelligence***) (Ghazi-Zahedi, 2019; Gupta et al., 2021). We share the common view with you that robot design automation should eventually be applied to physical robots to truly benefit humankind, which is also our long-term vision. However, due to the prohibitive labor and manufacturing costs involved in deploying physical robots, it has become nearly a consensus that **design algorithms should be developed and prototyped in simulation**. To this end, numerous unselfish researchers developed simulation environments that are efficient and easy to use, and meanwhile emulate real-world physics with high fidelity. The development of these environments usually involves considerable effort and cross-disciplinary knowledge (such as materials science and mechanics), but once developed, they become reliable platforms that subsequent works (involving both robot design and control) can build upon. 
In this work, we employed **Evolution Gym** (Bhatia et al., 2021), a recent, dedicated simulation environment developed by MIT specifically for voxel-based soft robots (VSRs). Specifically, EvoGym utilizes a classic **mass-spring system with cross-braced traction** to simulate elastic building blocks. Many other considerations, such as bounding box trees for collision detection and computation of penalty-based contact forces and frictional forces, are taken to ensure the simulation fidelity. We have chosen EvoGym also due to its **popularity in past literature**, which proves its reliability. There are numerous simulation environments other than EvoGym that are also available, including **2D-VSR-Sim** (Medvet et al., 2020) and **diffTaichi** (Hu et al., 2019), serving as the basis of hundreds of works each year and greatly contributing to the advances of robot design automation. \\n\\nIn the meantime, there is also ongoing research on the realization of soft robotics in the physical world, using polymers with pneumatic chambers (Kriegman et al., 2020 (b); Legrand et al., 2023) or even self-replicating cells (Kriegman et al., 2020 (a) and 2021) and **continually narrowing the sim-to-real gap**. These studies also revealed promising avenues through which our approach could be applied in the real world. We have cited these papers with a brief discussion in Section 2. We believe that with the collective efforts of material scientists, computer scientists, (bio)mechanical engineers, etc., soft robotics will see rapid advances and find its way to everyday life in the near future. One of the major implications of our particular work to robot design is that it **reveals a remarkable potential of Large Language Models to design morphology**, with the aid of a meticulously designed reflection mechanism. 
This, together with rapidly progressing foundation models (FMs) and FM-aided control strategies, points to a promising prospect where intelligent agents are capable of both designing and controlling their own embodiments.\"}", "{\"comment\": \"The robot design problem reads as an afterthought and this field of work and its rich history are glossed over, pushed to the very end of the paper and the appendix.\\n\\nWhy is robot design important?\\n\\nWhy should we care about this problem?\\n\\nYou call your agents \\\"robots\\\" but failed to explain how they can transfer from 2D simulation to reality? Has this been done before? How? \\n\\nAre there any implications of this work for the future of real robots?\"}", "{\"title\": \"Response to Reviewer D75R (1/4)\", \"comment\": \"Dear reviewer,\\n\\nThank you for your constructive feedback. The following is our detailed response to the questions you raised. Please let us know if you have any trouble accessing our revised paper. \\n\\n>**Weakness 1**: The paper does not provide a deeper analysis about \\u201cwhy LLM works well\\u201d. The black-box nature of LLMs can make it challenging to understand the reasoning behind the generated designs. Adding more explanations in the LLM\\u2019s decision-making process would be beneficial. \\n\\n>**Question 2**: Can LLMs really align a robot\\u2019s design with its task performance? Is it enough to just use prompt engineering for more complex tasks? Can it be used as a surrogate model to predict the performance of a robot design? \\n\\n**Response**: \\nThank you for your thoughtful feedback. In response to Weakness 1, we indeed included LLM reasoning in our preliminary experiments, where the LLM was prompted to explicitly explain its design choices (similar to chain-of-thought). However, we observed no significant performance gains from this practice. Consequently, we opted to remove the reasoning process to streamline evolution and reduce computational costs. 
\\n\\nHowever, we would like to clarify that **our approach is capable of affording higher interpretability**. To address your concerns, we have included additional results in Appendix J of the revised paper, where the LLM is explicitly instructed to explain its decision-making process, rather than functioning as a black box. The explanations provided are insightful and reasonable, **revealing the advantageous structures present in high-performing designs**. This directly answers the question of whether LLMs can align robot designs with task performance, and sheds light on why LLMs work well. Specifically, LLMs are able to **identify favorable voxel assembly patterns** within designs and **leverage these insights to generate improved offspring solutions**. In response to Question 2, we have also tested our approach in more complex tasks and found that it continues to demonstrate notable advantages (as discussed in our answer to Question 1 and Appendix E). \\n\\nRegarding the use of LLMs as surrogate models, prior work has indeed explored this approach (Liu et al., 2024), where LLMs are employed to predict objective function values for computing the acquisition function in Bayesian Optimization. However, this study focuses on a hyper-parameter fine-tuning task with fewer than ten decision variables. Another work in this line focuses on even simpler tasks, specifically two-dimensional mathematical functions (Hao et al., 2024). We speculate that LLMs may not perform as well in more complex optimization problems, particularly those involving high-dimensional, non-linear functional mappings, such as the voxel-based soft robot design in our study. \\n\\nGiven these challenges, we argue that LLMs may not be suitable as direct function approximators for such complex problems. Instead, it would be more advisable to use LLMs as search operators, as this approach does not rely on precise function approximation, thereby avoiding the risk of mistakenly ruling out promising solutions. 
Meanwhile, it still leverages the reasoning capabilities of LLMs and the insights gleaned from successful solutions to expedite the evolutionary search process.\"}", "{\"title\": \"References\", \"comment\": \"**References:**\\n\\n[1] Bhatia, Jagdeep, et al. \\\"Evolution gym: A large-scale benchmark for evolving soft robots.\\\" Advances in Neural Information Processing Systems 34 (2021): 2201-2214.\\n\\n[2] Ghazi-Zahedi, Keyan. \\\"Morphological Intelligence.\\\" Cham: Springer (2019).\\n\\n[3] Gupta, Agrim, et al. \\\"Embodied intelligence via learning and evolution.\\\" Nature communications 12.1 (2021): 5721.\\n\\n[4] Hu, Yuanming, et al. \\\"Difftaichi: Differentiable programming for physical simulation.\\\" arXiv preprint arXiv:1910.00935 (2019).\\n\\n[5] Kriegman, Sam, et al. \\\"A scalable pipeline for designing reconfigurable organisms.\\\" Proceedings of the National Academy of Sciences 117.4 (2020,a): 1853-1859.\\n\\n[6] Kriegman, Sam, et al. \\\"Scalable sim-to-real transfer of soft robot designs.\\\" 2020 3rd IEEE international conference on soft robotics (RoboSoft). IEEE, 2020 (b).\\n\\n[7] Kriegman, Sam, et al. \\\"Kinematic self-replication in reconfigurable organisms.\\\" Proceedings of the National Academy of Sciences 118.49 (2021): e2112672118.\\n\\n[8] Legrand, Julie, et al. \\\"Reconfigurable, multi-material, voxel-based soft robots.\\\" IEEE Robotics and Automation Letters 8.3 (2023): 1255-1262.\\n\\n[9] Medvet, Eric, et al. \\\"2D-VSR-Sim: A simulation tool for the optimization of 2-D voxel-based soft robots.\\\" SoftwareX 12 (2020): 100573.\\n\\n[10] Song, Junru, et al. \\\"MorphVAE: Advancing Morphological Design of Voxel-Based Soft Robots with Variational Autoencoders.\\\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 9. 2024.\"}" ] }
7mdi1i1mSd
NoisyTraj: Robust Trajectory Prediction with Noisy Observations
[ "Rongqing Li", "Changsheng Li", "Ruilin Lv", "Yuhang Li", "Ye Yuan", "Guoren Wang" ]
Trajectory prediction aims to forecast an agent's future trajectories based on its historical observed trajectories, which is a critical task for various applications such as autonomous driving, robotics, and surveillance systems. Most existing trajectory prediction methods assume that the observed trajectories collected for forecasting are clean. However, in real-world scenarios, noise is inevitably introduced into the observations due to errors from sensors, detection, and tracking processes, resulting in the collapse of the existing approaches. Therefore, it is essential to perform robust trajectory prediction based on noisy observations, which is a more practical scenario. In this paper, we propose NoisyTraj, a noise-agnostic approach capable of tackling the problem of trajectory prediction with arbitrary types of noisy observations. Specifically, we put forward a mutual information-based mechanism to denoise the original noisy observations. This mechanism optimizes the produced trajectories to exhibit a pattern that closely resembles the clean trajectory pattern while deviating from the noisy one. Considering that the trajectory structure may be destroyed through the only optimization of mutual information, we introduce an additional reconstruction loss to preserve the structure information of the produced observed trajectories. Moreover, we further propose a ranking loss based on the intuitive idea that prediction performance using denoised trajectories should surpass that using the original noisy observations, thereby further enhancing performance. Because NoisyTraj does not rely on any specific module tailored to particular noise distributions, it can handle arbitrary types of noise in principle. Additionally, our proposed NoisyTraj can be easily integrated into existing trajectory prediction models. 
Extensive experiments conducted on the ETH/UCY and Stanford Drone datasets (SDD) demonstrate that NoisyTraj significantly improves the accuracy of trajectory prediction with noisy observations, compared to the baselines.
[ "Trajectory prediction" ]
https://openreview.net/pdf?id=7mdi1i1mSd
https://openreview.net/forum?id=7mdi1i1mSd
ICLR.cc/2025/Conference
2025
{ "note_id": [ "uPKXuorZsK", "lWP2iicofq", "lPQSRNuA8b", "L2rLRo9RX1", "IxqRu046Ny", "FfryroEC4t" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "comment", "official_review" ], "note_created": [ 1730044771301, 1730683662376, 1730506115759, 1731112801335, 1731557705974, 1729511648310 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4119/Reviewer_gjxC" ], [ "ICLR.cc/2025/Conference/Submission4119/Reviewer_5VCh" ], [ "ICLR.cc/2025/Conference/Submission4119/Reviewer_xtpN" ], [ "ICLR.cc/2025/Conference/Submission4119/Reviewer_WDuf" ], [ "ICLR.cc/2025/Conference/Submission4119/Authors" ], [ "ICLR.cc/2025/Conference/Submission4119/Reviewer_fxit" ] ], "structured_content_str": [ "{\"summary\": \"Conventional trajectory prediction assumes that clean trajectories are provided; however, actual trajectory data contains noise due to errors from sensors, detection, and tracking processes.\\nThese noises can degrade prediction performance, so the authors propose a mutual information-based mechanism to denoise the original noisy observations. \\nThis approach optimizes the generated trajectories to resemble clean trajectory patterns while diverging from noisy ones. 
\\nTo preserve structural information when using the MI-based method, they also introduce a masking-based reconstruction loss.\\nAdditionally, under the assumption that denoised trajectories lead to better prediction performance, a ranking loss is proposed.\\nSince the proposed method does not assume a specific type of noise, it is robust against various forms of noise, and its effectiveness is demonstrated on the ETH-UCY and SDD datasets, under different noise types.\", \"soundness\": \"2\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": [\"The proposed concepts and their underlying motivations\\u2014mutual information-based mechanism, reconstruction loss, and ranking loss\\u2014sound solid.\", \"The mathematical modeling of minimizing the upper bound of the mutual information mechanism is well-structured, and its effectiveness demonstrated in the ablation study shows incremental improvement.\", \"The experimental results are comprehensive. The method is tested across multiple datasets and shows superior performance across all metrics. Demonstrating generalizability across various types of noise is also a strong point.\"], \"weaknesses\": [\"The recent paper (https://arxiv.org/abs/2312.15906) addresses transfer learning, but focuses specifically on making robust predictions in the presence of noisy trajectories within datasets. Since it deals with actual noise arising from errors in sensors, detection, and tracking processes, a comparison with this paper seems necessary.\", \"The assumption that $Y_{fut}$ is clean sounds somewhat unrealistic. Even for trajectories obtained via Lidar sensors, the mentioned errors from sensors, detection, and tracking processes are inevitable, although they might be less than those from camera-driven trajectories. 
It would be more convincing to show experimental results in settings where two such different real-world noise levels exist, for example, using real LiDAR-driven trajectories as the clean trajectories and real camera-driven trajectories as the noisy ones.\", \"Typo: line 178, change X^ to X^$_{obs}$.\"], \"questions\": \"My major concern is:\\nIf obtaining a clean trajectory is possible, then it should also be possible to obtain a clean historical trajectory. In that case, instead of the proposed mutual information-based method, wouldn\\u2019t it be sufficient to use an L2 loss to train a denoiser to restore the clean observation trajectory from the noisy one? This makes me wonder if the content in Section 3.3.1 would still be necessary.\\n\\nI believe that the presentation and contributions of this paper are meaningful. However, for the proposed method in this paper to be practical, the assumption made\\u2014obtaining a clean future trajectory is feasible\\u2014needs to be convincing. I am skeptical about this aspect, which is why I have given a lower score.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes NoisyTraj to deal with noisy observations in trajectory prediction tasks. NoisyTraj is noise-agnostic and leverages a mutual information-based mechanism to filter noise from the observation in the Trajectory Denoise Model (TDM) before feeding the denoised observations into the Trajectory Prediction Backbone (TPB), which predicts the future trajectory. Any trajectory prediction model can trivially be chosen as the TPB.\\nThe optimization maximizes the mutual information (MI) between produced and future trajectories, and minimizes the MI between produced and input (noisy) trajectories. 
In addition, reconstruction losses based on random masking and a trajectory ranking loss (and a trajectory prediction loss) are applied.\\nThe authors validate the method on ETH/UCY and SDD datasets, to which they add various synthetic noises. Results show superior performance over non-denoising and simple denoising (Wavelet, EMA) baselines, even when train and test noise are varied.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Beats baselines: NoisyTraj beats all baselines (no denoising, Wavelet denoising, EMA denoising) across all datasets (ETH/UCY and SDD) and various noise settings.\", \"No noise-free degradation: NoisyTraj performs equally well as the no denoising baseline when no noise is added (exists). In addition, NoisyTraj might generalize to different train / test noises.\", \"Ablations: Ablation studies are conducted and show that all losses contribute to NoisyTraj's performance\", \"Well presented: The work is well presented and easy to follow.\"], \"weaknesses\": [\"Limited set of applications: The work tackles a simplified problem, in which the future ground truth data is assumed to be noise-free, while only the input observations are noisy. In general trajectory prediction tasks, e.g. in autonomous driving, this setup does not hold true. The authors give only one example where this method could be helpful: Use camera-only as inputs and camera + LiDAR as ground truth. However, even in this example camera + LiDAR are not noise-free as assumed in the paper but might have a lower / different noise than the input data. It would be of significant value if such a setup, where the ground truth data is not noise-free would be evaluated to confirm whether NoisyTraj still works well or whether it needs to be adapted. 
Furthermore, this very restrictive assumption should be mentioned at the beginning of the paper (if I'm not mistaken, it's only first mentioned in Section 3).\", \"Evaluation with higher noise values: \\u03c3=0.2 and 0.4 are evaluated but both noise levels are quite similar. How does NoisyTraj perform with larger noises, e.g. \\u03c3=2? From the results in Table 2, it also seems that the gap between NoisyTraj and the baselines is not widened despite the 2x noise increase. More data and analysis would be helpful here.\", \"Generalizability: The results in Table 5 actually show that the gap between the methods narrows when different train / test noises are used (while NoisyTraj still outperforms the baselines). Given that the baselines are rather weak (no good baselines seem to exist), it's unclear whether NoisyTraj truly generalizes well.\"], \"synthetic_noise\": \"The paper would be stronger if validation also covered real noise, e.g. from autonomous driving tasks. Real noisy observations with noise-free ground truth data are generally difficult to obtain though. One work that I'm aware of that analyzed the noise of a detection + tracking pipeline is https://arxiv.org/pdf/2004.01288, but they did not publish the dataset.\", \"questions\": [\"How does NoisyTraj perform with more noisy data, e.g. \\u03c3=2?\", \"How does NoisyTraj perform when the ground truth data also contains some (likely less) noise?\", \"Can you evaluate NoisyTraj on real noisy trajectories?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The manuscript tackles trajectory prediction with noisy inputs by proposing a denoising module with two main steps: i) generating clean trajectories via mutual information maximization, and ii) reconstructing noisy trajectories from masked segments to retain structure. The authors also introduce a ranking loss to produce more accurate predictions. 
The proposed denoising module is a plug-and-play module and is validated on ETH/UCY and Stanford Drone datasets (SDD) using different existing backbones.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The manuscript is well-organized and easy to follow.\\n\\n2. The use of mutual information to denoise input trajectories is both intuitive and effective. The reconstruction loss is well-motivated.\\n\\n3. Experimental results highlight the capability and generalizability of the proposed plug-and-play trajectory denoising module. \\n\\n4. The experiment section and the ablations in the appendix are very convincing.\", \"weaknesses\": \"The task is new and challenging, and there is not much real data to use for validation. This reviewer appreciates the authors' efforts, but additive Gaussian noise may not be the best way to simulate realistic sensor input. It may cause the model to overfit to certain types of noise.\", \"questions\": \"N/A\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents a method called NoisyTraj for trajectory prediction upon noisy observations. The key idea of the method is to learn a trajectory denoise model using the mutual information between trajectories. This method can be applied to different baseline prediction models. The effectiveness of the method is verified on two popular human trajectory prediction benchmarks.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper is well written. The motivation, method, and experiments are well presented.\", \"weaknesses\": \"1. The major weakness of this paper is that the problem formulation does not align with real-world scenarios. 
The paper assumes that observed trajectories are corrupted by noise, and the authors add specific noise (e.g., Gaussian) to clear observations to simulate noisy observations. However, in practical applications, trajectory observation noise and errors are much more complex. In detection, not only do bounding boxes have noise, but there are also numerous false positives and false negatives. In subsequent tracking, bounding boxes belonging to the same subject need to be associated across different timestamps. Incorrect associations may link one subject's box at time T to another subject's box at T+1. Therefore, errors in the association process can result in tracked trajectories that significantly deviate from ground truth trajectories. The paper's assumption that noisy observations only stem from imprecise observed positions is fundamentally different from actual noisy/erroneous observations obtained from tracking, making it far from practical applications.\\n\\n2. Another major weakness is the omission of several important prior works that have already explored trajectory prediction with noisy observations:\\n\\n[a] Yu, Rui, and Zihan Zhou. \\\"Towards robust human trajectory prediction in raw videos.\\\" 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2021.\\n\\n[b] Weng, Xinshuo, Boris Ivanovic, and Marco Pavone. \\\"MTP: Multi-hypothesis tracking and prediction for reduced error propagation.\\\" 2022 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2022.\\n\\n[c] Weng, Xinshuo, et al. \\\"Whose track is it anyway? improving robustness to tracking errors with affinity-based trajectory prediction.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.\\n\\n[d] Zhang, Pu, et al. 
\\\"Towards trajectory forecasting from detection.\\\" IEEE Transactions on Pattern Analysis and Machine Intelligence 45.10 (2023): 12550-12561.\\n\\nDifferent from NoisyTraj in this paper, which focuses solely on position-based noise, these prior works consider the comprehensive impact of tracking-related issues on prediction. For instance, paper [a] provides an analysis of how various tracking problems (including noisy tracks, missed detections, spurious tracks, and ID switches) affect trajectory prediction performance. NoisyTraj could serve as a trajectory smoothing component, similar to how the Holt-Winters method is employed in [a].\\n\\n3. Due to the strong coupling between tracking and prediction, addressing prediction upon noisy observations requires considering both tracking and prediction. However, this paper only focuses on prediction without considering tracking. Artificially adding noise to clean observations creates an artificial problem setting. Moreover, the evaluation metrics cannot simply adopt ADE and FDE from prediction upon clean observations. Since observed trajectories might be incorrect (rather than just having inaccurate positions), evaluation metrics need to consider both tracking and prediction. For example, ADE-over-recall curves are used in [a]&[e].\\n\\n[e] Weng, Xinshuo, et al. \\\"Inverting the pose forecasting pipeline with SPF2: Sequential pointcloud forecasting for sequential pose forecasting.\\\" Conference on robot learning. PMLR, 2021.\\n\\n4. Figure 3 demonstrates that artificially noisy trajectories do not match the patterns of real-world noisy tracklets. The offset in the noisy (blue) trajectory exceeds several person-distances. In practical tracking associations, these points would never be associated with the same trajectory. For reference on normal noise patterns, see Fig. 5 in [a] (IROS'21).\\n\\n5. In Section 4.2, the noise added to the SDD dataset is measured in meters, while the evaluation is in pixels. 
These experimental details need clarification.\\n\\n6. Expression issues:\\na) Line 360: \\\"blue\\\" arrow should refer to the \\\"green\\\" arrow in Figure 2?\\nb) Line 451: \\\"Kalman\\\" should refer to \\\"Wavelet\\\"?\", \"questions\": \"To address the identified weaknesses, the following suggestions are provided:\\n1. Discuss how the proposed method could be extended to handle realistic noise or error scenarios, such as false positives/negatives and association errors.\\n2. Provide a comparison of the proposed method with prior works.\\n\\nTo further improve the paper, the following suggestions are provided: \\n\\n3. Conduct additional experiments using realistic noise/error data instead of synthetic noise. \\n4. Clarify the scope of the proposed method and discuss its limitations in real-world scenarios. \\n5. Explore potential ways to incorporate NoisyTraj with prior works. \\n6. Discuss how the proposed method could be integrated with or extended to include tracking components in the future.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This paper addresses the challenge of noise in real-world trajectory prediction tasks, which arises from errors in sensors, detection, and tracking processes. The authors propose a noise-agnostic approach, NoisyTraj, capable of handling arbitrary types of noisy observations. NoisyTraj employs three key strategies: a mutual information-based mechanism for denoising, a reconstruction task to preserve trajectory structure, and a ranking loss to ensure superior performance with denoised data. 
The method has been evaluated on the ETH/UCY and Stanford Drone datasets (SDD), where it demonstrated significant improvements over baseline models in predicting trajectories from noisy observations.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"The concept of denoising historical trajectory data is innovative.\", \"The proposed denoising method is simple to implement and can be integrated into existing models as a plug-and-play component.\", \"The model shows promising performance in the experimental conditions set by the authors.\"], \"weaknesses\": [\"The relationship between the three proposed improvements, particularly the ranking loss, lacks a clear intrinsic connection. While the paper states that the ranking loss enhances performance, it does not provide a detailed explanation of why this is the case. It would be beneficial for the authors to include ablation studies on the hyperparameters of the loss function and clarify why adding ranking loss leads to better results compared to designing a prediction loss that only accounts for denoised prediction results.\", \"The experimental setup is insufficient. The two baselines used are overly simplistic, limiting the ability to assess the model\\u2019s effectiveness fully. Including the prediction results from the historical trajectory data before adding noise in all experiments would strengthen the comparisons.\", \"The paper lacks sufficient detail on the model\\u2019s training process. Specifically, whether the training dataset was also subjected to noise is unclear. Additionally, the limitation that the model training must be tailored to a specific trajectory prediction backbone, and that freezing the trajectory prediction backbone parameters may degrade performance, should be addressed in the main text.\"], \"questions\": \"The paper introduces noise into the original dataset to create noisy historical trajectory data. 
If the original data is considered noise-free, should the experiment include a similarity comparison between the denoised input and the pre-noise data? Alternatively, if the original data itself is considered noisy, should the experiment demonstrate that the model\\u2019s predictions using denoised data outperform those using the data before noise was added?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"none\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
7lpDn2MhM2
CHiP: Cross-modal Hierarchical Direct Preference Optimization for Multimodal LLMs
[ "Jinlan Fu", "huangfushenzhen", "Hao Fei", "Xiaoyu Shen", "Bryan Hooi", "Xipeng Qiu", "See-Kiong Ng" ]
Multimodal Large Language Models (MLLMs) still struggle with hallucinations despite their impressive capabilities. Recent studies have attempted to mitigate this by applying Direct Preference Optimization (DPO) to multimodal scenarios using preference pairs from text-based responses. However, our analysis of representation distributions reveals that multimodal DPO struggles to align image and text representations and to distinguish between hallucinated and non-hallucinated descriptions. To address these limitations, in this work we propose Cross-modal Hierarchical Direct Preference Optimization (CHiP). We introduce a visual preference optimization module within the DPO framework, enabling MLLMs to learn from both textual and visual preferences simultaneously. Furthermore, we propose a hierarchical textual preference optimization module that allows the model to capture preferences at multiple granular levels, including response, segment, and token levels. We evaluate CHiP through both quantitative and qualitative analyses, with results across multiple benchmarks demonstrating its effectiveness in reducing hallucinations. On the Object HalBench dataset, CHiP outperforms DPO in hallucination reduction, achieving improvements of 52.7% and 55.5% relative points based on the Muffin and LLaVA base models, respectively. We make all our datasets and code publicly available.
[ "Multimodal Large Language Models", "Preference Optimization", "Direct Preference Optimization", "Hallucination" ]
Accept (Poster)
https://openreview.net/pdf?id=7lpDn2MhM2
https://openreview.net/forum?id=7lpDn2MhM2
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vvnhsTwyEj", "ZLrNRUwgRf", "PJZN70e4Uc", "B5dh9Wpexw", "7dDJhc1wfB" ], "note_type": [ "meta_review", "official_review", "official_review", "official_review", "decision" ], "note_created": [ 1734645918666, 1730464702069, 1730434520228, 1730448688936, 1737523837092 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7411/Area_Chair_T1WE" ], [ "ICLR.cc/2025/Conference/Submission7411/Reviewer_NXL3" ], [ "ICLR.cc/2025/Conference/Submission7411/Reviewer_xtwk" ], [ "ICLR.cc/2025/Conference/Submission7411/Reviewer_Gv4h" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"metareview\": \"The paper studies hallucination in the MLLM. The author found that directly using DPO is not enough for alleviating the hallucination in the MLLM, as the model cannot distinguish the distribution b/w the non-hallucinated one and the hallucinated one. The author proposed a cross-modal hierarchical DPO to alleviate this issue. Concretely, the author proposed a combination loss on the response level, segment level, and token level to assign rewards at different granularity. The author also proposed to incorporate a preference loss on the image side.\", \"strength\": \"1. The paper is easy to read and easy to follow\\n2. The baselines are clearly presented\\n3. The proposed approach achieves good performance by reducing the hallucination.\\n4. The proposed approach is novel.\", \"weakness\": \"1. Didn't clearly describe the tech detail in the original version.\\n2. Didn't show the performance on non-hallucination benchmarks.\\n\\nThe author addressed those weaknesses during the rebuttal. I would recommend accept.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers mainly complained about three things:\\n1. tech details are not very clear.\\n2. lack the non-hallucination benchmarks.\\n3. the selection of images used in the rejected samples.\\n\\nDuring the rebuttal, the author responded to each clearly. 
One reviewer improved the score. The second reviewer was satisfied with the results.\\n\\nOnly one reviewer voted borderline reject. During the rebuttal, the reviewer was satisfied with the response. Therefore, I would consider the concerns to have been addressed.\"}", "{\"summary\": \"Introduces a new optimisation recipe to reduce hallucinations for multimodal models.\", \"hierarchical_textual_preference_optimisation_tries_to_solve_the_problem_that_response_level_preference_optimizations_have\": \"they can\\u2019t clearly identify what segments/tokens contain hallucinations. They consider three levels of preference optimisation for text: response-level, segment-level, and token-level.\\nVisual Preference Optimisation tries to reduce reliance on large language models, which start hallucinating (ignoring the image) once the image is not well aligned. They take the ground-truth image and create a synthetic rejection image (using e.g. rotating, cropping, adding noise etc).\\nCombining both the hierarchical textual preference optimisation, on all three levels, and the visual preference optimisation, they introduce CHiP. They ablate each component individually, but show that combining all significantly improves performance on hallucination benchmarks.\\n\\n(update: and they show equality in non-hallucination benchmarks.)\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": [\"All code and datasets are public.\", \"Well written.\", \"Is very clear about how baselines should be compared (not everything can be directly compared, but they still include them).\", \"Introduces all benchmarks and evaluation tasks/metrics in the paper with a quick overview.\", \"Results seem strong for the benchmarks this paper focuses on. Good analysis of the components that were used to achieve this.\"], \"weaknesses\": \"1. While showing improvements on the benchmarks that this paper focused on, it does not show the impact on non-hallucination benchmarks. 
This makes it harder to estimate whether this is useful in training recipes in \\u2018the wild\\u2019.\\n2. All of the ablations/experiments to hill-climb results were done using the final evaluation sets (?). HallusionBench is used in the final overview, and it still outperforms DPO, but the results don\\u2019t seem to be as clear as the benchmarks that were mainly used during hyper parameter estimations.\\n3. Nit: \\\"Reference results (these results cannot be directly comparable)\\\" (page 7) -> \\\"cannot be directly compared\\\" or \\\"are not directly comparable\\\"\\n4. Nit: not sure if we already need to mention \\\"We select the strategy with the best performance, which generates the rejection image by adding noise ... \\\" (page 5) there, since it seems to be part of results.\", \"questions\": \"1. \\u201cEmpirically, we set the weight of response-level preference optimisation to 1\\u201d (page 8) \\u2014 can you share those empirical results?\\n2. \\u201cWe found that the best performance was achieved when \\u03bb = 1 and \\u03bb = 3 for the Muffin and LLaVA-1.6 frameworks\\u201d (page 8) \\u2014 are you not afraid of overfitting on the dataset with these hyperparamters? You\\u2019re not using a validation set vs test set here, right?\\n3. On page 6, \\\"Implementation Details\\\", how were 0.5 and 0.1 chosen?\\n3. What is the impact of this preference optimisation stage for the performance on existing benchmarks? You show it improves on hallucination benchmarks, does it keep its performance on previously measured benchmarks?\\n4. \\\"To make the results more reliable, we invited experts to manually annotate the data to compare\\\" (page 7) -- Do you have details about these annotators? 
How many/anything special that can be shared?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"Direct Preference Optimization (DPO) is frequently used to address the hallucination problem in Vision-Language Models (VLMs). The paper identifies a limitation with DPO, noting that it does not effectively align image and text representations, and proposes Cross-modal Hierarchical Direct Preference Optimization (CHiP) as a solution to this issue.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is highly readable.\", \"It presents a novel perspective on the limitations of traditional DPO, highlighting that it does not apply weight for important segments, resulting in the possibility of applying DPO even to non-hallucinatory segments of a rejected response.\", \"The idea of assigning higher rewards to important segments is innovative and convincing.\", \"The approach of learning both Textual and Visual Preferences is new and valuable.\", \"The experiments effectively demonstrate the hallucination mitigation, and the overall results are strong.\"], \"weaknesses\": [\"While the paper mentions that Visual Preference Optimization trains the model to understand images that better match the chosen response, the concept of \\\"Visual Preference\\\" itself is unclear.\", \"The use of augmented images as rejection images is confusing, as it is unclear why semantically similar images are used as rejection images. In Figure 5, except for (c) Blackness and (f) Random, the other images share the same semantics. This approach differs from typical augmentation techniques that increase model robustness by training with transformed images. If (c) and (f) are used as rejection images, it can be understood as a way to prevent generating responses without observing the image. 
However, other images retain the same semantics, making it difficult to consider them as proper rejection images.\"], \"questions\": [\"How are segments determined in Segment-level Preference Optimization? Are partially pre-modified sentences prepared for this purpose?\", \"I initially thought Response-level Preference Optimization also involved optimization at the token level. Could you clarify the difference between Token-level and Response-level Preference Optimization?\", \"In Equation (8), does \\\"sg\\\" refer to stop gradient? I\\u2019m curious about the reasoning behind this.\", \"What could be the reason for the lower performance in fA and Cover in Table 1?\", \"In Table 3, what insight is expected from the experiment comparing training vs. freezing the Visual Encoder, and what is the purpose of this comparison?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces Cross-modal Hierarchical Direct Preference Optimization (CHiP). This approach incorporates a visual preference optimization module alongside a hierarchical textual preference optimization module (preference learning in response, segment, and token-level), allowing MLLMs to learn from both visual and textual preferences across various levels of granularity.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. This paper proposes to eliminate hallucinations from multiple levels, including response-level, segment-level, and token-level.\\n\\n2. The proposed Visual Preference Optimization is interesting, make model less over-relied on language counter part.\\n\\n3. The proposed method achieves solid results on multiple hallucination benchmarks.\", \"weaknesses\": \"1. The technical details of some methods are not clear enough.\\n\\n2. lacks experiments on general capability benchmarks.\\n\\n3. 
The selection of the rejection image in the proposed Visual Preference Optimization leaves room for improvement.\", \"questions\": \"1. The explanation of Segment-level Preference Optimization is not clear. The authors claim that \\\"we assign higher rewards to the segments that differ between the chosen response and the rejection response.\\\". How is the non-hallucinated and hallucinated segment pair determined? If not human-labeled, will wrongly labeled segments affect the optimization? Meanwhile, the Token-level Preference Optimization seems strange. From Equation 9 we can see the whole sequence is used during reward calculation, but why use the whole sentence when the non-hallucinated and hallucinated segment pair is labeled? Would the token-level strategy work if it only considered the tokens in the labeled segments?\\n\\n2. Lacks general capability evaluation; it is known that preference learning may harm general understanding capability. Can the authors provide results on general capability benchmarks, such as MMMU and MMBench? Meanwhile, more human evaluation is required: does the proposed method eliminate hallucination by making the model less talkative? For example, the cover rate in AMBER decreases after optimization.\\n\\n3. The proposed visual preference optimization and its results are interesting. However, the IMAGE CONSTRUCTION STRATEGY remains to be explored. From the results, we can see that rejection images that differ a lot from the original image can lead to sub-optimal results. A strategy that may outperform diffusion is finding rejection images as close to the original as possible. For example, in Figure 5, a rejection image may also include a bear, ocean, and mountain, but be inconsistent with the correct response.\\n\\n4. Formatting errors. 
Names are unclear and inconsistent (CMDPO appears in Table 3 but is not mentioned, and HCMDPO in Figure 9), and Table 6 is written as a figure.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}" ] }
7liN6uHAQZ
Sketching for Convex and Nonconvex Regularized Least Squares with Sharp Guarantees
[ "Yingzhen Yang", "Ping Li" ]
Randomized algorithms play a crucial role in efficiently solving large-scale optimization problems. In this paper, we introduce Sketching for Regularized Optimization (SRO), a fast sketching algorithm designed for least squares problems with convex or nonconvex regularization. SRO operates by first creating a sketch of the original data matrix and then solving the sketched problem. We establish minimax optimal rates for sparse signal estimation by addressing the sketched sparse convex and nonconvex learning problems. Furthermore, we propose a novel Iterative SRO algorithm, which reduces the approximation error geometrically for sketched convex regularized problems. To the best of our knowledge, this work is among the first to provide a unified theoretical framework demonstrating minimax rates for convex and nonconvex sparse learning problems via sketching. Experimental results validate the efficiency and effectiveness of both the SRO and Iterative SRO algorithms.
[ "Sketching", "Random Projection", "Minimax Rates" ]
Accept (Poster)
https://openreview.net/pdf?id=7liN6uHAQZ
https://openreview.net/forum?id=7liN6uHAQZ
ICLR.cc/2025/Conference
2025
{ "note_id": [ "x21WQK6Lo2", "vUjcIOlms1", "uWGMdYKmzm", "s9s2Oah4tH", "ppoSbaF4iF", "nJJjoOicBT", "nICWw8JsRy", "n62MjcGex0", "lZ8XiW9JV2", "gVApsWpYVs", "e9Yuj12628", "crW4i97u5n", "VnrmBX6Icz", "UaUDaQHcf7", "SrvpuRLglF", "QXVLwD9K6R", "PCwNGUip0C", "P728mwRlvm", "Lrk3SXqV51", "HfSd6n14zp", "AHIWwXdvys", "7AMTUVki3J", "4EzfgCZKbY" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "decision" ], "note_created": [ 1733164757639, 1732988969401, 1731440890315, 1732907636105, 1733288610233, 1732921123730, 1733175544647, 1732916982399, 1733173865603, 1730692436805, 1733151308891, 1733177953224, 1734504732331, 1733164039074, 1732917208277, 1733288764006, 1730174386678, 1733157091607, 1733173613467, 1730474008837, 1732920939101, 1732907294253, 1737524133011 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11596/Reviewer_HrLh" ], [ "ICLR.cc/2025/Conference/Submission11596/Authors" ], [ "ICLR.cc/2025/Conference/Submission11596/Reviewer_2raj" ], [ "ICLR.cc/2025/Conference/Submission11596/Authors" ], [ "ICLR.cc/2025/Conference/Submission11596/Authors" ], [ "ICLR.cc/2025/Conference/Submission11596/Authors" ], [ "ICLR.cc/2025/Conference/Submission11596/Reviewer_Y2Sn" ], [ "ICLR.cc/2025/Conference/Submission11596/Authors" ], [ "ICLR.cc/2025/Conference/Submission11596/Authors" ], [ "ICLR.cc/2025/Conference/Submission11596/Reviewer_X44j" ], [ "ICLR.cc/2025/Conference/Submission11596/Reviewer_X44j" ], [ "ICLR.cc/2025/Conference/Submission11596/Authors" ], [ "ICLR.cc/2025/Conference/Submission11596/Area_Chair_p96u" ], [ "ICLR.cc/2025/Conference/Submission11596/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission11596/Authors" ], [ "ICLR.cc/2025/Conference/Submission11596/Authors" ], [ "ICLR.cc/2025/Conference/Submission11596/Reviewer_HrLh" ], [ "ICLR.cc/2025/Conference/Submission11596/Reviewer_Y2Sn" ], [ "ICLR.cc/2025/Conference/Submission11596/Authors" ], [ "ICLR.cc/2025/Conference/Submission11596/Reviewer_Y2Sn" ], [ "ICLR.cc/2025/Conference/Submission11596/Authors" ], [ "ICLR.cc/2025/Conference/Submission11596/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ] ], "structured_content_str": [ "{\"title\": \"Thanks for the response\", \"comment\": \"Thanks authors for the effort in addressing my misunderstandings and confusion. I have raised the score for the paper.\"}", "{\"title\": \"Response to Reviewer 2raj Part 1\", \"comment\": \"We appreciate the review and the suggestions in this review. The raised issues are addressed below. In the following text, the line numbers are for the revised paper without special notes.\\n\\n**(1) Novelty of Our Results and Their Significant Difference from [Pilanci2016]**\\n\\n**We respectfully point out that the claim \\u201cAn incremental result is not necessarily a bad result\\u2026\\u201d in this review is a factual misunderstanding**. Our results are novel and significantly different from those in [Pilanci2016], which is detailed in line 437-459 of the revied paper and copied below for your convenience.\\nIt is remarked that [Pilanci2016] only handles convex constrained least square problems of the form $\\\\min _ {\\\\mathbf X \\\\in \\\\mathcal C} || \\\\mathbf X \\\\mathbf \\\\beta-\\\\mathbf y ||^2$ where the constraint set $\\\\mathcal C$ is a convex set, while our results cover regularized convex and nonconvex problems with minimax optimal rates. It is emphasized that the techniques in [Pilanci2016] can never be applied to the regularized problems considered in this paper. 
[Pilanci2016] heavily relies on a certain complexity measure of the constraint set $\\mathcal C$, such as the Gaussian width. It shows that the complexity of such a constraint set $\\mathcal C$ is bounded, so that sketching with such a constraint set $\\mathcal C$ of limited complexity only incurs a relatively small approximation error. However, there is no such constraint set in the original problem (Eq. (1)) or the sketched problem (Eq. (2)), so such complexity-based analysis for sketching cannot be applied to this work. Furthermore, as mentioned in Section 1.1, Iterative SRO does not need to sample the projection matrix and compute the sketched matrix at each iteration, in contrast with IHS [Pilanci2016] where a separate projection matrix is sampled and the sketched matrix is computed at each iteration. As evidenced by Table 1 in Section 7.1, Iterative SRO is more efficient than its \\u201cIHS\\u201d counterpart, where sketching is performed at every iteration, while enjoying comparable approximation error.\\n\\n**(2) Improved Presentation**\\n\\nThe meaning of $\\\\tilde {\\\\mathbf \\\\beta}^*$ has been clearly indicated in every theoretical result in the revised paper. In particular, when using Iterative SRO for sparse convex learning in Section 4.1, it is clearly stated in Theorem 4.1 that $\\\\tilde {\\\\mathbf \\\\beta}^* = \\\\mathbf \\\\beta^{(N)}$ (line 262-263 of the revised paper or line 254-255 of the original submission). When using SRO for sparse nonconvex learning in Section 4.2, it is clearly stated in Theorem 4.2 that $\\\\tilde {\\\\mathbf \\\\beta}^*$ is the optimization result of the sketched problem (Eq. (2)) in line 368-369 of the revised paper. To make the meaning of $\\\\tilde {\\\\mathbf \\\\beta}^*$ even clearer, it is stated in line 237-238 of the revised paper that \\u201cIn Section 4.1, $\\\\tilde {\\\\mathbf \\\\beta}^*$ is obtained by Algorithm 1 through $\\\\tilde {\\\\mathbf \\\\beta}^* = \\\\mathbf \\\\beta^{(N)}$. 
In Section 4.2, $\\\\tilde {\\\\mathbf \\\\beta}^*$ is the optimization result of the sketched problem (2)\\u201d (Section 4.1 in line 238 should be Section 4.2).\\n\\nWe have improved the organization of the theoretical results, and Theorem 5.2 and Theorem 5.4 (which now become Theorem D.2 and Theorem D.3 of the revised paper), as intermediate results for the proof of Theorem 3.1, have been moved to Section D of the revised paper, following the suggestion of this review for the improved clarity of this paper. We have also fixed the presentation issues and typos mentioned in all the questions in this review. \\n\\n**\\u201d\\u2026wouldn\\u2019t it be more efficient to draw a new sketch at each iteration in the iterative SRO algorithm ?\\u201d** \\n\\nAs explained in line 443-447 of the revised paper, if we draw a new sketch at each iteration of Iterative SRO, then at each iteration a new projection matrix $\\\\mathbf P \\\\in {\\\\mathbb R}^{\\\\tilde n \\\\times n}$ is sampled and a new sketched matrix $\\\\tilde {\\\\mathbf X} = \\\\mathbf P \\\\mathbf X$ is computed, which incurs considerably more computational cost for large-scale problems with large data size $n$ compared to the proposed Iterative SRO described in Algorithm 1, where only a single projection matrix is sampled and a single sketched matrix is computed throughout all the iterations of Iterative SRO.\\n\\n**References**\\n\\n[Pilanci2016] Pilanci et al. Iterative Hessian sketch: Fast and accurate solution approximation for constrained least-squares. Journal of Machine Learning Research, 2016.\"}", "{\"summary\": \"The paper introduces an extension of the Iterative Hessian Sketch from Pilanci and Wainwright to the regularized setting. The authors first show convergence of the algorithm for sparse signal estimation (i.e. the LASSO) for both the traditional l1 norm as well as for a non convex penalty. 
In both cases, under various assumptions, they are able to derive a minimax error bound on the deviation between the N^th iterate of the single step algorithm (which they call SRO and which amounts to a minimization of the sketched formulation) and the groundtruth. They then extend their result to a general convex setting and show how to improve convergence by means of an iterative version of the SRO algorithm which is essentially an extension of the IHS algorithm from P and W to regularized formulations.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The paper is well written on the whole. The beginning of the paper remains very close to the paper \\u201cIterative Hessian Sketch: Fast and Accurate Solution Approximation for Constrained Least-Squares\\u201d from Pilanci and Wainwright. In particular compare Equation (5) with Equation (25) in this last paper, or (24) with (7). From what I understand, the main difference lies in the use of a regularizer. An incremental result is not necessarily a bad result but I\\u2019m still not fully decided about whether it is sufficiently original to be accepted.\", \"weaknesses\": \"The paper is not bad but it should clearly be reorganized. The distinction between the solution of the sketched program and the final iterate of the SRO/iterative SRO algorithms is not very clear. Does \\\\tilde{\\\\beta} represent a critical point of (2) or the N^th iterate of Iterative SRO ? This is especially unclear for section 4.2. where the meaning of \\\\tilde{\\\\beta} is not clarified in the statements of Corollary 4.2., and Theorem 4.5. 
but is mentioned as a critical point in the statement of Theorem 5.2.\\n\\nAlso, your key contribution is the sketching step, so the \\tilde{n} should appear more clearly in all your statements (see my comments below)\", \"questions\": [\"line 30 \\u201cSketching algorithms has been used to approximately\\u201d \\u2014> \\u201cSketching algorithms have been used to approximately\\u201d\", \"line 62, Equation (30) the lines 129-131 should appear way earlier. In particular, you want to better motivate the fact that the sketching is restricted to the quadratic term. I understand that restricting the sketching to the quadratic term might improve convergence but is that really necessary? If you still need to compute the product y^TX when you do the gradient descent, isn\\u2019t that equivalent to the X^TX*\\\\beta ? I.e. in the latter you just have 2 more applications of a vector to the matrix X.\", \"line 68, you give the definition of the semi-norm ||u||_X twice\", \"line 116 I would use the cardinality of the support to denote the number of non zero elements of a matrix X instead of introducing the notation nnz. I.e. |supp(X)|. It is a detail though\", \"line 123: \\u201cIt is comprised of \\u201d \\u2014> \\u201cIt consists of\\u201d\", \"lines 125-127 are quite obvious to me. I think you could just remove them\", \"line 136-140, in the Definition 2.1. is there any reason you use \\\\ell^2 instead of \\\\ell_2 ?\", \"line 142, Definition 2.2. looks more like a proposition (Lemma perhaps) to me.\", \"line 151 : \\u201cTaking derivative with respect to\\u201d \\u2014> \\u201ctaking the derivative with respect to\\u201d?\", \"line 189-190 \\u201cis a more accuracy\\u201d \\u2014> \\u201cis a more accurate\\u201d\", \"line 170 \\u201cconsectively\\u201d \\u2014> \\u201cconsecutively\\u201d or \\u201citeratively/alternatively\\u201d?\", \"The structure of the proof sketch for Theorem 3.1. is good but your exposition of formulations (5) (6) and (7) is unclear. 
You should either keep the formulation from (6) to (7) (aside from the sketching) or provide at least a (short) sentence indicating that (7) is obtained by moving the \\beta^{(t-1)} out of the quadratic term to the linear term.\", \"Lines 295-296, Statement of Assumption 3, is the concavity parameter \\zeta_- that you introduce in Assumption 3 the same as the zeta_- that you use in Assumption 2? if so it should be clarified in Assumption 2 (i.e. you could say something like \\u201cwhere zeta_- is known as the concavity parameter\\u201d or slightly modify Assumption 3 by saying \\u201cThe concavity parameter \\\\zeta_- introduced in Assumption 2\\u2026\\u201d)\", \"In the statements of Theorem 4.1, Corollary 4.2, Theorem 4.5 (this is especially true for Theorem 4.5), I would remove the explicit constants. The relation between the values you get for \\tilde{n} and the constants is not clear anyway, so those should go (keep the \\varepsilon/1-\\varepsilon for Corollary 4.2). If you really want, you can add a one-line sentence for Corollary 4.2. indicating how they depend on \\rho_{\\mathcal{L}, +} and \\rho_{\\mathcal{L}, -}.\", \"I recommend that you change the statement of Theorems 4.1, 4.5 and 5.2. so that they look more like the statement of Corollary 4.2. in terms of how you introduce the \\tilde{n}. I.e. the lower bounds on \\tilde{n} should not be introduced as a side comment. \\tilde{n} should appear before you introduce \\tilde{\\beta}. E.g. Let \\tilde{\\beta} denote the solution to (5) with a sketching operator of size \\tilde{n}\\times n (or even better \\u201cfor the sketching parameter \\tilde{n}\\u2026\\u201d)\", \"I\\u2019m not sure Figure 1 is really meaningful. It only indicates a constant reduction in the relative error. What is more interesting is a bound such as (8)\", \"Lines 403-407, Definition 5.2. 
From what I understand, the degree of non convexity for a particular value of kappa gives you a measure of whether the function is more convex than the function \\kappa \\|x\\|^2 ? I.e. \\theta_h will give you a negative value if the Hessian of the function is more psd than -\\kappa I and positive otherwise? Remark 5.1. could be clearer\", \"Lines 416-420, in the statement of Theorem 5.2., what do you mean by an \\u201coblivious\\u201d \\\\ell^2 subspace embedding? Are you referring to an embedding that satisfies Definition 2.1. ? Then why not just say it like that? If you use space to introduce Definition 2.1, leverage it.\", \"line 424 - 426 : \\u201cby arbitrary critical point \\u201d \\u2014> \\u201cby any arbitrary critical point\\u201d\", \"lines 428-431, from what I understand \\\\theta_{h_\\\\lambda} is lower bounded by \\\\kappa? it might be good to add even a short clarifying sentence\", \"The sentence on line 430 does not make sense. Do you mean that \\u201cif the subdifferential is Lipschitz or if \\\\kappa can be set to 0 then you get an admissible error bound?\\u201d If so, this is not what Theorem 5.4. states.\", \"Statement of Theorem 5.4., I would remove the sentence following Equation (17) as it refers to a result that does not appear?\", \"I think I would reorganize sections 5.1. and 5.2. At the end of the day, the two Theorems 5.2. and 5.4 are really steps towards the proof of Theorem 3.1. I would add a paragraph below this last theorem explaining how you use the degree of non convexity to prove it. 
This would make it possible to remove the statements of Theorems 5.1 and 5.2\", \"general question: wouldn\\u2019t it be more efficient to draw a new sketch at each iteration in the iterative SRO algorithm ?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer X44j Part 2\", \"comment\": \"**(3) Assumption of full column rank in Theorem 3.1**\\n\\nWe would like to emphasize that Theorem 3.1 holds in either of the two cases: (1) the regularizer $h = h_{\\\\lambda}$ is convex, or (2) $\\\\mathbf X$ is of full rank. Throughout this paper, we only apply Theorem 3.1 to a convex regularizer $h$, such as in the proof of the minimax optimal rates by sketching for sparse convex learning in Theorem 4.1, and in such applications of Theorem 3.1 we do not require $\\\\mathbf X$ to have full rank. **In summary, all the theoretical and empirical results of this paper using Theorem 3.1 do not need $\\\\mathbf X$ to have full rank**.\\n\\nOn the other hand, Theorem 3.1 can also be applied to a nonconvex regularizer $h$. In order to ensure that the Iterative SRO algorithm can also enjoy the exponential decay of the approximation error (in Eq. (8) of Theorem 3.1), we need a full-rank $\\\\mathbf X$. It is still an open problem in the sketching literature how iterative sketching can provide the same guarantee for nonconvex functions as for convex functions, and our Theorem 3.1 gives a partial solution to this open problem under the condition that $\\\\mathbf X$ is of full rank.\\n\\n**(4) Numerical Results for Nonconvex Regularization** \\n\\nThank you for your suggestion, and we present the empirical study of SRO for sparse nonconvex learning with capped-$\\\\ell^1$ regularization in Section C.5 of the revised paper.\\n\\n\\n**References**\\n\\n[YangLi2021] FROS: fast regularized optimization by sketching. 
In IEEE International Symposium on Information Theory, 2021.\"}", "{\"title\": \"Explanation and remedies for the unintended, coincidental and the very limited text overlap with [YangLi2021] (only in the introduction to the basic background)\", \"comment\": \"Dear AC and Reviewers,\\n\\nThank you for your time reviewing and handling this paper. All the technical concerns about this paper have been addressed in the rebuttal phase.\\n\\nWe would like to let you know that there is truly unintended, coincidental, and very limited text overlap with [YangLi2021] where we present an introduction to the basic background on sketching for convex and nonconvex optimization problems in this paper. We thank reviewer Y2Sn, who mentioned such similarities to [YangLi2021]. We would like to emphasize again that such coincidental text overlap is limited to the basic introduction to the general background in sketching for convex and nonconvex optimization (in the abstract and the beginning part of the introduction section), and it never affects the novelty of our results and their significant differences from [YangLi2021]. In fact, reviewer X44j mentioned that the issue about the difference from [YangLi2021] has been solved. Importantly, reviewer Y2Sn mentioned that \\\"I completely agree that there is a lot of theory contained in this paper which is not present in [YangLi2021]. As I have stated, I think that the difference between the papers points to enough novelty!\\\" and \\\"I reiterate that my score is a 6, and I hence think that the quality of the paper is good enough to be accepted.\\\"\\n\\n\\n**Remedies.** **As a fundamentally important remedy, [YangLi2021] has already been cited sufficiently in the discussion in Section 5 of the revised paper, where the differences of our results from [YangLi2021] are sufficiently discussed and acknowledged by reviewer X44j and reviewer Y2Sn, and also in the new Introduction attached below**. 
Moreover, we have presented a new revised version for both the Abstract and the Introduction section (attached below), and now there is no overlap with [YangLi2021] in the Abstract and the Introduction section except for commonly used standard mathematical description in this literature. We will update the Abstract and the Introduction section of the final paper accordingly using their revised version.\\n\\n \\n**New Abstract (to replace the current Abstract)**\\n\\nRandomized algorithms play a crucial role in efficiently solving large-scale optimization problems. In this paper, we introduce Sketching for Regularized Optimization (SRO), a fast sketching algorithm designed for least squares problems with convex or nonconvex regularization. SRO operates by first creating a sketch of the original data matrix and then solving the sketched problem. We establish minimax optimal rates for sparse signal estimation by addressing the sketched sparse convex and nonconvex learning problems. Furthermore, we propose a novel Iterative SRO algorithm, which significantly reduces the approximation error geometrically for sketched convex regularized problems. To the best of our knowledge, this work is among the first to provide a unified theoretical framework demonstrating minimax rates for convex and nonconvex sparse learning problems via sketching. Experimental results validate the efficiency and effectiveness of both the SRO and Iterative SRO algorithms.\\n\\n**New Introduction (to replace the corresponding parts in the current Introduction Section)**\\n\\nRandomized algorithms for efficient optimization are a critical area of research in machine learning and optimization, with wide-ranging applications in numerical linear algebra, data analysis, and scientific computing. 
Among these, matrix sketching and random projection techniques have gained significant attention for solving sketched problems at a much smaller scale (Vempala, 2004; Boutsidis & Drineas, 2009; Drineas et al., 2011; Mahoney, 2011; Kane & Nelson, 2014). These methods have been successfully applied to large-scale problems such as least squares regression, robust regression, low-rank approximation, singular value decomposition, and matrix factorization (Halko et al., 2011; Lu et al., 2013; Alaoui & Mahoney, 2015; Raskutti & Mahoney, 2016; Yang et al., 2015; Drineas & Mahoney, 2016; Oymak et al., 2018; Oymak & Tropp, 2017; Tropp et al., 2017). Regularized optimization problems with convex or nonconvex regularization, such as the widely used Lasso and ridge regression, play a fundamental role in machine learning and statistics. While prior research has extensively explored random projection and sketching methods for problems with standard convex regularization (Zhang et al., 2016b) or convex constraints (Pilanci & Wainwright, 2016), there has been limited focus on analyzing regularized problems with general convex or nonconvex regularization frameworks. \\n\\nWe would like to emphasize that while [YangLi2021] also studies sketching for regularized optimization problems, the focus and results of this work are completely different from those in [YangLi2021], with a detailed discussion deferred to Section 5. (to be continued)\"}", "{\"title\": \"Response to Reviewer HrLh Part 2\", \"comment\": \"**(5) \\\"\\u2026why the running time for SRO is longer than those iterative algorithm?...\\\"**\\n\\nThe reason that the running time of SRO is longer than that of Iterative SRO is explained in line 511-518 of the revised paper, which is copied below for your convenience. 
In summary, the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) is used as the optimization algorithm to solve the original and the sketched problems, and Iterative SRO can use much less iterations of FISTA compared to SRO, explaining that Iterative SRO takes less running time than SRO in Table 1.\\n\\nThe maximum iteration number of FISTA for Iterative SRO ($2000$) is much smaller than that for SRO ($10000$). This is because Iterative SRO uses an iterative sketching process where the approximation error is geometrically reduced with respect to the iteration number $t$ in Algorithm 1, so that each iteration of Iterative SRO is only required to have a moderate approximation error which can be larger than the approximation error of SRO thus a smaller iteration number of FISTA suffices for each iteration of Iterative SRO. Such analysis also explains the fact that Iterative SRO is much faster than SRO, and in our experiment the maximum iteration number $N$ for Iterative SRO described in Algorithm 1 is always not greater than $5$.\\n\\n**References**\\n\\n[Pilanci2016] Pilanci et al. Iterative hessian sketch: Fast and accurate solution approximation for constrained least-squares. Journal of Machine Learning Research, 2016.\"}", "{\"title\": \"Clarifications\", \"comment\": \"*The constant in Theorem D.2*\\n\\nRegarding the $\\\\alpha_0$, I think now understand the issue. First, let me apologize for misunderstanding. I simply did not understand where the $\\\\alpha_0$ came from. The reason for this is that I probably do not know the concrete concentration inequality the authors are referring to. The one I know (from e.g Vershynin) is $\\\\mathbb{P}(|\\\\frac{1}{n}\\\\Vert{g}\\\\Vert^2-1|\\\\geq u) \\\\leq 2\\\\exp(-cn\\\\max(u,u^2))$, which in this setting would mean that $|F(v)-1|\\\\leq \\\\Theta(\\\\delta^2)$ with a probability smaller than $2\\\\exp(-\\\\tilde{c}n\\\\delta^2)$. 
I now realize that there is a multiplicative 2 in front of the exponential term which is not present in the authors' bound -- this can of course be removed by bringing the $\\alpha_0$ into the game and then arguing that $n$ is large enough. This is not a huge issue - it is ultimately a question of readability and taste. \\n\\n*The ethical flagging*\\n\\nI completely agree that there is a lot of theory contained in this paper which is not present in [YangLi2021]. As I have stated, I think that the difference between the papers points to enough novelty! \\n\\nTo explain my wording 'highly unlikely', I was referring to the exactly equal experimental results in the first version of the paper, not to the theorems - and I only used the wording due to the randomized nature of the algorithms. Using the word 'error bounds' here is a genuine mistake - I apologize.\\n\\nThe reason I still think this issue should be looked at by someone else (hence the flag) is that in [the version of [YangLi2021] that I can find](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9517998), there are long passages that are verbatim the same as in the paper at hand. As I have explained, this may or may not be a problem depending on circumstances that I cannot check unless I try to circumvent the double blind reviewing process, so I will keep the flag. I will refrain from commenting any more on this.\\n\\n*Final word*\\n\\nI reiterate that my score is a 6, and I hence think that the quality of the paper is good enough to be accepted.\"}", "{\"title\": \"Response to Reviewer Y2Sn Part 1\", \"comment\": \"We appreciate the review and the suggestions in this review. The raised issues are addressed below. 
In the following text, the line numbers are for the revised paper.\\n\\n**(1) Improved Presentation**\\n\\n**\\\"It takes some effort to sort out the relations between the problems (2), (5) and (6)\\u2026\\\"** \\n\\nHerein we provide a detailed description of problems (2), (5) and (6). Problem (2) is the sketched problem corresponding to the original problem (1), where the data matrix $\\mathbf X$ is replaced by its sketched version $\\tilde {\\mathbf X}$ in problem (2). \\n\\nProblem (5) is the intermediate problem at every iteration of the Iterative SRO described in Algorithm 1. The solution to problem (5), $\\mathbf \\beta^{(t)}$, is supposed to approximate the original solution $\\mathbf \\beta^*$ better than its predecessor, $\\mathbf \\beta^{(t-1)}$, obtained at the previous iteration of the Iterative SRO. Theorem 3.1 shows that $\\mathbf \\beta^{(t)}$ is geometrically close to $\\mathbf \\beta^*$ (by Eq. (8)).\\n\\nProblems (6)-(7) are used to explain the proof of Theorem 3.1, in particular, why $\\mathbf \\beta^{(t)}$ can be geometrically close to $\\mathbf \\beta^*$. At the $t$-th iteration of Iterative SRO, the solution to problem (6) is in fact the gap between $\\mathbf \\beta^*$ and $\\mathbf \\beta^{(t-1)}$, or $\\mathbf \\beta^* - \\mathbf \\beta^{(t-1)}$. We then solve the sketched version of problem (6), which is problem (7), so that the solution $\\hat {\\mathbf \\beta}$ to problem (7) is an approximation to $\\mathbf \\beta^* - \\mathbf \\beta^{(t-1)}$. If such an approximation has a relative-error guarantee as in Eq. (3), that is, $|| \\hat {\\mathbf \\beta} - (\\mathbf \\beta^* - \\mathbf \\beta^{(t-1)}) || _ {\\mathbf X} \\le \\rho || \\mathbf \\beta^* - \\mathbf \\beta^{(t-1)} || _ {\\mathbf X}$, then since $\\mathbf \\beta^{(t)} = \\mathbf \\beta^{(t-1)} + \\hat {\\mathbf \\beta} $, we have $|| \\mathbf \\beta^{(t)} - \\mathbf \\beta^*|| _ {\\mathbf X} = || \\hat {\\mathbf \\beta} - (\\mathbf \\beta^* -\\mathbf \\beta^{(t-1)}) || _ {\\mathbf X} \\le \\rho || \\mathbf \\beta^* -\\mathbf \\beta^{(t-1)} || _ {\\mathbf X} $. As a result, by mathematical induction, we have $|| \\mathbf \\beta^{(t)} - \\mathbf \\beta^*|| _ {\\mathbf X} \\le \\rho^{t} ||\\mathbf \\beta^*|| _ {\\mathbf X} $ for all $t \\ge 0$. \\nWe will put the above detailed description in the final version of this paper.\\n\\n**\\\"\\u2026proof of Theorem 5.2 are written down in an unnecessarily complicated\\u2026\\\"**, \\n**\\\"$\\kappa$ is still present \\u2026 set to $0$\\\"**. \\n\\nWe have revised the proof of this theorem following your suggestions (now it becomes Theorem D.2 of the revised paper). Please kindly refer to the \\u201cProof of Theorem D.2\\u201d in Section D of the appendix of the revised paper. In addition, we have removed the term with $\\kappa$ in this equation (now Eq. (37) of the revised paper).\\n\\n**\\\"There are many steps in the proof of the main result that are only sketched, in particular for $L_h$-smooth $h$...\\\"**. \\n\\nThe detailed steps for $L_h$-smooth $h$ are provided in the revised paper; please kindly refer to line 1080-1100 in the proof of Theorem D.3, and line 1121-1133 in the proof of Theorem 3.1 for these detailed steps.\\n\\n**\\\"The term JLT in Theorem D.4 is undefined\\u2026\\\"**. 
\\n\\nIn Theorem D.6 of the revised paper, it is now clearly mentioned that JLT refers to the Johnson\\u2013Lindenstrauss Transform, and it is defined in line 1307-1308.\\n\\nWe would also like to mention that Theorem D.2 and Theorem D.3, as the intermediate results for the proof of Theorem 3.1, have been moved to Section D of the revised paper, following the suggestion of Reviewer 2raj for the improved clarity of this paper. All the other presentation issues have either been fixed in the revised paper, such as $\\\\mathcal O(\\\\sqrt{\\\\bar s \\\\log d/n})$ instead of $\\\\mathcal O(\\\\bar s \\\\log d/n)$ in the beginning of Section D.3, or will be fixed in the final version of this paper (such as the reference to Eq. (51) instead of (52) in line 1253).\"}", "{\"title\": \"Clarification regarding the factually wrong statements in the \\\"Ethics Concerns\\\" Part 2\", \"comment\": \"**New Abstract (to replace the current Abstract)**\\n\\nRandomized algorithms play a crucial role in efficiently solving large-scale optimization problems. In this paper, we introduce Sketching for Regularized Optimization (SRO), a fast sketching algorithm designed for least squares problems with convex or nonconvex regularization. SRO operates by first creating a sketch of the original data matrix and then solving the sketched problem. We establish minimax optimal rates for sparse signal estimation by addressing the sketched sparse convex and nonconvex learning problems. Furthermore, we propose a novel Iterative SRO algorithm, which significantly reduces the approximation error geometrically for sketched convex regularized problems. To the best of our knowledge, this work is among the first to provide a unified theoretical framework demonstrating minimax rates for convex and nonconvex sparse learning problems via sketching. 
Experimental results validate the efficiency and effectiveness of both the SRO and Iterative SRO algorithms.\\n\\n\\n**New Introduction (to replace the corresponding parts in the current Introduction Section)**\\n\\nRandomized algorithms for efficient optimization are a critical area of research in machine learning and optimization, with wide-ranging applications in numerical linear algebra, data analysis, and scientific computing. Among these, matrix sketching and random projection techniques have gained significant attention for solving sketched problems at a much smaller scale (Vempala, 2004; Boutsidis & Drineas, 2009; Drineas et al., 2011; Mahoney, 2011; Kane & Nelson, 2014). These methods have been successfully applied to large-scale problems such as least squares regression, robust regression, low-rank approximation, singular value decomposition, and matrix factorization (Halko et al., 2011; Lu et al., 2013; Alaoui & Mahoney, 2015; Raskutti & Mahoney, 2016; Yang et al., 2015; Drineas & Mahoney, 2016; Oymak et al., 2018; Oymak & Tropp, 2017; Tropp et al., 2017). Regularized optimization problems with convex or nonconvex regularization, such as the widely used $\\ell^1$- or $\\ell^2$-norm regularized least squares commonly known as Lasso and ridge regression, play a fundamental role in machine learning and statistics. While prior research has extensively explored random projection and sketching methods for problems with standard convex regularization (Zhang et al., 2016b) or convex constraints (Pilanci & Wainwright, 2016), there has been limited focus on analyzing regularized problems with general convex or nonconvex regularization frameworks.\\n\\n\\nWe would like to emphasize that while [YangLi2021] also studies sketching for regularized optimization problems, the focus and results of this work are completely different from those in [YangLi2021], with a detailed discussion deferred to Section 5. 
In particular, due to our novel result in the approximation error bound (Theorem D.2-Theorem D.3), the proposed iterative sketching algorithm, Iterative SRO, does not need to sample a new projection matrix and compute the sketched matrix at every iteration, in strong contrast to [YangLi2021]. Moreover, the focus of this work is to establish minimax optimal rates for sparse convex and nonconvex learning problems by sketching, which has not been addressed by existing works in the literature including [YangLi2021]. While [YangLi2021] only focuses on the optimization perspective, that is, approximating the solution to the original optimization problem by the solution to the sketched problem, the focus of this work requires much more effort beyond that in [YangLi2021], which addresses optimization only: we need to show that the solution to the sketched problem can still enjoy the minimax optimal rates for estimation of the sparse parameter vector for both sparse convex and nonconvex learning problems. Such efforts and results in minimax optimal rates for sparse convex and nonconvex learning problems by sketching have not been offered by previous works including [YangLi2021], and they are provided in Section 4 of this paper. Such minimax optimal results are established in a highly non-trivial manner. For example, to the best of our knowledge, Theorem 4.1 is among the first in the literature that uses an iterative sketching algorithm to achieve the minimax optimal rate for sparse convex learning. Furthermore, Theorem 4.5 shows that sketching can also lead to the minimax optimal rate even for sparse nonconvex problems, while sketching for nonconvex problems is still considered difficult and open in the literature.\"}", "{\"summary\": \"The paper presents an approach called Sketching for Regularized Optimization (SRO), which can tackle a broad class of regularized optimization problems by employing sketching techniques.
This method is applicable to both convex and nonconvex forms of regularization. To address potential approximation errors in SRO, this paper introduces an enhanced variant, that is, Iterative SRO, which systematically reduces these errors at a geometric rate. A unified theoretical framework is also proposed to establish the minimax rates for sparse signal estimation in both convex and nonconvex settings.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This work addresses a wide range of optimization problems using sketching techniques, effectively handling both convex and nonconvex regularizations.\\n2. The introduction of SRO and its refined version demonstrates a well-thought-out approach that balances efficiency and accuracy in solving complex optimization tasks.\\n3. This work presents a unified framework that derives minimax rates for sparse signal estimation, solidifying the method's effectiveness for both convex and nonconvex settings.\", \"weaknesses\": \"1. This work appears to be a specific case of the work proposed by Yang and Li, ISIT (2021). The similarities include the theoretical framework (such as the general bound in Theorem 5.2), the algorithmic approach (the SRO and Iterative SRO), experimental design and results (the findings presented in Figure 1 are exactly the same), and even the writing style. Please clarify the differences between these two works.\\n2. Please provide more explanation regarding Definition 5.2. For convex functions, the value of theta is less than or equal to 0. So what about the impact of nonconvex functions on this value? For example, what range might it fall within? Additionally, could you provide some examples of nonconvex functions?\\n3. It seems that the assumption of full column rank in the explanation below Theorem 3.1 is not realistic for practical applications.
In other words, if the smallest singular value is relatively small, would the smoothness constant become very small or even approach zero?\\n4. Some numerical experiments on nonconvex regularization should be conducted.\\n\\nReferences\\n[1] Yang, Y., & Li, P. FROS: Fast Regularized Optimization by Sketching. In 2021 IEEE International Symposium on Information Theory (ISIT), pp. 2780-2785.\", \"questions\": \"Please see the weaknesses part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Actually, the authors have addressed most of my concerns. As such, I would like to raise my score to 6.\"}", "{\"title\": \"Thank you for your clarification, and our further response\", \"comment\": \"Dear Reviewer Y2Sn,\\n\\nWe really appreciate your timely response. We apologize for not addressing your concern about \\\"exactly the same experimental results\\\" in our previous response. It has been addressed in our updated comments, which are copied here for your convenience: \\\"We would like to respectfully point out that we obtained the same Figure 1 because we used the same data as that in [YangLi2021] in the original Figure 1, which has already been mentioned in our response to Reviewer X44j. However, Figure 1 has been updated with different data in the experiment in the revised paper.\\\"\\n\\n**To solve the issue about text overlap with [YangLi2021] in the abstract and the introduction section, we have provided an updated \\\"Abstract\\\" and \\\"Introduction\\\" which will be used to replace the existing \\\"Abstract\\\" and the existing parts of the Introduction section of the current version of this paper, and they are provided in our previous response** titled \\\"Clarification regarding the factually wrong statements in the \\\"Ethics Concerns\\\" Part 2\\\".
Please kindly let us know if you have remaining concerns about this text overlap issue, and **we would like to mention that so far [YangLi2021] has been cited sufficiently in both the Introduction and the discussion in Section 5 of the revised paper and there is now no overlap in the \"Abstract\" and the \"Introduction\" sections except for commonly used standard mathematical description in this literature**. As you already kindly mentioned in your comment, a description and introduction to such applied mathematical results are standard. **We have tried our best to avoid any text overlap and there is in fact no intention to copy any text from [YangLi2021]**.\n\nThank you again for your time!\n\nBest Regards,\n\nThe Authors\"}", "{\"metareview\": \"Dear Authors,\\n\\nThank you for your valuable contribution to ICLR and the ML community. Your submitted paper has undergone a rigorous review process, and I have carefully read and considered the feedback provided by the reviewers.\\n\\nThis paper introduces an extension of the Iterative Hessian Sketch to the regularized setting. The methods Sketching for Regularized Optimization (SRO) and its iterative version can handle convex and non-convex regularizers. A theoretical framework is also presented to establish minimax rates. Overall, the paper received mostly positive responses from the reviewers, with scores of (8,6,6,6).\\n\\nGiven this positive assessment, I am willing to recommend the acceptance of your paper for publication.\\n\\nI would like to remind you to carefully review the reviewer feedback and the resulting discussion. While most reviews were positive, the reviewers have offered valuable suggestions that can further strengthen the quality of the paper. Please take another careful look at the 'weaknesses' section of each reviewer comment.
I encourage you to use this feedback to make any necessary improvements and refinements before submitting the final version of your paper.\n\nOnce again, thank you for submitting your work to ICLR.\n\nBest,\nArea Chair", "additional_comments_on_reviewer_discussion": "Reviewers pointed out issues in presentation and writing. They also questioned the improvements with respect to the Iterative Hessian Sketch work. The rebuttal helped clarify these issues and led to uniformly positive scores."}", "{\"title\": \"Clarification that $\\alpha_0$ does not depend on the degrees-of-freedom parameter of the chi-squared variable, and the justification that the raised research integrity flag does not reflect a factually existing issue.\", \"comment\": \"Thank you for your feedback. Below are our further clarifications about $\\alpha_0$ and the raised overlap issue, and we respectfully point out that the raised research integrity flag does not reflect a factually existing issue due to factual misunderstandings.\\n\\n**(1) Clarification that $\\alpha_0$ does not depend on the degrees-of-freedom parameter of the chi-squared variable.**\\n\\nWe would like to mention that the parameter $\\alpha_0$ is chosen so that the chi-squared variable, $v \\coloneqq ||\\mathbf A \\mathbf v||^2 \\sim \\chi^2(n)$ (where $||\\mathbf A \\mathbf v||^2$ is introduced in lines 1182-1183 of the revised paper), satisfies the standard concentration inequality\\n$\\left| \\frac{v}{n} - 1 \\right| \\le \\Theta(n^{(\\alpha_0-1)/2}+n^{\\alpha_0-1})$, which holds with probability $1-\\exp(-\\Theta(n^{\\alpha_0}))$, and we need \\n$\\Theta(n^{(\\alpha_0-1)/2}+n^{\\alpha_0-1}) \\overset{n \\to \\infty}{\\to} 0$ so that $v/n = ||\\mathbf A \\mathbf v||^2/n \\overset{n \\to \\infty}{\\to} 1$ with probability approaching $1$ as $n \\to \\infty$.
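As an illustrative aside for readers following this exchange: the concentration claim above is easy to check numerically. The snippet below is our own toy sketch with arbitrary sample sizes (not code from the paper); it draws $\chi^2(n)$ variables and confirms that the relative deviation $|v/n - 1|$ shrinks on the order of $n^{-1/2}$, so the bound $\Theta(n^{(\alpha_0-1)/2})$ eventually dominates it for any fixed $\alpha_0 \in (0,1)$.

```python
import numpy as np

rng = np.random.default_rng(0)

def max_rel_dev(n, trials=2000):
    """Largest relative deviation |v/n - 1| over i.i.d. draws v ~ chi^2(n)."""
    v = rng.chisquare(df=n, size=trials)
    return np.abs(v / n - 1.0).max()

# Deviations shrink roughly like n^{-1/2}, so v/n -> 1 with high probability;
# the deviation for n = 10000 is roughly an order of magnitude below n = 100.
dev_small, dev_large = max_rel_dev(100), max_rel_dev(10_000)
print(dev_small, dev_large)
```

Since the decay is polynomial in $n$, no particular choice of $\alpha_0$ is forced by the degrees of freedom; $\alpha_0$ only tunes how fast the stated probability tends to one.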
Please note that $n$ is the degrees-of-freedom parameter of the chi-squared variable $v$, and **$\alpha_0$ can be chosen as an arbitrary constant in $(0,1)$ independent of the degrees-of-freedom parameter $n$**.\n\n\n**(2) Justification that this work is significantly different from [YangLi2021]**\n\nWe would like to emphasize that the focus of this paper is completely different from [YangLi2021], and below we provide more details about the novelty of our results and their significant differences from [YangLi2021].\n\n**First of all, we respectfully point out that the proof of the iterative algorithm in [YangLi2021] needs to resample a new projection matrix $\\mathbf P$ and compute a new sketched matrix $\\tilde {\\mathbf X} = \\mathbf P \\mathbf X$ at every iteration, and the proof of Theorem 3 of [YangLi2021], which shows the theoretical guarantee of their iterative algorithm, in fact depends on a separate projection matrix and a new sketched matrix at every iteration**. The fundamental reason for such resampling in [YangLi2021] is that every iteration of [YangLi2021]'s iterative algorithm needs to apply the approximation error bound in Theorem 1 of [YangLi2021] which is not based on oblivious $\\ell^2$-subspace embedding. In strong contrast, our Theorem D.2 exhibits the approximation error bound which is based on oblivious $\\ell^2$-subspace embedding, so that all the iterations of our Iterative SRO can use the same projection matrix and the same sketched matrix, taking advantage of the low-rankness of the data matrix. This difference constitutes significant advantages over [YangLi2021] both empirically (better efficiency of Iterative SRO in Table 1) and theoretically (Theorem D.2 with the general approximation error bound for oblivious $\\ell^2$-subspace embedding and a clear specification of the sketch size $\\tilde n$ for different subspace embeddings, which are not offered by [YangLi2021]).
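To make the single-sketch point concrete, here is a toy analogue of such an iterative scheme for ridge ($\ell^2$) regularization, where each sketched subproblem has a closed form. This is our own illustrative sketch under assumed sizes — it is not the paper's Iterative SRO — but it shows one Gaussian projection matrix being drawn once and reused at every iteration, with the iterates converging geometrically to the exact regularized solution.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m, lam = 2000, 10, 400, 1.0        # illustrative sizes and penalty

X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

# Exact ridge solution, used here only to measure the approximation error.
beta_star = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# One Gaussian sketch, drawn once and reused at every iteration.
S = rng.standard_normal((m, n)) / np.sqrt(m)
SX = S @ X
H = SX.T @ SX + lam * np.eye(d)          # sketched Hessian of the regularized loss

beta = np.zeros(d)
for _ in range(40):
    # Small d x d solve; only the linear term X^T (y - X beta) touches full data.
    # The fixed point of this iteration is exactly beta_star.
    beta = np.linalg.solve(H, SX.T @ (SX @ beta) + X.T @ (y - X @ beta))

err = np.linalg.norm(beta - beta_star) / np.linalg.norm(beta_star)
```

Each iteration contracts the error by a factor governed by how well $(\mathbf S \mathbf X)^\top \mathbf S \mathbf X$ approximates $\mathbf X^\top \mathbf X$, which is why no fresh sketch is needed once the embedding is accurate.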
\n\n**Second, we respectfully point out that the focus of this work is to establish minimax optimal rates for sparse convex and nonconvex learning problems by sketching, which has not been addressed by existing works in the literature including [YangLi2021]**. While [YangLi2021] only focuses on the optimization perspective, that is, approximating the solution to the original optimization problem by the solution to the sketched problem, the focus of this work requires much more effort beyond that in [YangLi2021], which addresses optimization only: we need to show that the solution to the sketched problem can still enjoy the minimax optimal rates for estimation of the sparse parameter vector for both sparse convex and nonconvex learning problems. Such efforts and results in minimax optimal rates for sparse convex and nonconvex learning problems by sketching have not been offered by previous works including [YangLi2021], and they are provided in Section 4 of this paper. Such minimax optimal results are established in a highly non-trivial manner. For example, to the best of our knowledge, Theorem 4.1 is among the first in the literature that uses an iterative sketching algorithm to achieve the minimax optimal rate for sparse convex learning. Furthermore, Theorem 4.5 shows that sketching can also lead to the minimax optimal rate even for sparse nonconvex problems, while sketching for nonconvex problems is still considered difficult and open in the literature.\n\nWe will further emphasize this difference from [YangLi2021] in the introduction section of this paper so that readers can more easily understand it as well as the novelty and significance of this work. **At this point, we sincerely request this reviewer to remove the research integrity flag as it does not factually reflect an existing issue.
Please also kindly note that Reviewer X44j, who originally raised the same overlap issue, has clearly indicated that the overlap issue has been solved**.\n\nThank you for your time and effort carefully reviewing this paper, and we look forward to your response.\"}", "{\"title\": \"Response to Reviewer Y2Sn Part 2\", \"comment\": \"**(2) Improved Theorem D.2 (now Theorem D.4 in the revised paper)**.\\n\\nWe have revised Theorem D.4 and provided more details in its proof in the revised paper. Now $n$ should satisfy $n \\ge (\\Theta(s\\log d))^{1/\\alpha_0}$ where $\\alpha_0$ is an arbitrary positive constant such that $\\alpha_0 \\in (0,1)$. It is noted that we need $n \\ge (\\Theta(s\\log d))^{1/\\alpha_0}$ so that $\\textup{RIP}(\\delta,s)$ in Theorem D.4 happens with probability arbitrarily close to $1$ as $n \\to \\infty$. \\n\\n**We have also discussed the impact of $\\alpha_0$ and the alternative construction of the low-rank matrix suggested in this review, and the trade-off between our construction and the suggested construction** in lines 1209-1223 of the revised paper. **In particular, we show that the low-rank matrix constructed in the suggested way in fact satisfies $\\textup{RIP}(\\delta,s)$ with the optimal size $n \\asymp \\Theta(s \\log d) $**. \\n\\nFirst, by letting the constant $\\alpha_0 \\to 1$, the lower bound for $n$ is $(\\Theta(s \\log d))^{1/\\alpha_0}$ in Theorem D.4, which can be close to the sample optimal $\\Theta(s \\log d)$. Furthermore, one can also construct the low-rank matrix $\\mathbf X = \\mathbf U \\mathbf A$ where $\\mathbf U \\in {\\mathbb R}^{n \\times m}$ is sampled from the Stiefel manifold $V_{m}({\\mathbb R}^{n})$ comprising all the $n \\times m$ matrices of orthonormal columns with $n \\ge m$, and all the elements of $\\mathbf A \\in {\\mathbb R}^{m \\times d}$ are i.i.d.
Gaussian random variables with $\mathbf A_{ij} \sim \mathcal N(0,1/m)$ for $i \in [m], j \in [d]$. Then $\textup{rank}(\mathbf X) \le m$, and it follows by Theorem 5.2 of [Baraniuk2008] that if $m \ge \Theta(s \log d)$, then $\mathbf A$ satisfies $\textup{RIP}(\delta,s)$. Since $\mathbf X^{\top} \mathbf X = \mathbf A^{\top} \mathbf A$, it follows that w.h.p. (with high probability) $\mathbf X$ also satisfies $\textup{RIP}(\delta,s)$. The latter construction can admit the optimal size $n \asymp \Theta(s \log d) $. On the other hand, $\textup{rank}(\mathbf X) \to \infty$ as $d \to \infty$ in the latter construction, while the construction in Theorem D.4 allows for arbitrarily specified rank of the constructed $\mathbf X$.\n\n**References**\n\n[Baraniuk2008] Baraniuk et al. A simple proof of the restricted isometry property for random matrices. Constructive Approximation, 2008.\"}", "{\"title\": \"The New Introduction (cont'd)\", \"comment\": \"(cont'd) In particular, due to our novel result in the approximation error bound (Theorem D.2-Theorem D.3), the proposed iterative sketching algorithm, Iterative SRO, does not need to sample a new projection matrix and compute the sketched matrix at every iteration, in strong contrast to [YangLi2021]. Moreover, the focus of this work is to establish minimax optimal rates for sparse convex and nonconvex learning problems by sketching, which has not been addressed by existing works in the literature including [YangLi2021].
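Circling back to the $\mathbf X = \mathbf U \mathbf A$ construction from the response above: it can be sanity-checked numerically. The snippet below is an illustrative sketch with arbitrary sizes (not code from the paper); it samples $\mathbf U$ with orthonormal columns via a QR factorization and verifies the key identity $\mathbf X^\top \mathbf X = \mathbf A^\top \mathbf A$, which is why $\mathbf X$ inherits the RIP of $\mathbf A$ while having rank at most $m$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, d = 200, 40, 500             # illustrative sizes with rank(X) <= m

# U: orthonormal columns, i.e. a point on the Stiefel manifold V_m(R^n),
# obtained here from the QR factorization of a Gaussian matrix.
U, _ = np.linalg.qr(rng.standard_normal((n, m)))

# A: i.i.d. N(0, 1/m) entries, a standard RIP matrix once m is of order s log d.
A = rng.standard_normal((m, d)) / np.sqrt(m)

X = U @ A                          # low-rank design matrix

# U^T U = I, hence X^T X = A^T A exactly: X satisfies RIP(delta, s) iff A does.
assert np.linalg.matrix_rank(X) <= m
assert np.allclose(X.T @ X, A.T @ A)
```

The same identity underlies the trade-off discussed above: the rank of this $\mathbf X$ necessarily grows with the sketch of $\mathbf A$, whereas the paper's construction fixes the rank in advance.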
While [YangLi2021] only focuses on the optimization perspective, that is, approximating the solution to the original optimization problem by the solution to the sketched problem, the focus of this work requires much more effort beyond that in [YangLi2021], which addresses optimization only: we need to show that the solution to the sketched problem can still enjoy the minimax optimal rates for estimation of the sparse parameter vector for both sparse convex and nonconvex learning problems. Such efforts and results in minimax optimal rates for sparse convex and nonconvex learning problems by sketching have not been offered by previous works including [YangLi2021], and they are provided in Section 4 of this paper. Such minimax optimal results are established in a highly non-trivial manner. For example, to the best of our knowledge, Theorem 4.1 is among the first in the literature that uses an iterative sketching algorithm to achieve the minimax optimal rate for sparse convex learning. Furthermore, Theorem 4.5 shows that sketching can also lead to the minimax optimal rate even for sparse nonconvex problems, while sketching for nonconvex problems is still considered difficult and open in the literature. (end of the new Introduction)\n\nFinally, we will be happy to work with the ethical reviewer and the AC to solve any remaining text overlap issues (if there are any). \n\nBest Regards,\n\nThe Authors\"}", "{\"summary\": \"This paper proposes a sketching method for convex and nonconvex regularized least squares and sharp theoretical guarantees are provided for the algorithms.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The introduction of the method is clear and easy to follow.\\n2. The motivation is clearly conveyed.\", \"weaknesses\": \"Please see the following questions.\", \"questions\": \"1. This work appears to be a straightforward extension of (Pilanci & Wainwright, 2016) with the addition of a regularization term.
Authors should clarify the unique difficulties and challenges introduced by this regularization term. It does mention some differences like sampling the projection matrix only once. However, can this be easily adjusted in (Pilanci & Wainwright, 2016)? In addition, what are the primary obstacles to applying traditional analyses of convex and nonconvex regularizations in a least squares setting without sketching? What are the difficulties in extending the analysis in (Pilanci & Wainwright, 2016)? What are the new techniques introduced in this paper to study the error bound?\\n\\n2. The notations are not very clear. In the paper, $\\tilde{\\beta}^*$ represents the critical point of the objective function (2). When studying the theoretical properties, is it the solution of the SRO algorithm or the solution of the Iterative SRO algorithm? What are the error bounds for the solution of SRO, and what are the error bounds for Iterative SRO? \\n\\n3. In Section 3 about iterative SRO, it only concerns the convex regularization for $h_\\lambda(\\beta)$. Is there any particular reason that it cannot be applied to nonconvex regularization? How about the error bound for the nonconvex regularization with iterative SRO?\\n\\n4. The organization of the theoretical guarantees and proof sketch is not very clear. For instance, Corollary 4.2 is introduced before introducing Theorem 5.2, while Corollary 4.2 needs the conditions in Theorem 5.2. \\n\\n5. In Table 1, why is the running time for SRO longer than those of the iterative algorithms? Is the first step of iterative SRO not essentially equivalent to SRO?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to the update.\", \"comment\": \"I thank the authors for the update.
It largely addresses the issues of the presentation I had with the first version of the paper - with the exception of the two below concerns.\n\nFirst, the improved Theorem D.2 is still not entirely clear to me -- the $\alpha_0$-constant now seems to be a constant that can be chosen freely. However, $\alpha_0$ only appears (without specification) when concentration of $\chi^2$-variables is applied. This makes it look like $\alpha_0$ is the degrees-of-freedom parameter of the $\chi^2$-variable? Is this the case? Then, $\alpha_0$ cannot be arbitrarily chosen. Also, in the proof, $n$ is still required to be larger than a threshold with unclear dependence on $\alpha_0$. This can and should still be made clearer in my opinion.\n\nAlso, reviewer X44j makes a good point that the paper has a significant overlap with [YangLi2021]. I am not entirely convinced that the 'only-one-sketch' aspect of the work at hand is significant enough to claim novelty - while the algorithm in [YangLi2021] specifies that the random projections should be redrawn, it does not appear that this is used in their analysis. In my opinion, there are however still enough new theoretical results and proofs in the paper at hand compared to [YangLi2021] to justify a publication. However, I cannot fairly judge how large of an issue the overlap is given the information I have, and have therefore flagged the paper for an ethical review to make sure a fair judgement is made.\n\nAll in all, the authors have made an effort to increase the readability of the paper, and addressed my concerns.
Leaving the potential issue with the overlap aside, I will therefore raise my score to a 6.\"}", "{\"title\": \"Clarification regarding the factually wrong statements in the \\\"Ethics Concerns\\\" Part 1\", \"comment\": \"We would like to provide further clarification regarding the \\\"Ethics Concerns\\\" provided in the updated review, and we respectfully point out that the \\\"Ethics Concerns\\\" contain several factually wrong statements which result in the wrong conclusion for the raised research integrity issue.\\n\\n**\\\"The ideas of the paper are also more or less identical...\\\"** This statement is factually and completely wrong. We would like to emphasize that while [YangLi2021] also studies sketching for regularized optimization problems with an iterative sketching algorithm, the focus and results of this work are completely different from those in [YangLi2021], with a detailed discussion in Section 5 of the revised paper. In particular, due to our novel results in the approximation error bounds in Theorem D.2 and Theorem D.3, the proposed iterative sketching algorithm, Iterative SRO, does not need to sample a new projection matrix and compute the sketched matrix at every iteration, in strong contrast to [YangLi2021]. Moreover, the focus of this work is to establish minimax optimal rates for sparse convex and nonconvex learning problems by sketching, which has not been addressed by existing works in the literature including [YangLi2021]. While [YangLi2021] only focuses on the optimization perspective, that is, approximating the solution to the original optimization problem by the solution to the sketched problem, the focus of this work requires much more effort beyond that in [YangLi2021], which addresses optimization only: we need to show that the solution to the sketched problem can still enjoy the minimax optimal rates for estimation of the sparse parameter vector for both sparse convex and nonconvex learning problems.
Such minimax optimal results are established in a highly non-trivial manner. For example, to the best of our knowledge, Theorem 4.1 is among the first in the literature that uses an iterative sketching algorithm to achieve the minimax optimal rate for sparse convex learning. Furthermore, Theorem 4.5 shows that sketching can also lead to the minimax optimal rate even for sparse nonconvex problems, while sketching for nonconvex problems is still considered difficult and open in the literature.\n\n**\"In particular, if the authors of this article have applied a different strategy to draw the random sketches compared to [YangLi2021] to generate Figure 1, I find it highly unlikely that they arrive at the exact same error bounds.\"** This statement is factually and completely wrong. As mentioned above and in Section 5, we rigorously prove in Theorem 3.1 that the new sampling strategy used in our Iterative SRO achieves the approximation error in Eq. (8), which is the same theoretical guarantee as in Theorem 3 of [YangLi2021]. However, our new sampling strategy is novel and efficient as explained above and in Section 5 of the revised paper. We would like to respectfully point out that we obtained the same Figure 1 because we used the same data as that in [YangLi2021] in the original Figure 1, which has already been mentioned in our response to Reviewer X44j. However, Figure 1 has been updated with different data in the experiment in the revised paper.\n\n**\"Since [YangLi2021] was not cited in the submitted version of the paper, this raises ethical concerns in my view\"**. We would respectfully point out that [YangLi2021] was not cited initially due to negligence, and we cited [YangLi2021] with a detailed discussion about the novelty of our results and their significant differences from [YangLi2021]. This detailed discussion has been acknowledged by Reviewer X44j, but unfortunately missed in this review.
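As another illustrative aside on the sketch-and-solve idea being debated — replace the quadratic term by one formed from $\tilde{\mathbf X} = \mathbf P \mathbf X$ while keeping the exact linear term $\mathbf X^\top \mathbf y$ — here is a toy sparse-recovery example. The problem sizes, penalty level, and the plain ISTA solver are our own assumptions for illustration; this is not the paper's experiment or data.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m, s = 500, 20, 200, 3

X = rng.standard_normal((n, d)) / np.sqrt(n)   # columns of roughly unit norm
beta_true = np.zeros(d)
beta_true[:s] = 1.0
y = X @ beta_true + 0.05 * rng.standard_normal(n)

P = rng.standard_normal((m, n)) / np.sqrt(m)
Xs = P @ X                                     # sketched design, m x d

def ista(G, b, lam, steps=600):
    """Proximal gradient for 0.5 * beta'G beta - b'beta + lam * ||beta||_1."""
    eta = 1.0 / np.linalg.eigvalsh(G).max()
    beta = np.zeros(G.shape[0])
    for _ in range(steps):
        z = beta - eta * (G @ beta - b)
        beta = np.sign(z) * np.maximum(np.abs(z) - eta * lam, 0.0)
    return beta

lam = 0.3
beta_full = ista(X.T @ X, X.T @ y, lam)        # original Lasso
beta_sk = ista(Xs.T @ Xs, X.T @ y, lam)        # sketched quadratic, exact linear term
```

Both solutions place their largest coefficients on the true support; the sketched one pays only an additional approximation error controlled by how well the sketch embeds the column space.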
\n\nFinally, as mentioned in this comment, applied mathematical results admit common and standard description. To this end, we provide below a revised abstract and a revised introduction for this paper, which will replace the corresponding and existing parts in the abstract and introduction of the current version of this paper and solve the overlapping text issue. **We sincerely hope this reviewer would reconsider the raised \"research integrity issue\" which was based on all the factual misunderstandings described above**.\"}", "{\"summary\": \"The article is concerned with the sketch-based solution of regularized least squares problems, i.e. problems of the form $\\min 1/2 \\Vert X\\beta-y \\Vert^2 + f(\\beta)$, where $X$ is a matrix and $f$ is a regularization term. If $X$ is of low rank, it can be well approximated via a sketch, i.e. a matrix $\\widetilde{X}=PX$, where $P$ is a random projector. The idea of sketching is to use this fact to reduce the complexity of the optimization problems.\\n\\nConcretely, the authors propose the iterative SRO method. In SRO, the quadratic term $\\langle X\\beta, X\\beta\\rangle$ of the loss function is exchanged with a term based on a sketch, i.e. $\\langle \\widetilde{X}\\beta, \\widetilde{X}\\beta\\rangle$. Since $\\widetilde{X}$ is a matrix of smaller dimension than $X$, the latter problem is less expensive to solve. In iterative SRO, this procedure is repeated to obtain a better and better estimate of the true solution $\\beta_*$.\\n\\nThe main result of the paper says that as soon as $P$ is an $\\ell_2$-subspace embedding for the random matrix $X$, the iterative SRO will converge at a linear rate towards the true solution $\\beta_*$.
The authors also prove a result about applying the method to do sparse regularization - they show that with a moderate number of iterations, the iterative SRO achieves the minimax sampling rate when solving the LASSO problem.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"3\", \"strengths\": \"This article has many positive sides. The research questions of this article are sound, as is the algorithm it proposes. Its results are relevant and the proofs are, modulo typos, correct. The experimental verification is also reasonable.\\nI would like to highlight the generality of the results (there is much freedom to choose both the regularization term and the sketching matrix $P$). I am also fond of the fact that the authors take the time to prove that there indeed are matrices $X$ that are both of low rank (making their sketching approach viable) but still have the RIP.\", \"weaknesses\": [\"A big problem with this article is its presentation. Let me address some of the problems, in order of appearance.\", \"It takes some effort to sort out the relations between the problems (2), (5) and (6) in Section 3 (they are all equivalent - but it is nontrivial to realize this due to the notation). Some ironing out of this would be good, possibly with an explicit calculation in an appendix.\", \"Many of the arguments in the beginning of the proof of Theorem 5.2 are written down in an unnecessarily complicated way: Since the vector in (25) is zero, there is no need to apply the Cauchy-Schwarz inequality to arrive at the inequality (26)\", \"In equation 35, $\\kappa$ is still present although $\\kappa$ has been set to 0 just before.\", \"There are many steps in the proof of the main result that are only sketched, in particular for $L_h$-smooth $h$ -- I think that I can reproduce the steps, but I really think that the proof of the main result of the paper deserves to have this step written out.
*This is in my opinion the most pressing issue*.\", \"The sentence \\\"We show that RIP($\\delta,s$)\\\" has apparently lost its ending.\", \"Reference to equation (45) in the proof of Theorem 4.1 seems to be spurious.\", \"The term JLT in Theorem D.4 is undefined - I suppose it has something to do with a Johnson-Lindenstrauss embedding. The meaning of $f_0$ is also unclear.\", \"Theorem D.6 has no statement.\", \"Another weakness of the paper, in my opinion, is Theorem D.2. Due to the appearance of the term $\\alpha_0$ in its current form, one needs to use $n\\geq C(s\\log(d))^{1/\\alpha_0}$ to get a sparse recovery, which, due to the unknown size of $\\alpha_0$, could be arbitrarily far away from the sample optimal $s\\log(d)$. This question should at least be discussed.\", \"However, I suspect that Theorem D.2 can be made stronger. Would it not suffice to just set $X= UA$ where $U\\in \\mathbb{R}^{n,m}$ is in the Stiefel manifold (with $m=cn$) and $A\\in \\mathbb{R}^{m,d}$ Gaussian with $m\\sim s\\log(d)$?\"], \"questions\": [\"In the beginning of section D.2, is $O(s\\log(d)/n)$ a typo (should there not be a square root there)?\", \"Can the matrix in Theorem D.2 be constructed as I have outlined above, i.e. $X=UA$ with $U$ in the Stiefel manifold (and hence isometric) and $A$ a standard RIP matrix?\", \"In the formulation of Theorem 3.2, it is stated that $n\\ll d$ and $\\log(d) \\ll n^{\\alpha_0}$ -- it is unclear what this means and how it is used later in the proof.\", \"Can the authors provide some more details in their proof of their main theorem?\"], \"flag_for_ethics_review\": \"['Yes, Research integrity issues (e.g., plagiarism, dual submission)']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\", \"details_of_ethics_concerns\": \"I think that the concerns reviewer X44j brings up need some attention.
The paper the reviewer brings up indeed shares *large* similarities with the manuscript at hand. There are large sections of the text that are more or less verbatim copies of the version of [YangLi2021]. The ideas of the paper are also more or less identical, with the exception that Yang and Li do not propose to make a new sketch in each iteration. I however also do not see where the new draws of the matrix are used in the theoretical analysis in [YangLi2021]. In particular, if the authors of this article have applied a different strategy to draw the random sketches compared to [YangLi2021] to generate Figure 1, I find it highly unlikely that they arrive at the exact same error bounds. Since [YangLi2021] was not cited in the submitted version of the paper, this raises ethical concerns in my view.\n\nIn my personal opinion, the extent of these concerns depends very much on whether this is a case of self-plagiarism or 'bona-fide'-plagiarism. Note that the former could to some extent be justified: In certain academic disciplines, particularly in this type of applied mathematics, it is customary to first write a 'conference version' of a paper, which does not contain details of proofs etc., and send it to a more prestigious venue for 'actual' publication. I very much suspect that this is what has happened here. Due to the double-blind policy of ICLR, it has been impossible to acknowledge the previous version, since that would reveal the identities of the authors. I am however not sure of this -- I have of course not tried to verify this, since this would break the double-blind review process from my perspective.\"}", "{\"title\": \"Response to Reviewer HrLh Part 1\", \"comment\": \"We appreciate the review and the suggestions in this review. The raised issues are addressed below.
In the following text, the line numbers are for the revised paper.\\n\\n**(1) Novelty of Our Results and Their Significant Difference from [Pilanci2016]**\\n\\n**We respectfully point out that the claim \\u201cThis work appears to be a straightforward extension of [Pilanci2016] with the addition of a regularization term\\u2026\\u201d in this review is a factual misunderstanding**. Our results are novel and significantly different from those in [Pilanci2016], which is detailed in line 437-459 of the revied paper and copied below for your convenience.\\nIt is remarked that [Pilanci2016] only handles convex constrained least square problems of the form $\\\\min _ {\\\\mathbf X \\\\in \\\\mathcal C} || \\\\mathbf X \\\\mathbf \\\\beta-\\\\mathbf y ||^2$ where the constraint set $\\\\mathcal C$ is a convex set, while our results cover regularized convex and nonconvex problems with minimax optimal rates. It is emphasized that the techniques in [Pilanci2016] can never be applied to the regularized problems considered in this paper. [Pilanci2016] heavily relies on certain complexity measure of the constraint set $\\\\mathcal C$, such as the Gaussian width. It shows that the complexity of such constraint set $\\\\mathcal C$ is bounded, so that sketching with such constraint set $\\\\mathcal C$ of limited complexity only incurs a relatively small approximation error. However, there is never such constraint set in the original problem (Eq. (1)) or the sketched problem (Eq. (2)), so that such complexity based analysis for sketching cannot be applied to this work. Furthermore, as mentioned in Section 1.1, Iterative SRO does not need to sample the projection matrix and compute the sketched matrix at each iteration, in contrast with IHS [Pilanci2016] where a separate projection matrix is sampled and the sketched matrix is computed at each iteration. 
As evidenced by Table 1 in Section 7.1, Iterative SRO is more efficient than its \\u201cIHS\\u201d counterpart, where sketching is performed at every iteration, while enjoying comparable approximation error.\\n\\n**(2) Improved Presentation**\\n\\n**\\u201dThe notations are not very clear. \\u2026represents the critical point of the objective function (2) ...\\\"**.\\n\\nThe meaning of \\u2026$\\\\tilde {\\\\mathbf \\\\beta}^*$ has been clearly indicated in every theoretical result in the revised paper. In particular, when using Iterative SRO for sparse convex learning in Section 4.1, it is clearly stated in Theorem 4.1 that $\\\\tilde {\\\\mathbf \\\\beta}^* = \\\\mathbf \\\\beta^{(N)}$ (line 262-263 of the revised paper or line 254-255 of the original submission). When using SRO for sparse nonconvex learning in Section 4.2, it is clearly stated in Theorem 4.2 that $\\\\tilde {\\\\mathbf \\\\beta}^*$ is the optimization result of the sketched problem (Eq. (2)) in line 368-369 of the revised paper. To make the meaning of $\\\\tilde {\\\\mathbf \\\\beta}^*$ even clearer, it is stated in line 237-238 of the revised paper that \\u201cIn Section 4.1, $\\\\tilde {\\\\mathbf \\\\beta}^*$ is obtained by Algorithm 1 through $\\\\tilde {\\\\mathbf \\\\beta}^* = \\\\mathbf \\\\beta^{(N)}$. In Section 4.2, $\\\\tilde {\\\\mathbf \\\\beta}^*$ is the optimization result of the sketched problem (2)\\u201d (Section 4.1 in line 238 should be Section 4.2).\\n\\n**(3) \\\"\\u2026iterative SRO, it only concerns the convex regularization for $h_{\\\\lambda}(\\\\mathbf \\\\beta)$. Is there any particular reason that it cannot be applied to nonconvex regularization?...\\\"**\\n\\nWe respectfully point out that it is a factual misunderstanding that Iterative SRO cannot be applied to a nonconvex regularizer. 
Theorem 3.1 (in both the original submission and the revised paper) shows that iterative SRO can handle a nonconvex regularizer $h = h_{\\\\lambda}(\\\\cdot)$ as long as the Frechet subdifferential of $h$ is $L_h$-smooth, and please refer to line 215-216 for the definition of $L_h$-smoothness. It is remarked that an $L_h$-smooth function $h$ can definitely be nonconvex. \\n\\n\\n**(4) \\\"organization\\u2026is not very clear. For instance, Corollary 4.2 is introduced before introducing Theorem 5.2, while Corollary 4.2 needs the conditions in Theorem 5.2.\\\"**\\n\\nWe have improved the organization of the theoretical results. In particular, Theorem 5.2 and Theorem 5.4 (which now become Theorem D.2 and Theorem D.3 of the revised paper), as the intermediate steps for the proof of Theorem 3.1, have been moved to Section D of the revised paper, following the suggestion of Reviewer 2raj for the improved clarity of this paper. Now all the results presented before Theorem D.2 in the revised paper, including Corollary 4.2, do not depend on the conditions of the original Theorem 5.2 (or Theorem D.2 of the revised paper).\"}", "{\"title\": \"Response to Reviewer X44j Part 1\", \"comment\": \"We appreciate the review and the suggestions in this review. The raised issues are addressed below. 
In the following text, the line numbers are for the revised paper.\\n\\n**(1) Novelty of Our Results and Their Significant Difference from [YangLi2021]**\\n\\nIn Line 437-495 of the revised paper, we explain the novelty of our Results and their significant difference from [YangLi2021], which is copied below for your convenience.\\n\\nIt is remarked that our results, including the Iterative SRO algorithm in Algorithm 1and its theoretical guarantee in Theorem 3.1, Theorem D.2-Theorem D.3, and the minimax optimal rates by sketching for sparse convex learning in Theorem 3.1 and sparse nonconvex learning in Theorem 4.5, are all novel and significantly different from [YangLi2021] in the following two aspects, although [YangLi2021] also presents an iterative sketching algorithm for regularized optimization problems. First, Iterative SRO does not need to sample a projection matrix $\\\\mathbf P \\\\in {\\\\mathbb R}^{\\\\tilde n \\\\times n}$ and compute the sketched matrix $\\\\tilde {\\\\mathbf X} = \\\\mathbf P \\\\mathbf X$ at every iteration, while the iterative sketching algorithm in [YangLi2021] samples a different projection matrix and computes the sketched matrix at every iteration which incurs considerable computational cost for large-scale problem with large data size $n$. Such advantage of Iterative SRO over [YangLi2021] is attributed to the novel theoretical results in Theorem D.2-Theorem D.3 and Theorem 3.1. In contrast with Theorem 1 in [YangLi2021], the approximation error bound Theorem D.2 is derived for sketching low-rank data matrix by oblivious $\\\\ell^2$-subspace embedding with the sketched size $\\\\tilde n$ clearly specified. As a result, Theorem D.3 presents the approximation error bounds for convex and certain nonconvex regularization by sketching with oblivious $\\\\ell^2$-subspace embedding. Based on such results, Theorem 3.1 shows that a single projection matrix suffices for the iterative sketching process. 
Second, minimax optimal rates for convex and nonconvex sparse learning problems by sketching are established by our results, while there are no such minimax optimal rates by a sketching algorithm in [YangLi2021]. Theorem 4.1, to the best of our knowledge, is among the first in the literature which uses an iteratively sketching algorithm to achieve the minimax optimal rate for sparse convex learning. Furthermore, Theorem 4.5 shows that sketching can also lead to the minimax optimal rate even for sparse nonconvex problems, while sketching for nonconvex problems is still considered difficult and open in the literature.\\n\\nWe would also like to mention that Theorem D.2 and Theorem D.3, as the intermediate steps for the proof of Theorem 3.1, have been moved to Section D of the revised paper, following the suggestion of Reviewer 2raj for the improved clarity of this paper. The presentation style of Theorem D.2 and Theorem D.3 has also been revised. Figure 1 in the original paper coincided with that of [YangLi2021] because we used the same data for the experiment with Generalized Lasso as [YangLi2021]. In the revised paper, new experimental results are reported with different generated data and different $\\\\gamma$ (recall that $\\\\gamma$ decides the sketch size $\\\\tilde n$) so that both Figure 1 and Table 1 have been updated in the revised paper.\\n\\n**(2) More Explanation regarding Definition 5.2**\\n\\nIn Remark 5.1 of the revised paper, we provide more explanation regarding the degree of nonconvexity defined in Definition 5.2. In particular, the impact of a nonconvex function $h$ on the degree of nonconvexity is explained as follows. The degree of nonconvexity of a second-order differentiable and nonconvex function $h$ satisfies $\\\\theta_{h}(\\\\mathbf t,\\\\kappa) \\\\le 0$ if $h$ is \\u201cmore PSD\\u201d than $-\\\\kappa || \\\\cdot ||^2$, that is, the smallest eigenvalue of its Hessian is not less than $-\\\\kappa$. 
In other words, when the smallest eigenvalue of the Hessian of the nonconvex function $h$ is not less than $-\\\\kappa$, then its degree of nonconvexity with $\\\\kappa$, $\\\\theta_{h}(\\\\mathbf t,\\\\kappa)$, is always not greater than $0$. \\n\\nTwo commonly used nonconvex functions for sparse learning, the smoothly clipped absolute deviation (SCAD) and the minimax concave penalty (MCP), are introduced in Section A of both the original paper and the revised paper. These two nonconvex functions satisfy Assumption 2, so there exists a positive concavity parameter $\\\\zeta_{-} > 0$ such that the smallest eigenvalue of their Hessians is not less than $-\\\\zeta_{-}$. As a result, the degree of nonconvexity of both SCAD and MCP satisfies $\\\\theta_{h}(\\\\mathbf t, \\\\zeta_{-}) \\\\le 0$.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}" ] }
7lUdo8Vuqa
Generalization through variance: how noise shapes inductive biases in diffusion models
[ "John Vastola" ]
How diffusion models generalize beyond their training set is not known, and is somewhat mysterious given two facts: the optimum of the denoising score matching (DSM) objective usually used to train diffusion models is the score function of the training distribution; and the networks usually used to learn the score function are expressive enough to learn this score to high accuracy. We claim that a certain feature of the DSM objective—the fact that its target is not the training distribution's score, but a noisy quantity only equal to it in expectation—strongly impacts whether and to what extent diffusion models generalize. In this paper, we develop a mathematical theory that partly explains this 'generalization through variance' phenomenon. Our theoretical analysis exploits a physics-inspired path integral approach to compute the distributions typically learned by a few paradigmatic under- and overparameterized diffusion models. We find that the distributions diffusion models effectively learn to sample from resemble their training distributions, but with `gaps' filled in, and that this inductive bias is due to the covariance structure of the noisy target used during training. We also characterize how this inductive bias interacts with feature-related inductive biases.
[ "diffusion models", "generalization", "inductive biases", "theory", "infinite-width neural networks", "generative models", "path integral" ]
Accept (Poster)
https://openreview.net/pdf?id=7lUdo8Vuqa
https://openreview.net/forum?id=7lUdo8Vuqa
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zvMXLaeCz3", "XJcWpCgQYo", "WDTvDdtz6p", "VDtgohBQQE", "OwoWHY2296", "OQQ2uiRYHr", "MKF7tOsMtK", "HhIZuqQicz", "HfjxwCfuZM", "CVy3gpgcgc", "BdYGsUWH9Z", "3arg0fsdVD", "2ryZenNzMb" ], "note_type": [ "official_review", "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment" ], "note_created": [ 1730968586359, 1730698330390, 1732758757635, 1732758389390, 1732993308069, 1734892237130, 1730721387665, 1733209618595, 1732757783549, 1732755886913, 1732754451850, 1737524178686, 1732759294732 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12303/Reviewer_DoAH" ], [ "ICLR.cc/2025/Conference/Submission12303/Reviewer_AyYq" ], [ "ICLR.cc/2025/Conference/Submission12303/Authors" ], [ "ICLR.cc/2025/Conference/Submission12303/Authors" ], [ "ICLR.cc/2025/Conference/Submission12303/Reviewer_hd21" ], [ "ICLR.cc/2025/Conference/Submission12303/Area_Chair_m2Ec" ], [ "ICLR.cc/2025/Conference/Submission12303/Reviewer_hd21" ], [ "ICLR.cc/2025/Conference/Submission12303/Reviewer_DoAH" ], [ "ICLR.cc/2025/Conference/Submission12303/Authors" ], [ "ICLR.cc/2025/Conference/Submission12303/Authors" ], [ "ICLR.cc/2025/Conference/Submission12303/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12303/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The main idea of the paper is that diffusion models generalize since the training step of estimating the score uses the \\u201cproxy score\\u201d as its target, which is a noisy version of the true score, or more accurately, a pointwise version of the score, conditioned also on the training data point. 
The introduction of this randomness into the nonlinear estimation process, and then into the diffusion process, is claimed to lead to randomness in the generation process, and in turn, to generalization.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"Understanding the empirical success of diffusion models in creating new samples that are similar to the training set is a timely and central question. The analysis made in the paper is dedicated to the generative question, and does not impose generalization measures from other problems (e.g., the MSE of the score function). The intricate details of the diffusion models considered are clearly discussed, as is how they affect the generalization process. The paper is well written and conveys its ideas in an interesting way.\\n\\nAn analytic stochastic path integral technique is used to derive closed-form expressions for the sampling distribution of new samples, with a focus on the covariance structure, which highlights the role of the variance of the score proxy in the generation of new samples. It is also a direct performance measure compared to the MSE of the score. This technique is exemplified on two analyzable architectures.\", \"weaknesses\": \"1) The paper assures that the generated samples are different from the training samples due to the noise in the score proxy, but this does not seem to explain why the generated samples are meaningful in some sense (e.g., have similar features to the training samples). If one assumes a ground truth distribution then this is obvious, but the authors emphasize that they refrain from that, and it is not clear what replaces this assumption (beyond the generated samples being different from the training samples).\\n\\n2) Except for the appearance of a non-zero covariance matrix in the noise of the reverse diffusion process, the expressions in Propositions 5.1 and 5.2 are somewhat implicit. 
For example, it is not obvious how easy it is to compute them, or how their form affects generalization.\", \"questions\": \"1. Line 67: What exactly is meant by \\u201cboundary regions\\u201d? One example is regions that are close to multiple training points, but how are these defined in general? As high-likelihood regions? This term is used repeatedly afterwards so it would be good to accurately define it.\\n\\n2. Line 75: How does the claim \\u201cmodels generalize better when they are somewhat underparameterized\\u201d agree with the (Karras et al., 2024) paper mentioned on line 48?\\n\\n3. Line 78: Why is interpolation considered a non-trivial generalization?\\n\\n4. Line 138: It would be better to introduce the proxy score before (4).\\n\\n5. How is the operator \\\\mathcal{D} defined in (6)?\\n\\n6. Section 4: It is explained that even using the generated samples to estimate the score leads to generalization. But what is the starting point of the process X_T (the noise point)? Isn't this point generated from the training data in a noisy manner, and doesn't this also contribute to generalization?\\n\\n7. Line 207: Once the higher order terms in (7) are neglected, it is mentioned that this implies that the estimator distribution is approximately Gaussian. However, due to the Gaussianity of the forward process, doesn't this tacitly also approximate the data distribution as Gaussian? In other words, if we assume that the data distribution is Gaussian, would the result be the approximated integral of (7)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors study generalization in diffusion models. They highlight six key factors influencing generalization: noisy objective functions, forward process dynamics, nonlinear score dependencies, model capacity, architectural features, and the structure of the training set. 
They\\nattribute the generalization of diffusion models to the fact that the training objective is in fact equal to the ground truth score function only in expectation. The key statement is that generalization occurs if and only if the V-kernel is not zero. Then they characterize the generalizability of diffusion models by analyzing the structures of V-kernel under several simplified conditions.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Expressing PF-ODE in terms of the path integral is novel and offers important insight i.e, generalization occurs if and only if the V-kernel is nonzero.\\n\\n2. The paper provides comprehensive study on three distinct cases through the V-kenrl: (i) memorization (ii) linear model (iii) NTK model. These results demonstrate V-kernel as a useful tool toward understanding generalization.\", \"weaknesses\": \"(i) The authors claim that generalization occurs if and only if V-kernel is nonzero since otherwise the reverse sampling process can only produce training examples. However, this definition seems restricted since when diffusion models generalize, they not only differ from the PF-ODE dynamics, but generate \\\"high quality\\\" samples as well. In other words, the V-kernel should have certain benign properties. This point is not adequately addressed in the current paper.\\n\\n(ii) In section 5 the authors derive the analytical form of the V-kernels under three circumstances. However, the resulting equations (9), (13) and (17) are really hard for me to interpret. Can the authors provide more explanation on the unique properties of these different V-kernels and how they shape the distributions corresponding to the reverse sampling process?\\n\\n(iii) As a researcher focuses on empirical works, I don't find the results of this paper very useful. I think the paper could benefit from a further discussion on how the results can help improving practical diffusion models. 
\\n\\n(iv) Overall, despite the weaknesses I listed above, I think the results are interesting and worth being published.\", \"questions\": \"See my questions in the weakness part.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Review response part 1\", \"comment\": \"We thank the reviewer for their time and helpful comments. Before we address them, we would like to note that there is something of a qualitative distinction between two questions regarding how diffusion models generalize:\\n\\n1. Given a finite number of samples M from some data distribution, what do diffusion models learn? (Relatedly: do models learn to simply regurgitate one of the $M$ samples? Do they learn something simple, like a kernel density estimate, or not?)\\n2. In the limit as $M \\\\to \\\\infty$, do diffusion models learn the ground truth distribution? (Relatedly: if yes, how does convergence depend on $M$?)\\n\\nThese questions are both interesting and worthy of study. In our work, we focus almost exclusively on the first question. Our comment \\u00a0\\u201cAt present, there is arguably no theory that describes\\u2026\\u201d is intended to refer to the fact that, while a large number of papers address question 2 (including the Li et al. and Fu et al. papers you provide as examples), to our knowledge there is a theoretical gap regarding question 1. \\n\\nThe distinction between the two may be somewhat confusing, as there are two asymptotic limits one can consider. One is the limit in which the model has access to an infinite number of samples from the data distribution (e.g., an infinite number of pictures of real dogs). The other is the limit in which, during training, the model has access to an infinite number $P$ of *training examples*, which consist of noise-corrupted versions of data distribution samples. 
We take the $P \\\\to \\\\infty$ limit but *not* the $M \\\\to \\\\infty$ limit, whereas most authors take both to infinity. This drastically changes the nature of the corresponding mathematical problem, and we think that this emphasis contributes to the novelty of our work.\\n\\nSee below for a point-by-point response.\\n\\n**Weaknesses**\\n\\n> 1. The paper's main weakness is the lack of empirical validation of the theoretical claims. While the theoretical intuition is clear, the paper would be greatly strengthened by including simulations or experiments that demonstrate the practical impact of the \\\"generalization through variance\\\" phenomenon.\\n\\nWe agree that this is a major weakness, and are working on a **new main text section** (\\\"Generalization through variance: consequences and examples\\\") before the Discussion to remedy this. The new section considers the impact of generalization through variance (GTV) in the context of a variety of 1D and 2D examples. While these examples are admittedly toy given that real diffusion models typically involve high-dimensional data, these simple examples are easy to visualize, they make the effects of GTV easy to see and understand, and they allow many quantities of interest to be straightforwardly computed. (They are also simple enough that we can consider a large number of models, since training is not an issue.)\\n\\nWe show that generalization, as we claim, generally happens in regions where proxy score covariance is high, which corresponds to regions between multiple training examples. The details of this are modulated somewhat by the choice of feature maps in the linear case: for example, we show that rotating the data distribution and/or feature maps can yield a slightly different kind of generalization.\"}", "{\"title\": \"Review response\", \"comment\": \"We thank the reviewer for their time and thoughtful comments. 
See below for a point-by-point response.\\n\\n> (i) The authors claim that generalization occurs if and only if V-kernel is nonzero since otherwise the reverse sampling process can only produce training examples. However, this definition seems restricted since when diffusion models generalize, they not only differ from the PF-ODE dynamics, but generate \\\"high quality\\\" samples as well. In other words, the V-kernel should have certain benign properties. This point is not adequately addressed in the current paper.\\n\\nThis is a great point, and we strongly agree that it is important to address. We will address this point in two ways. First, we will add a **new section** to the main text (\\\"Generalization through variance: consequences and examples\\\") that explicitly discusses various \\\"benign\\\" properties of the V-kernel that support reasonable generalization. These properties include the fact that it tends not to add noise far from training data, that it tends not to increase the effective dimensionality of the training distribution (e.g., if training data lie in a plane, to good approximation so will the generalized distribution), and that training data is more likely to be sampled than novel states. The new section will also contain various numerical experiments that validate our results in simple 1D and 2D settings. \\n\\nSecond, we will add a new appendix (\\\"Benign properties of generalization through variance\\\") that contains additional discussion of this point, including more math.\\n\\n> (ii) In section 5 the authors derive the analytical form of the V-kernels under three circumstances. However, the resulting equations (9), (13) and (17) are really hard for me to interpret. 
Can the authors provide more explanation on the unique properties of these different V-kernels and how they shape the distributions corresponding to the reverse sampling process?\\n\\nWe agree that the results of our original draft are fairly formal, and that it would be extremely helpful to illustrate them using a series of concrete examples. The new main text section we are writing shows precisely what the V-kernel looks like for a variety of simple models trained on 1D and 2D point clouds, and in particular shows how generalization through variance is affected by (i) training set structure (specifically, gaps between training data and duplications of training data), (ii) anisotropy in the forward process, and (iii) the feature set used (in the case of linear models). Throughout this section, we will provide additional commentary linking the experimental results to our math results. \\n\\n> (iii) As a researcher focuses on empirical works, I don't find the results of this paper very useful. I think the paper could benefit from a further discussion on how the results can help improving practical diffusion models.\\n\\nAlthough our focus is more 'scientific' (i.e., understanding an observed phenomenon) than 'practical' (i.e., improving model performance), we plan to add comments related to improving practical models to the Discussion, and hope that the new experiments we add to the main text are also helpful. One finding of the experiments is that feature-related inductive biases appear to matter a lot in practice, with the choice of feature set, and how it interacts with the proxy score covariance, sometimes strongly affecting the form of generalization through variance.\"}", "{\"title\": \"Response\", \"comment\": \"Dear authors,\\n\\nThank you for your response. I\\u2019ve updated my score accordingly.\"}", "{\"metareview\": \"This paper studies how diffusion models generalize when they are trained using denoising score matching. 
The analysis in the paper is intriguing and focuses on how targeting the score with a noisy version (the conditional score) via regression, rather than the score directly, impacts generalization. They then develop theory to understand this kind of generalization through variance. The paper is clearly written. The reviewers raised weaknesses around empirical validation and some of the assumptions, but the concerns were minor enough that all the reviewers accepted this paper.\", \"additional_comments_on_reviewer_discussion\": \"All the reviewers were positive. The authors' replies changed the opinion of several reviewers as well.\"}", "{\"summary\": \"This paper investigates the generalization capabilities of diffusion models through the perspective of \\\"generalization through variance,\\\" a novel concept where the inherent noise in the denoising score matching (DSM) objective used in training significantly influences model generalization. The authors develop a mathematical theory using a physics-inspired path integral approach, which reveals how the covariance structure of the noisy training target impacts inductive biases. The findings suggest that diffusion models are not merely reproducing training data but are capable of filling in 'gaps' that enhance their generative abilities.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The paper's originality lies in its novel theoretical framework, which is a significant leap in understanding the mechanics behind diffusion models' ability to generalize. The quality of the theoretical analysis is high, and the writing is clear. The significance of the work is evident as it gives a new understanding of the generalization of diffusion models.\", \"weaknesses\": \"1. The paper's main weakness is the lack of empirical validation of the theoretical claims. 
While the theoretical intuition is clear, the paper would be greatly strengthened by including simulations or experiments that demonstrate the practical impact of the \\\"generalization through variance\\\" phenomenon.\\n2. There is a lack of theory directly bridging the gap between variance and generalization error.\", \"questions\": \"**1.** On the bottom of page 1 \\u201cAt present, there is arguably no theory that describes\\u2026\\u201d It seems there exist a number of papers analyzing the generalization ability of diffusion models, including their generalization time and its relation to data structure, see [1], [2].\\n\\n[1] Li, P., Li, Z., Zhang, H., & Bian, J. (2023). On the generalization properties of diffusion models. Advances in Neural Information Processing Systems, 36, 2097-2127.\\n\\n[2] Fu, S., Zhang, S., Wang, Y., Tian, X., & Tao, D. (2024). Towards Theoretical Understandings of Self-Consuming Generative Models. arXiv preprint arXiv:2402.11778.\\n\\n**2.** It would be clearer to add more description to Figure 1, including the scale of the variance heat map, etc.\\n\\n**3**. How does the \\\"generalization through variance\\\" compare with other known mechanisms of generalization in diffusion models, such as those related to model capacity and training set structure? In particular, is there a specific formula relating the V-kernel to some kind of generalization error (or a bound on it)? If so, the generalization could be more quantitatively measured based on the V-kernel. \\n\\n**4**. Could the authors provide empirical evidence or simulations that specifically illustrate the effects of the covariance structure on generalization as hypothesized in the paper?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Updated score\", \"comment\": \"Thank you for the detailed answers. 
I have further increased the score, and am keeping my positive judgment.\"}", "{\"title\": \"Review response part 2\", \"comment\": \"> 2. Line 75: How does the claim \\u201cmodels generalize better when they are somewhat underparameterized\\u201d agree with the (Karras et al, 2024) paper mentioned on line 48 ?\\n\\nWe should probably be a bit more careful here. What we really mean is: in the top-performing models of Karras et al. (2024), and also those of Karras et al. (2022), the number of samples used during training is somewhat greater than the number of model parameters. For example, the best class-conditional ImageNet-512 model has ~ 200 million parameters and is trained on ~ 2000 million samples. In Karras et al. (2024), Fig. 12 shows that overfitting happens without dropout beyond a certain model size, but the authors do not conclusively show what happens when the number of parameters is comparable to or larger than the number of training samples.\\n\\nWe will change our wording slightly to make what we're claiming clearer and more accurate.\\n\\n> 3. Line 78: Why interpolation considered a non-trivial generalization ?\\n\\nWe probably could have been more precise here. In this setting, we consider \\\"trivial\\\" generalization to coincide with $p(x, t)$ at some small $t$ (like the reverse process cutoff $\\\\epsilon$), i.e., to be a Gaussian kernel density estimate. It is trivial firstly because there is no need to train a diffusion model to sample from it, and secondly because its generalization performance (e.g., in terms of FID score) is empirically underwhelming. \\n\\nInterpolation in the sense of adding probability mass between data points, but not elsewhere, is not what such kernel density estimators do, and hence we consider it nontrivial. We will modify the text to be more clear on this point, and apologize for the confusion.\\n\\n> 4. 
Line 138: It would be better to introduce the proxy score before (4).\\n\\nGood catch, we have adjusted how it is introduced to fix this.\\n\\n> 5. How is the operator \\\\mathcal{D} is defined on (6) ?\\n\\nThis is notation for the path integral measure, which itself is shorthand for the measure of a large number of integrals. The bracket notation is unconventional, and probably confusing, so we have modified it to look more measure-like and clarified this in the text.\\n \\n> 6. Section 4: It is explained that even using the generated samples to estimate the score leads generalization. But what is the starting point of the process X_T (the noise point). Isn't this point generated from the training data in a noisy manner, and this also contributes to generalization ?\\n\\nHaving a noisy starting point $x_T$ (which here and elsewhere is sampled from a Gaussian with variance $\\\\sigma_T^2$) does not contribute to generalization, since integrating the PF-ODE with this initial condition and the true score ought to yield the data distribution (at least, for a small enough step size and reverse process cutoff). One needs extra noise from somewhere else to get something different than the data distribution: in Sec. 4, this is noise that comes from using the naive score estimator instead of the true score. \\n\\nWe think the main text could have been clearer on this point, and will modify it somewhat. We will also add comments about an instructive limiting case of the naive score estimator, which allows one to more explicitly see how the V-kernel influences generalization.\\n\\n\\n> 7. Line 207: Once the higher order terms in (7) are neglected, it is mentioned that this implies that the estimator distribution is approximately Gaussian. However, due to the Gaussianity of the forward process, doesn't this tacitly also approximate the data distribution as Gaussian ? 
In other words, if we assume that the data distribution is Gaussian, would the result be the approximated integral of (7) ?\\n\\nNo, and this is an important and subtle point. If the estimator $\\hat{s}(x, t)$ is approximately Gaussian (in the sense that the value of $\\hat{s}$ at some specific inputs $x$ and $t$ varies in a Gaussian fashion across training runs), we obtain Eq. 7. But this does *not* imply the data distribution is approximated as Gaussian. The data distribution is described by an effective SDE. Conditional on an initial condition $x_T$, this SDE does not necessarily yield a Gaussian distribution for the final point $x_0$, even though each small-time transition probability is Gaussian, because the noise term is generically state-dependent and the drift term is generically not linear.\\n\\nOne would not obtain a Gaussian approximation of the data distribution unless two things are true: the score function is linear in $x$, and the V-kernel is independent of $x$. Neither is true in general. More broadly, one should keep in mind that the family of distributions describable as the solution to some set of SDEs is extremely rich.\\n\\nWe have added comments clarifying this point in both the main text and the relevant appendices, since it is crucial for understanding what kind of approximation we're making. Experiments in our new main text section will show that this assumption is fairly reasonable.\"}", "{\"title\": \"Review response part 1\", \"comment\": \"We thank the reviewer for their time and helpful comments. See below for a point-by-point response.\\n\\n**Weaknesses**\\n> 1. The paper assures that the generated samples are different from the training samples due to the noise in the score proxy, but this does not seem to explain why the generated samples are meaningful in some sense (e.g., have similar features to the training samples).
If one assumes a ground truth distribution then this is obvious, but the authors emphasize that they refrain from that, but it is not clear what replaces this assumption (beyond the generated samples being different from the training samples).\\n\\nThis is a good point: how do we know if a given generalization of training data is 'reasonable' without a ground truth? In the revised version of the paper, we address this issue in three ways.\\n\\nFirst, we describe various generic properties of the V-kernel that make it in some sense 'benign' (to borrow a phrase used by reviewer AyYq). These properties include the fact that it tends not to add noise far from training data, that it tends not to increase the effective dimensionality of the training distribution (e.g., if training data lie in a plane, to good approximation so will the generalized distribution), and that training data is more likely to be sampled than novel states. These properties are described both at the beginning of a **new main text section** (\\\"Generalization through variance: consequences and examples\\\") and a **new appendix** (\\\"Benign properties of generalization through variance\\\"). \\n\\nSecond, in the aforementioned new main text section, we depict a number of examples of generalization through variance (GTV) in 1D and 2D settings. Although these settings are admittedly toy, they are useful because they are easy to visualize, the results are easy to interpret, and it is straightforward to numerically compute all quantities of interest. These examples provide empirical evidence that the generalization achieved is in some sense reasonable. We underline the point that there are multiple ways to reasonably generalize a point cloud, though, by showing different types of generalization of the same 2D point cloud depending on the model and hyperparameters used.
\\n\\nThird, we include a **new appendix** section (and limited discussion in the main text) on generalization error, as measured in terms of the KL-divergence between the ground truth distribution and the learned distribution, in settings where there is a known ground truth. It is hard to analyze except in special cases, but an interesting empirical insight is that error tends to fall faster as a function of the number of ground truth samples for diffusion models than comparable Gaussian-based kernel density estimates.\\n\\n> 2. Except for the appearance of a non zero covariance matrix in noise of the reverse diffusion process, the expressions in Propositions 5.1 and 5.2 are somewhat implicit. For example, it is not obvious how easy it is to compute them, or how its form affects generalization.\\n\\nThis is a good point. We add additional commentary on computing them in practice as comments in the main text and relevant appendices. The V-kernel can be computed directly, since one only needs access to (i) the covariance of the proxy score (which we compute in Appendix B), whose computation involves computing a certain covariance; and (ii) the feature maps $\\\\phi(\\\\mathbf{x}, t)$ in the linear case. In the new main text section, we directly compute it in a number of cases. More generally, since the V-kernel has a form that depends on certain expectations, it is expected that it is also reasonably straightforward to compute in higher dimensions. \\n\\n\\n**Questions**\\n\\n> 1. Line 67: What is exactly meant by \\u201cboundary regions\\u201d ? One example is regions that are close to multiple training points, but how these are defined in general ? As high likelihood regions ? 
This term is used repeatedly afterwards so it would be good to accurately define it.\\n\\nMore generally, we mean values of $\\\\mathbf{x}$ and t for which $p(\\\\mathbf{x}_0 | \\\\mathbf{x}, t)$ is highly uncertain (above some threshold, say), which (via Bayes' rule) can be converted into a statement about the curvature/Hessian of the noise-corrupted likelihood (or log-likelihood) $p(\\\\mathbf{x} | t)$. These are precisely regions between multiple training points in the discrete case, but this definition also makes sense for other types of training distributions. \\n\\nWe are adding a **new appendix** section describing what we mean by 'boundary regions' since the idea is central to our results, and since clarity about our usage of it will probably help other readers.\"}", "{\"title\": \"Global reviewer response\", \"comment\": \"We thank all of the reviewers for their comments, which we feel have substantially improved and clarified the paper. We are still working on implementing the associated changes, so we will not be able to provide a modified PDF by the deadline, but the major and minor changes are as follows.\\n\\n**Major changes.**\\n\\n**New main text section.** We are adding a section titled \\\"**Generalization through variance: consequences and examples**\\\" after the main theoretical results and before the discussion. The goal of this section is to both empirically validate our mathematical results, and to (in the context of various simple experiments) show how generalization through variance affects distributions in practice. To ease both interpretability and computation, we are conducting these experiments in simplified 1D and 2D settings. \\n\\nThe specific layout of the section is as follows. 
First, we discuss various \\\"benign\\\" properties of generalization through variance (e.g., variance is not added far from training examples, and does not change the dimensionality of the data manifold) that suggest that it produces a 'reasonable' type of generalization. This is majorly due to a suggestion by reviewer AyYq. \\n\\nNext, we conduct small experiments to show how generalization through variance relates to filling (or exaggerating) gaps between training examples, the existence of outliers and duplicates in training data, asymmetry in the forward process, and different feature sets (in the case of the linear score estimator). As previously mentioned, we consider only 1D and 2D point cloud data distributions here.\\n\\nFinally, we comment on the relationship between the form of the V-kernel and generalization error, as suggested by reviewer hd21. This turns out to be a difficult problem to address mathematically in general, so we do this in a few special cases.\\n\\n**Minor changes.**\\n\\n- **New appendix on benign properties of generalization through variance.** This appendix provides more mathematical detail regarding why one generically expects generalization through variance to be 'reasonable' given its relationship to the proxy score covariance. This is due to a suggestion by reviewer AyYq.\\n- **New appendix on boundary regions, and how they relate to the proxy score covariance.** Because we use the concept of 'boundary regions' throughout the paper, we think it would be helpful to clarify what we mean. We discuss what we mean mathematically and relate this notion to concepts like \\\"score blindness\\\" which have been previously discussed in the diffusion model literature. 
This is due to a suggestion by reviewer DoAH.\\n- **New appendix on generalization error.** This appendix computes generalization error (which we formulate in terms of the KL-divergence between the typical learned distribution $[q]$ and some ground truth distribution $p_*$) in a few special cases, and in particular shows how it depends on the form of the V-kernel. The special cases of interest essentially correspond to when the reverse process noise is very large, and when it is very small. This is due to a suggestion by reviewer hd21.\\n- **New appendix presenting main results in terms of alternative formulations.** While we chose to present our main results in terms of score-matching, one can also derive similar results for other formulations of diffusion modeling. We present our main results (the three propositions from Sec. 4-5) for two other formulations used in practice: the 'denoiser' formulation (used by, e.g., Karras et al. 2022), and the 'noise prediction' formulation (used by, e.g., the original Stable Diffusion paper). 
This is mostly just a quality of life change to make our results more accessible to other researchers.\\n- **Clarification of various technical points.** We add additional commentary regarding certain technical issues, e.g., the relationship between assuming the score estimator distribution is Gaussian and assuming the data distribution is Gaussian (reviewer DoAH), and what kind of generalization counts as 'nontrivial' in this setting (also reviewer DoAH).\\n- **Small quality-of-life improvements.** We fix various small issues pointed out by reviewers, e.g., the introduction of the proxy score (reviewer DoAH), and adding more description to the Figure 1 caption (reviewer hd21).\\n- **Additional citations.** We have cited additional related work, including the two papers suggested by reviewer hd21.\\n\\nWe thank all reviewers for their patience as we work on implementing these changes and additions.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"title\": \"Review response part 2\", \"comment\": \"> 2. There is a lack of theory directly bridging the gap between variance and generalization error.\\n\\nAs we noted above, generalization error (which is typically viewed as a function of the data distribution size $M$) is not the main concern of our paper. However, we agree that the paper could be improved by some work in this direction. We have thought hard about this point, and are writing a **new appendix** section to address it.\\n\\nTo us, the interesting generalization-error-related question is: how does the form of the V-kernel affect it? Since our main mathematical results merely specify the form of the effective reverse process, but not its solution (and hence, the precise form of $[q]$), this is far from obvious. 
Moreover, it is highly mathematically nontrivial to get results in this direction, since it is not possible to analytically solve most SDEs of interest (especially when the noise term is state-dependent, as here).\\n\\nWe try to address this issue in two ways. First, we consider two special limits of the effective SDE: one in which most of the noise is added at the very end (the 'kernel density estimate' limit), and one in which the noise is small enough that the path integral is dominated by a single path (the 'semiclassical' limit, in physics terminology). In both cases, one can derive expressions for generalization error, although in the latter case it is still somewhat implicit. \\n\\nSecond, we consider a tractable special case: we assume the ground truth is known to be Gaussian, and consider the NTK-associated V-kernel. In this case, $[q]$ and the generalization error can be computed exactly, and it can be shown that having a nontrivial V-kernel improves the speed of convergence somewhat.\\n\\n**Questions**\\n\\n> **1.**\\u00a0On the bottom of page 1 \\u201cAt present, there is arguably no theory that describes\\u2026\\u201d It seems there exists a bunch of papers analyzing the generalization ability of diffusion models including their generalization time and its relation to data structure, see [1], [2].\\n\\nSee our above comments: to our knowledge, there is a theoretical gap regarding generalization when the number $M$ of data distribution samples does not go to infinity. We consider a variety of examples with $M < 10$, which are almost certainly not addressed by asymptotic results.
However, we agree that the cited papers (and other related work) are relevant, and that it is worth commenting more explicitly on how our results relate to theirs, so we will do this in a revised version.\\n\\n> **2.**\\u00a0It would be clearer to add more description to Figure 1, including the scale of the variance heat map, etc.\\n\\nGood point, we will make a few changes to improve clarity.\\n\\n> **3**. How does the \\\"generalization through variance\\\" compare with other known mechanisms of generalization in diffusion models, such as those related to model capacity and training set structure? In particular, is there a specific formula between V-kernel with any kind of generalization error (or its bound)? If so, the generalization could be more quantitively measured based on V-kernel.\\n\\nSee our above response to weakness 2: we can come up with a formula that relates the two in a certain special case (the 'semiclassical' approximation, which assumes relatively small noise throughout), but the formula is still somewhat implicit.\\n\\nConceptually, we do not view generalization through variance as separate from generalization related to model capacity and training set structure, but entangled together with them. One could only disentangle them in principle (we claim) by using a variant of the objective that does not involve randomness. Our NTK result (Prop. 5.2) explicitly includes a prefactor that relates to model capacity, and the training set structure strongly influences the form of the proxy score covariance, which is a key factor in all of our results.\\n\\n> **4**. Could the authors provide empirical evidence or simulations that specifically illustrate the effects of the covariance structure on generalization as hypothesized in the paper?\\n\\nWe agree that this is crucial, and are currently working on various experiments. See our above response to weakness 1.\"}" ] }
7kRFnSFN89
HYBRID MODEL COLLABORATION FOR SIGN LANGUAGE TRANSLATION WITH VQ-VAE AND RAG ENHANCED LLMS
[ "Jian Ma", "Wenguan Wang", "Yi Yang", "Weili Guan", "Feng Zheng" ]
Data shortages and the phonetic disparity between sign and spoken languages have historically limited the quality of sign language translation. On another front, endowed with substantial prior knowledge, large language models perform exceptionally well across diverse tasks, significantly diminishing the demand for domain-specific training data. Building on these foundations, this paper presents VRG-SLT, an innovative framework that translates sign language into spoken language, facilitating communication between signing and non-signing communities. In practice, VRG-SLT utilizes a hierarchical VQ-VAE to convert continuous sign sequences into discrete representations, referred to as sign codes, which are subsequently aligned with text by a fine-tuned pre-trained language model. Additionally, retrieval-augmented generation (RAG) is employed to extend and enhance the language model, producing more semantically coherent and precise spoken text. Featuring a hierarchical VQ-VAE and pre-trained large language models, VRG-SLT demonstrates state-of-the-art performance. It excels on modish benchmarks like How2Sign and PHOENIX-2014T. Moreover, the incorporation of additional factual knowledge through RAG further improves the accuracy of the generated text.
[ "Sign language translation", "VQ-VAE", "Large language model", "Hybrid collaboration" ]
https://openreview.net/pdf?id=7kRFnSFN89
https://openreview.net/forum?id=7kRFnSFN89
ICLR.cc/2025/Conference
2025
{ "note_id": [ "ykw4qyvDBK", "uV7R3Rkdks", "mwJLXd07Po", "l44ypSBAoW", "cUuVdfr4U6", "bL6oUJ4GMO", "ZIFiEKvruW", "XmpmntMhyz", "Woa24krpTf", "UTTokkDk9h", "T44KEzBFwu", "Nv6IxEKbQd", "GVwfLR4LSF", "ADOdR2QInB", "2yPrefykm4", "1hKiWAbMGG" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730679692087, 1732631042988, 1732631290456, 1732630884098, 1732631093932, 1730169681327, 1732631131077, 1737642411221, 1732633866769, 1732630846560, 1730411940499, 1730664193502, 1732630767781, 1732963123270, 1732962479121, 1732631250781 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2550/Reviewer_5XfD" ], [ "ICLR.cc/2025/Conference/Submission2550/Authors" ], [ "ICLR.cc/2025/Conference/Submission2550/Authors" ], [ "ICLR.cc/2025/Conference/Submission2550/Authors" ], [ "ICLR.cc/2025/Conference/Submission2550/Authors" ], [ "ICLR.cc/2025/Conference/Submission2550/Reviewer_RLDc" ], [ "ICLR.cc/2025/Conference/Submission2550/Authors" ], [ "ICLR.cc/2025/Conference/Submission2550/Authors" ], [ "ICLR.cc/2025/Conference/Submission2550/Authors" ], [ "ICLR.cc/2025/Conference/Submission2550/Authors" ], [ "ICLR.cc/2025/Conference/Submission2550/Reviewer_x9jx" ], [ "ICLR.cc/2025/Conference/Submission2550/Reviewer_pCAa" ], [ "ICLR.cc/2025/Conference/Submission2550/Authors" ], [ "ICLR.cc/2025/Conference/Submission2550/Reviewer_5XfD" ], [ "ICLR.cc/2025/Conference/Submission2550/Reviewer_5XfD" ], [ "ICLR.cc/2025/Conference/Submission2550/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The authors propose an approach to sign language translation (from sign language video to text in a spoken language) that combines several elements: (1) a sign tokenizer based on VQ-VAE to learn a 
vocabulary of sign segments corresponding to short multi-frame sequences, (2) a language model based on FLAN-T5 fine-tuned to accept a mixed vocabulary of sign and text tokens, and (3) retrieval augmentation to improve the quality of the results. The paper claims improved performance on two commonly used benchmarks, How2Sign (for American Sign Language to English) and PHOENIX-2014T (for German Sign Language to German). The paper also provides several ablation studies showing the effect of codebook and model sizes, sign tokenizer variations, and retrieval augmentation.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The idea of learning a tokenization for sign language video and combining it with text tokens is compelling, and the use of a VQ-VAE-based approach for this seems sensible.\", \"This work represents the first use of RAG for sign language translation, as far as I know.\"], \"weaknesses\": [\"The results do not seem to outperform SOTA results as claimed. Specifically, Rust et al. 2024 (cited in the introduction but not included in the experiments) obtains BLEU-{1,2,3,4} scores of {38.9, 24.1, 16.4, 11.6} on How2Sign in what looks like the same setting, compared to the best results in this paper of {36.38, 21.80, 13.76, 9.88}. (Please correct me if I'm wrong, of course.)\", \"The description of the approach is hard to follow, with many missing details. See questions below.\", \"The paper contains some questionable statements about sign language and sign language research. See details below.\", \"It lacks analysis of the learned tokens (e.g. do they tend to correspond with signs?). This isn't strictly necessary, and is challenging for a dataset like How2Sign without glosses, but it should be doable for PHOENIX-14T and would be very helpful.\", \"In general, the writing needs improvement. 
It is frequently repetitive/unclear/ungrammatical.\"], \"questions\": [\"Model details:\", \"I understand that the input m^{1:M} is a sequence of estimated keypoints. What method is used for keypoint estimation, and what data was it trained on?\", \"Do I understand correctly that the representation (z_u, z_h) that is quantized to produce the sign tokens is simply the keypoints passed through a 1D convolution layer? Have you tried using a more complex encoder to produce z_u and z_h?\", \"Have you compared FLAN-T5 to, say, a plain text translation model, e.g. T5 as used in Rust et al. 2024 cited in the paper, which would not require a prompt or combined sign+text vocabulary?\", \"In Section 3.3, what is the query for retrieval augmentation, and what are the \\\"chunks\\\"?\", \"What do pre-SignLLM and post-SignLLM mean?\", \"Other details/questions:\", \"What does \\\"VRG\\\" stand for?\", \"The paper often describes the task as translation from sign language to spoken language. I think it's worth stating more clearly that the output is written, although I understand that the intention is that it is the written form of a spoken language.\", \"What is meant by \\\"phonetic disparity between sign and spoken languages\\\"?\", \"What are \\\"modish benchmarks\\\"? (Typo?)\", \"\\\"Sign language ... possesses its own grammar, word formation, and lexicon.\\\" What is meant by \\\"word formation\\\"? Does this refer to morphology of sign languages, the process of word formation, or something else?\", \"\\\"These differences, especially in word order, make transcription between the two languages complex.\\\" I assume that \\\"transcription\\\" should be \\\"translation.\\\" Also, differences in word order are common between spoken/written languages as well. 
Why do word order differences make translation from a sign language any more complex than translation between spoken/written languages?\", \"I assume the authors are aware of this, but the above two sentences suggest that there is a single sign language, whereas in fact there are of course many, so it is worth re-wording to reflect that.\", \"This statement seems incorrect to me: \\\"Previous research generally divides SLT into two distinct tasks: Sign2Notation ... and Notation2Text.\\\" This doesn't seem to be true about most recent sign language translation approaches, both for the datasets/language pairs used here and for others. Some examples from the recent literature (some cited in the paper, but most not):\", \"D. Li et al., \\\"TSPNet: Hierarchical feature learning via temporal semantic pyramid for sign language translation,\\\" NeurIPS 2020.\", \"B. Shi et al., \\\"Open-Domain Sign Language Translation Learned from Online Video,\\\" EMNLP 2022.\", \"L. Tarr\\u00e9s, \\\"Sign language translation from instructional videos,\\\" CVPR 2023.\", \"K. Lin et al., \\\"Gloss-Free End-to-End Sign Language Translation,\\\" ACL 2023.\", \"P. Rust et al. \\\"Towards Privacy-Aware Sign Language Translation at Scale,\\\" arXiv:2402.09611, 2024.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We sincerely thank the reviewer for the time and constructive feedback. We address the main concerns below.\\n\\n**Q1. Further discussion on the potential and emerging properties of the GPT-based LLMs training system.**\\n\\n**A:** Thank you for raising an excellent point.
Regarding the significant potential of GPT-based LLMs in sign language translation, we consider that it primarily manifests in: (1) the ability of LLMs to effectively recognize and generate complex gesture sequences, such as those in intricate conversational scenarios; (2) an enhanced understanding of the context and cultural meanings behind gestures; and (3) the integration of visual and linguistic information, particularly auditory data, through multimodal learning. This integration facilitates precise semantic conversions and natural interactions between spoken and sign language users. Additionally, these models support personalized training to adapt to various sign language dialects and individual expression styles, offering customized services. These are also the research goals we aim to achieve in our future work. \\n\\n**Q2. Discussion on regionalization, dialects, and variants of sign language, and how to apply transfer learning to develop foundational models.**\\n\\n**A:** Thank you for the opportunity to expand on our task. In sign language translation, developing a foundational model with transfer learning may proceed along several directions: Firstly, training the base model on data-rich sign language datasets and fine-tuning it for specific dialects to reduce reliance on extensive labeled data. Secondly, leveraging world knowledge embedded in resource-rich languages can enable the model to understand sign language expressions across different cultures and regions, particularly aiding languages with limited initial corpora. \\n\\n**Q3. Suggesting the addition of human-centric reliability assessments (by sign language users) to complement objective evaluations.**\\n\\n**A:** Thank you for your constructive suggestion. Integrating a human-centered reliability assessment, including feedback from sign language users, would improve our evaluation of the translation model's reliability.
However, current practical constraints (such as difficulties in finding sufficient numbers of professional sign language users) pose significant challenges for us. Additionally, we employ the widely adopted and validated evaluation metrics BLEU and ROUGE, and will provide the visualization code for our sign action reconstruction. We will explore new evaluation methods and keep your suggestions in mind for future improvements and optimizations. Thank you once again for your valuable comments.\\n\\n**Q4. Reasons for the absence of analysis and research on sign and text-based languages.**\\n\\n**A:** We appreciate your careful review. The analysis and study of sign and text-based languages, such as understanding structural differences between sign and spoken languages, exploring translation issues, and representing sign language in text form, have been thoroughly examined in existing literature [ref1,2,3,4]. In addition, relevant analyses of the How2Sign and PHOENIX-2014T datasets have been thoroughly studied in some literature [ref5,6,7]. This paper, however, focuses on improving the performance of sign language translation, with an emphasis on performance-related metrics.\\n\\n[ref1] Sign language structure: An outline of the visual communication systems of the american deaf. Journal of deaf studies and deaf education, 2005.\\n\\n[ref2] Language deprivation and deaf mental health. Routledge, 2018.\\n\\n[ref3] American sign language: The phonological base. Sign language studies, 1989.\\n\\n[ref4] Toward a phonetic representation of signs: Sequentiality and contrast. Sign Language Studies, 2011.\\n\\n[ref5] Neural Sign Language Translation. CVPR 2018.\\n\\n[ref6] How2Sign: A Large-Scale Multimodal Dataset for Continuous American Sign Language. CVPR 2021.\\n\\n[ref7] A survey on Sign Language machine translation. 2023.\\n\\n**Q5.
The impact of speaker diversity and camera angles on codebook accuracy and final outcomes.**\\n\\n**A:** Thank you for your constructive feedback. As the How2Sign and PHOENIX-2014T datasets include multiple signers, the impact of speaker diversity on translation performance is already incorporated into our experiments. Moreover, by using human keypoints as training data, we naturally remove much identity-related information, which helps mitigate the impact of different signers on performance. Regarding the impact of camera angles, we have explored this aspect using the How2Sign dataset, which includes side-view data, and found no significant improvement in performance. This is mainly because, in the side view, crucial key points like the elbow and hand often overlap, appearing as a single point. For example, keypoints from the side view of the palm can only be observed at the pinky, while the other fingers are obscured.\"}", "{\"comment\": \"We sincerely thank the reviewer for the time and constructive feedback. We address the main concerns below.\\n\\n**Q1. Discussion on the definitive solutions for overcoming data scarcity in sign language translation tasks: data augmentation or self-supervised learning.**\\n\\n**A:** Thank you very much for your valuable comments. We strongly agree with your comment that data augmentation or self-supervised learning is the ultimate solution. Due to the small number of sign language users, especially in countries with very small populations, sign language data faces a serious challenge of insufficient data. Therefore, we use the rich prior knowledge embedded in the large model to alleviate the challenge of insufficient data, and use the understanding of world knowledge and inherent multi-domain knowledge in the large model to achieve knowledge sharing and transfer.\\n\\n**Q2. Discussion on subtle changes in meaning during text generation with the RAG model.**\\n\\n**A:** Thank you for pointing this out. 
Subtle changes in meaning can indeed impact the accuracy of information retrieval and the quality of generated outputs in RAG, leading to issues such as context misalignment, reduced coherence, knowledge gaps, and lowered retrieval precision. To mitigate these effects, we employ a **Top-3 retrieval** strategy (*i.e.*, selecting the top K most relevant results), ensuring that the system has access to more information during generation (**Lines 353-355**). This approach helps avoid over-reliance on a single document and better addresses subtle semantic variations.\"}", "{\"comment\": \"**Q12.\\tDoes ''word formation'' refer to the morphology of sign language, the process of word formation, or something else?**\\n\\n**A:** ''Word formation'' in sign language involves constructing words using a visual-spatial approach, unlike spoken languages. Words in sign language are created through gestures, facial expressions, and body movements, with key elements like handshapes, placement, movement, and palm orientation shaping their expressive vocabulary [ref13,14,15,16].\\n\\n[ref16] Interaction of morphology and syntax in American Sign Language. Routledge, 2016.\\n\\n**Q13. Discussion of the sentence \\u201cThese differences, especially in word order, make transcription between the two languages complex.\\u201d**\\n\\n**A:** First, I would like to clarify that this statement means: ''The differences between languages, especially in word order, make transcription from one language to another complex. 
In other words, due to the differences in grammatical structure, particularly word order, converting content from one language to another involves not only matching vocabulary but also considering the differences in sentence structure and word order.'' This sentence does not claim that only word order differences make translation from a sign language more complex.\\nIn that way, the grammatical differences between sign and spoken languages make translation complex because sign language conveys emotional and contextual information through gestures, facial expressions, and body language, while spoken language typically relies on detailed verbal descriptions. Thus, translating sign language requires not only grammatical adjustments but also capturing its subtle nuances [ref13,14,15,16].\\n\\n**Q14.\\tSuggestions to rephrase and acknowledge the variety of sign languages around the world.**\\n\\n**A:** We respectfully disagree. There are approximately 200 countries globally, each with its own sign language. Consistent with the standard task settings in [ref1-3, 7-12], we conduct our sign language translation within the confines of one country\\u2019s language, a standard setting in this field.\\n\\n**Q15. Discussion on dividing sign language translation into Sign2Notation and Notation2Text processes.**\\n\\n**A:** We respectfully disagree. Given the formidable challenge of sign language translation, mainstream papers [ref17,18,19,20] do not translate directly from sign actions to spoken language but require an intermediary step to achieve higher translation accuracy. For example, it may involve glosses or HamNoSys (a lexical sign language notation), proceeding from sign to gloss to text, or from gloss to HamNoSys to text. Therefore, it is appropriate to categorize sign language translation into the steps of Sign2Notation and Notation2Text (**Lines 53-72**).
Here, 'Notation' refers to any intermediary marker, such as gloss, HamNoSys, or other identifiers.\\n\\n [ref17] Neural Sign Language Translation. CVPR 2018.\\n\\n [ref18] Sign language transformers: Joint end-to-end sign language recognition and translation. CVPR 2020.\\n\\n [ref19] Spatial-Temporal Multi-Cue Network for Sign Language Recognition and Translation. TMM, 2022.\\n\\n [ref20] Ham2Pose: Animating Sign Language Notation into Pose Sequences. CVPR 2023.\"}", "{\"comment\": \"We thank the reviewer for the time and constructive feedback. We address the main questions below.\\n\\n**Q1. Skepticism about novelty and contribution.**\\n\\n**A:** We disagree. Our approach is novel and pioneering. It is unreasonable and odd to negate the novelty simply because previous work has used LLMs and VQ-VAE. To the best of our knowledge, we are the first to apply multi-level VQ-VAE and FLAN-T5 to a gloss-free sign language translation task. Furthermore, with the integration of RAG, we utilize a knowledge base to improve sign language-to-text translation, making us the first to leverage RAG for enhancing translation performance.\\n\\nThe reviewer points out that other methods also employ LLMs and emphasizes [ref1]. Our method, however, differs substantially from [ref1], mainly in the following aspects: (1) [Ref1] focuses on the discrete features and hierarchical structure of symbolic tokens, normalizing sign sentences to reflect two core linguistic properties: discreteness and hierarchy. In contrast, our sign tokenizer focuses on the hierarchical reconstruction of sign actions. (2) [Ref1] freezes the off-the-shelf LLM to retain the rich knowledge acquired during pretraining, while we fine-tune the LLM to better align with the sign tokens produced by the sign tokenizer and preserve prior knowledge. Additionally, our method outperforms [ref1] on the standardized Phoenix-2014T dataset in terms of evaluation metrics, as shown in the following table. 
\\n\\n| Methods | ROUGE\\u2191| BLEU-1\\u2191| BLEU-2\\u2191| BLEU-3\\u2191 | BLEU-4\\u2191|\\n|----------|----------|----------|----------|----------|----------|\\n| [ref1] | 44.49 | 45.21| 34.78| 28.05| 23.40 |\\n| Ours | 53.92| 55.74| 43.31| 36.59| 30.17 |\\n\\nIn summary, both the framework design and experimental results demonstrate the novelty and contribution of our approach.\\n\\n[ref1] LLMs are Good Sign Language Translators. CVPR 2024.\\n\\n**Q2. Lack of analysis and discussion concerning some relevant studies.**\\n\\n**A:** Thank you for the references you provided. We have already compared against some of the papers you mentioned [ref5]. However, most of the references [ref2,3,4] you mentioned are irrelevant to our task, especially [ref3], which addresses the inverse task of our sign language translation. \\n\\nAs for T5 [ref6], there is no significant difference in network architecture compared to FLAN-T5 [ref7]; the main difference lies in the training strategy and task fine-tuning. We also mention the reason for using FLAN-T5 in our manuscript (**Lines 191-193, 316-318**). In addition, we also compared different parameter versions of FLAN-T5 in the ablation experiments. Therefore, our results are convincing.\\n\\n[ref2] Exploring Latent Sign Language Representations with Isolated Signs, Sentences and In-the-Wild Data. ACL 2024.\\n\\n[ref3] G2P-DDM: Generating Sign Pose Sequence from Gloss Sequence with Discrete Diffusion Model. AAAI 2024.\\n\\n[ref4] Towards Privacy-Aware Sign Language Translation at Scale. ACL 2024.\\n\\n[ref5] Sign2GPT: Leveraging Large Language Models for Gloss-Free Sign Language Translation. ICLR 2024.\\n\\n[ref6] Exploring the limits of transfer learning with a unified text-to-text transformer. JMLR, 2020.\\n\\n[ref7] Scaling Instruction-Finetuned Language Models. JMLR, 2024.\\n\\n**Q3. Unnecessary related work, such as a detailed history of SLR or sign language translation prior to 2018.** \\n\\n**A:** We do not share this view.
First, we do not provide a detailed history of SLR. Instead, we outline the development of sign language translation and emphasize areas that could lead to misunderstandings between different tasks. SLR is a fundamental task in sign language understanding and was once considered synonymous with sign language translation. Historically, sign language translation typically involved two steps: SLR and gloss-to-text [ref1,2,5]. Including an overview of SLR helps contextualize the background of sign language translation and clarify the differences between task settings. Therefore, we believe it is essential to mention some SLR-related work in the related work section of our paper.\\n\\n**Q4. The paper's description of its contributions is muddy.** \\n\\n**A:** We respectfully disagree. In the Introduction section, we clearly describe the motivation behind our method and its uniqueness. Moreover, in **Q1** and **Q2**, we clearly reiterated the novelty of our approach and emphasized its differences from [ref1].\"}", "{\"summary\": \"The paper presents VRG-SLT, an innovative framework for translating sign language into spoken language, aiming to facilitate communication between signing and non-signing communities. The key contributions of the paper are:\", \"vrg_slt_framework\": \"The authors introduce a two-stage pipeline that utilizes a hierarchical VQ-VAE (Vector Quantized Variational Autoencoder) to convert continuous sign sequences into discrete representations called sign codes. These codes are then aligned with text using a fine-tuned pre-trained language model (FLAN-T5). The framework also employs a Retrieval-Augmented Generation (RAG) strategy to enhance the language model's output with relevant factual knowledge, improving the accuracy and coherence of the generated text.\", \"sign_tokenizer\": \"A novel component that captures both overall upper body and hand trajectory characteristics using a hierarchical structure.
This allows for a comprehensive understanding of sign language gestures.\", \"integration_of_rag\": \"The incorporation of RAG enables the retrieval and combination of relevant knowledge to refine the initial translations, addressing issues like \\\"hallucination\\\" (producing inaccurate or unrealistic responses) in large language models.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"Innovative Framework: although VQ-VAE has been widely adopted in the computer vision area, its application to sign language translation is a contribution of the paper. Furthermore, by integrating a hierarchical VQ-VAE with a pre-trained language model (FLAN-T5) and a Retrieval-Augmented Generation (RAG) strategy, the framework addresses the complexities of translating sign language into spoken text effectively.\", \"well_written\": \"readers could easily follow the story and understand how to reproduce the paper.\", \"weaknesses\": \"1. While the paper acknowledges the challenge of data scarcity in sign language, it primarily focuses on developing a model to mitigate this issue rather than addressing the issue from the data perspective. I admit that the model improvement will slightly benefit the end-to-end performance, but I believe the final solution for the task is data augmentation or self-supervised learning.\\n\\nLooking back at the history of machine translation, the most effective methods have been back-translation, pivot-translation (data augmentation), and self-supervised learning (GPT series models). While various methods have been proposed to address data sparsity from a modeling perspective, they remain theoretical solutions rather than definitive answers to the problem or adopted by industry.\\n\\n2. When we look at the performance, the paper misses some recent baselines (e.g. https://arxiv.org/pdf/2211.01367) and the improvement over these baselines is limited.
\\n\\nOverall, although the paper is well-written, it is unlikely to revolutionize sign language translation.\", \"questions\": \"When using RAG (Retrieval-Augmented Generation) in the framework, can you guarantee that the retrieved content maintains the original meaning of the sentences? I think the retrieved content might be semantically similar but not exactly matching the intended meaning, leading to subtle shifts in meaning during generation.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**Q5. Discussion on the role of RAG within the overall framework and its effectiveness.**\\n\\n**A:** In this paper, RAG uses the pre-trained large-scale model BERT for information retrieval to improve the quality and accuracy of sign language translation. The key idea is that, during translation, the model not only relies on its internal knowledge (gained through pre-training) but also retrieves relevant information from external knowledge bases to refine and enhance the translation (**Lines 101-102, 113-119**). The initial translation obtained from the large model is refined by RAG based on retrieved information. Note that our ablation experiments did not compare the performance with and without RAG. The results demonstrate that the RAG module positively impacts translation performance.\\n\\n**Q6. Discussion on the limited significance of experimental results from smaller datasets.**\\n\\n**A:** We disagree with this point. As you mentioned in the previous question, most studies [ref1,5] focus on the PHOENIX-2014T dataset, leading to a lack of results for How2Sign. Moreover, How2Sign is currently one of the largest datasets in sign language research and is increasingly being adopted as a standard for sign language translation evaluation. Thus, these two datasets provide ample support for assessing the effectiveness of our approach.\\n\\n**Q7. 
Reasons for method selection in Table 1 and the sources of How2Sign dataset results.** \\n\\n**A:** Table 1 presents a comparison of several classic SOTA sign language translation methods with different pipelines. The results on the How2Sign dataset are derived by implementing these methods based on their open-source code and paper details, with experimental setups and dataset handling aligned with our approach.\\n\\n**Q8. Explanation of how differences, particularly in word order, complicate transcription between the two languages as discussed in Lines 48-49.** \\n\\n**A:** We beg to differ. We would like to clarify that what we meant in this sentence is that differences in word order make translation between the two languages more complex. We did not, as you claimed, say that word order is the primary reason for the difficulty in translating between spoken and sign languages. 'Becoming complex' and 'primary reason' are not the same thing. Moreover, word order is widely acknowledged in the literature as a challenge in translating between spoken and sign languages. This has become a consensus in the field of sign language translation, particularly in the reference you mentioned. Grammatical differences between sign language and spoken language complicate translation. For example, while English typically follows a subject-verb-object structure, American Sign Language (ASL) may use subject-object-verb or other patterns. This requires not only literal translation but also adjustments to align with the target language's grammar. Moreover, sign language conveys emotional and contextual nuances through gestures, facial expressions, and body language, which often require more detailed descriptions in spoken languages. As a result, translating sign language involves both grammatical conversion and capturing its subtle nuances, adding complexity to the process.\\n\\n**Q9.
The explanation of the word ''transcription.''**\\n\\n**A:** **Transcription** is defined in the Oxford English Dictionary as ''the action or process of transcribing something'' and ''a written or printed version of something,'' such as text derived from speech or signs. In the field of sign linguistics, ''sign language transcription'' is a widely used term to describe the process of converting sign language actions or symbols into written form. It is reasonable for us to use ''transcription'' interchangeably with ''translation'' in our paper.\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"comment\": \"To all reviewers:\\n\\nWe extend our sincere gratitude to all the reviewers for their valuable efforts. Based on your feedback, we have made revisions to our paper. Below, we summarize the main changes in the paper. Before we present the revised parts, we would like to highlight the importance and novelty of our work, which has been recognized by several reviewers. We introduce VRG-SLT, a novel sign language translation model that integrates hierarchical VQ-VAE and LLMs, enhanced by RAG for improved translation. The key contributions are: (1) A collaborative hybrid model where sign movements are treated as a unique language and jointly trained with text using LLMs; (2) A sign-tokenizer that captures both upper body and hand trajectory characteristics, leveraging a hierarchical structure to handle complex and diverse movements; (3) The integration of RAG, enabling the retrieval and incorporation of relevant knowledge for more accurate and content-rich outputs. Finally, our model achieves superior performance on standard benchmarks. We believe our contributions hold significant potential for the community. We address all reviewers' concerns and comments in a point-by-point response below.\\n\\n**The major changes are as follows:**\\n1. 
In response to Reviewer pCAa, we discuss issues such as the GPT-based LLM training system, the use of transfer learning for developing foundational models in different contexts, and the impact of multiple signers and different camera angles on performance.\\n2. We further discuss with Reviewer RLDc how the RAG model addresses sparsity in sign language translation and how it handles subtle changes in meaning during text generation.\\n3. We reiterate the novelty and contributions of our method and analyze the differences from the references mentioned by Reviewers 5XfD and x9jx.\\n4. We provide explanations for certain words, phrases, and technical terms, along with definitions from the Oxford English Dictionary to aid understanding, based on the majority of comments from Reviewers 5XfD and x9jx.\\n\\nSincerely,\\n\\nAuthors.\"}", "{\"comment\": \"**Q8.\\tWhat does \\\"VRG\\\" stand for?**\\n\\n**A:** VRG is an alias used in our paper for convenience, representing sign language translation LLMs enhanced with VQ-VAE and RAG.\\n\\n**Q9.\\tDiscussion on whether the task should be described as translating sign language to written text instead of spoken language.**\\n\\n**A:** Most existing papers on sign language translation, particularly all the references you mentioned [ref1, 9, 10, 11, 12], describe it as a translation between sign language and spoken language. Therefore, we consider it unnecessary to replace spoken language with written text.\\n\\n[ref9] Open-Domain Sign Language Translation Learned from Online Video. EMNLP 2022.\\n\\n[ref10] Sign language translation from instructional videos. CVPR 2023.\\n\\n[ref11] Gloss-Free End-to-End Sign Language Translation.
ACL 2023.\\n\\n[ref12] TSPNet: Hierarchical feature learning via temporal semantic pyramid for sign language translation. NeurIPS 2020.\\n\\n**Q10.\\tThe meaning of ''phonetic disparity between sign and spoken languages''.**\\n\\n**A:** The phrase \\\"phonetic disparity between sign and spoken languages\\\" refers to the differences in how meaning is conveyed, a well-established concept in sign language research [ref13,14,15]. Spoken languages rely on vocal sounds to convey meaning, while sign languages use visual-spatial elements such as gestures, facial expressions, and body movements. As a result, the phoneme inventories of spoken and sign languages do not fully overlap, and certain phonemes may not have direct equivalents in both types of languages. This disparity arises because sign language does not rely on sound or hearing; instead, it uses visual and spatial modes of communication. Consequently, sign languages have distinct linguistic structures and phonological systems that are fundamentally different from those of spoken languages.\\n\\n[ref13] American Sign Language: The phonological base. Sign Language Studies, 1989.\\n\\n[ref14] Toward a phonetic representation of signs: Sequentiality and contrast. Sign Language Studies, 2011.\\n\\n[ref15] The phonological organization of sign languages. Language and Linguistics Compass, 2012.\\n\\n**Q11.\\tWhat are \\\"modish benchmarks\\\"? (Typo?)**\\n\\n**A:** This is not a typo.
''Modish benchmarks'' refers to the latest and most popular evaluation datasets; in this paper, it specifically denotes the How2Sign and PHOENIX-2014T datasets.\"}", "{\"summary\": \"The paper studies sign language translation (sign --> text) with VQ-VAE sign tokenization adapted into a pretrained encoder-decoder language model (Flan T5), on How2Sign and PHOENIX 2014T.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The method in the paper is presented relatively clearly.\", \"The experiments look reasonable.\"], \"weaknesses\": [\"At a high level, primarily I am skeptical of the novelty/contribution of the work, and secondarily some aspects of the paper seem unjustified.\", \"There is a lot of missing related work that undermines the contribution of the paper. For example:\", \"There are many prior works that use VQ-VAE for sign language. (e.g., https://arxiv.org/abs/2404.00925 (this one especially), https://aclanthology.org/2024.signlang-1.24/, https://arxiv.org/abs/2208.09141v3)\", \"There are many prior works that use \\\"LLMs\\\", even T5, as the pretrained base for sign language translation. (e.g., https://arxiv.org/abs/2306.15162, https://aclanthology.org/2024.acl-long.467/ (which you cited but not recognizing this), https://arxiv.org/abs/2406.06907, https://arxiv.org/abs/2405.04164) This is already a pretty dominating trend in sign language translation research in the last year or two.\", \"At the same time, I think there is a lot of unnecessary related work, like you don't need to go into details about the history of SLR or pre-Camgoz et al 2018 sign language translation.\", \"More broadly, the description of contributions in the paper is muddy. On lines 122-131, I would say that (1) and (2) are not novel contributions. I've given you pointers above to prior works that already train the sign language modality into LLMs, and use VQ-VAEs. 
It's possible there are novel elements to your work in these regards, but they are not articulated here. (3) is I suppose novel but I have more comments on that below.\", \"I don't see it articulated anywhere why we should prefer discrete tokens as input to an encoder-decoder translation model, rather than soft tokens (which don't impose an information bottleneck), as in many other sign language translation papers (and many multimodal fusion in LLM papers).\", \"I don't understand the point of RAG here. Retrieval-augmented translation typically retrieves from parallel translation datasets as a way to improve model capacity. RAG instead from knowledge bases is intended to improve factuality, but translation is not a factual task: you can translate sentences that are false. So sure, you could maybe get improvement on sentences that are factual, or when world knowledge might help you guess words by context, but this fails on nonfactual content. Also, how do you know the RAG is even being used, and it isn't just the LLM doing rewriting (or integrating its own knowledge)? It doesn't seem like you ablate LLM rewriting without any retrieved sentences, but I could be misunderstanding.\", \"I won't count it against you too much since maybe you don't have the resources to use larger datasets or can't use the datasets for license reasons, but there are much larger sign language datasets available (YouTube-ASL, BOBSL) and it is not clear how meaningful results on smaller datasets are. For example, the main downside of using VQ-VAE as an input representation is that you are imposing an information bottleneck, which may be irrelevant when training data is small enough that no results are particularly good.\", \"I am a bit confused by Table 1. You are excluding numerous works that score better on How2Sign, which I am guessing is because you are trying to compare to works that train on the same amount of sign language data. 
But it seems misleading to do this and call it \\\"state-of-the-art\\\" without explaining this, especially because SLTUNet trains on extra sign language gloss data, LLMs might train on sign language gloss data and explanations on the web, etc; this doesn't seem like a principled distinction. Also: where are these How2Sign numbers for prior works coming from? I looked at e.g. SignBERT+ and SLTUNet and neither of the works evaluates on How2Sign. Did you get these from personal correspondence with the authors? Did you reproduce all of these using their methods? The PHOENIX numbers I can see in the original papers.\", \"There are a bunch of weird claims throughout the paper that aren't justified. For example:\", \"48-49: \\\"These differences, especially in word order, make transcription between the two languages complex\\\": I think you mean \\\"translation\\\" here, not \\\"transcription\\\", but regardless I wouldn't say word order is a primary reason that translation between spoken languages and sign languages is hard. Different spoken languages have different word order and it isn't really an issue. The unique aspects of sign language grammar are related to use of space, non-manual markers, etc. The paper doesn't engage with this anyway, just the multimodal fusion.\", \"233-235: \\\"The two encoders of the sign-tokenizer encode global body movements and detailed hand features, respectively, achieving a comprehensive and precise understanding of sign motion.\\\": I see no evidence in the paper that the tokenizer encodes sign motion comprehensively or precisely. This could be proven by, for example, getting human ratings of the reconstructions from fluent sign language users.\", \"520: \\\"Limitations: VRG-SLT still struggles with the contextual and cultural nuances of sign languages.\\\" This is a massive understatement; models that get ~8 BLEU on How2Sign are extremely poor quality in absolute. 
They are nowhere close to context or cultural nuances being the main limitations.\"], \"questions\": [\"These are essentially written in Weaknesses above, but some additional questions:\", \"Could you provide examples of outputs before and after \\\"RAG\\\" LLM rewriting? Like when is a relevant sentence ever retrieved from the ECMWF weather dataset? It seems like this could only possibly help if your translation draft is something like \\\"On September 12, 2007, the weather in Cologne was [X]\\\" and your knowledge base has memorized the weather.\", \"Could you elaborate on what \\\"w/o RAG\\\", \\\" pre-SignLLM\\\", and \\\"post-SignLLM\\\" mean precisely in Table 2d?\", \"Figures 2, 3, and 4 are misleading in that they show the sign language input to the encoder model as image frames. But line 407 says that they are actually keypoints (derived from what model?).\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This is a well-written contribution to the field of automatic sign-language translation. The work builds on state-of-the-art LLM modelling and unsupervised deep learning representations. The authors adopt the latest hierarchical VQ-VAE techniques from multi-modal generative AI for video and apply them to learn discrete representations of sign codes. They adopt fine-tuning techniques for solid baseline LLMs like T5, and finally they demonstrate the benefit of applying RAG techniques to ground the model and improve the baseline LLM response.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"4\", \"strengths\": \"This paper is a well-rounded contribution, where the motivation, state-of-the-art techniques, and limitations are well presented. The work introduces the approach in a very clear, detailed manner, especially the baseline techniques like VQ-VAE and the steps by which they build the system and approach.
Figures are self-explanatory, and the authors provide a good justification and introduction of RAG and how it is applied in their use case.\\n\\nI value the rigor of the authors in presenting the evaluation with statistical significance tests and a solid demonstration of improved results over the baseline, and the attempt to share the parameters and configurations in the appendix to make the approach reproducible.\\n\\nThis is a strong paper that moves the needle in the technology applied to sign language and can potentially democratize its incorporation at scale into multiple products and services that could have a big impact on part of the population.\", \"weaknesses\": \"This is a good paper, but it would have been great if the authors had dug deeper into the potential and emerging properties of training the system based on GPT LLMs. Sign languages also have regionalizations, dialects, and variations. The study lacks that analysis and how transfer learning can be applied in order to build a sign-language foundational model.\\n\\nLast but not least, as a complement beyond objective evaluations, it would have been good if the authors had presented a solid human-in-the-loop evaluation (with sign language users).\", \"questions\": \"a) Why didn't the authors break down the analysis and study across sign and text-based languages?\\nb) Why didn't the authors conduct a human evaluation study?\\nc) What is the impact of speaker diversity, camera angles, etc., on the accuracy of the codebook and final results?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}
\nThe results you mentioned in [ref1] are obtained using the YouTube-ASL dataset [ref2] for training, not How2Sign. Under the same experimental setup, [ref1] reports BLEU-{1,2,3,4} scores of $30.2$, $16.7$, $10.5$, and $7.0$ (**Table 2 in [ref1]**). Hence, our approach demonstrates superior performance compared to [ref1].\\nAdditionally, we follow a standard task setup for sign language translation, focusing on direct sign-to-spoken language translation. By contrast, the main goal of [ref1] is to mitigate privacy risks in large-scale web-crawled datasets, as reflected in the title of [ref1]. Thus, a direct comparison between our method and [ref1] is not appropriate.\\n\\n [ref1] Towards Privacy-Aware Sign Language Translation at Scale. ACL 2024.\\n [ref2] YouTube-ASL: A Large-Scale, Open-Domain American Sign Language-English Parallel Corpus. NeurIPS 2023.\\n\\n**Q2.\\tWhat method is used for keypoint estimation, and on what dataset is it trained?**\\n\\n**A:** The keypoint estimation method used is OpenPose, a widely employed tool for human pose estimation. The keypoint data we use is provided by the publicly available How2Sign raw dataset [ref3], not extracted by us. \\n\\n[ref3] How2Sign: A large-scale multimodal dataset for continuous American Sign Language. CVPR 2021.\\n\\n**Q3.\\tLack of analysis on learned tokens (*e.g.* do they tend to correspond with signs?).**\\n\\n**A:** Due to the ambiguity in your statement, we are unable to fully understand which aspect of 'the analysis on learned tokens' you are referring to. Therefore, we are assuming that you are referring to an analysis of whether VQ-VAE can accurately reconstruct the input sign sequence. The ability of VQ-VAE to accurately replicate the input after training has been frequently validated across nearly all deep learning-related fields.
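For intuition, the nearest-codebook lookup at the heart of any VQ-VAE can be sketched as follows; this is a toy illustration with made-up latents and codebook, not our actual sign tokenizer:

```python
import numpy as np

def quantize(z, codebook):
    """Map each continuous latent in z (T, D) to its nearest codebook
    entry (K, D); returns discrete token indices and quantized latents."""
    # Squared Euclidean distance from every latent to every codebook entry
    d = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (T, K)
    idx = d.argmin(axis=1)     # one discrete "sign code" per time step
    return idx, codebook[idx]  # quantized latents are fed to the decoder

# Toy example: two latent frames snap to the closest of three codes
codebook = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
z = np.array([[0.1, -0.1], [1.9, 0.1]])
idx, z_q = quantize(z, codebook)  # idx -> [0, 2]
```

Reconstruction quality is then judged by how closely the decoder output built from the quantized latents matches the input sequence.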
Additionally, we reference [ref4] to implement sign reconstruction visualization, which provides a clear comparison of the reconstruction.\\n\\n[ref4] MotionGPT: Human Motion as a Foreign Language. NeurIPS 2023.\\n\\n**Q4. The network structure for generating sign tokens and whether more complex encoders have been tried.** \\n\\n**A:** You are right: our sign encoders are built with 1D convolutions, and we have tried using more complex encoders, such as transformers. During our exploration, more complex encoders did not yield significant performance gains but significantly increased training time and debugging difficulty. Therefore, we ultimately chose to use 1D convolutions for the sign encoders.\\n\\n**Q5.\\tHave you compared FLAN-T5 to a basic text translation model like T5, as used in [ref1], which avoids prompts and combined sign+text vocabularies?**\\n\\n**A:** We would like to clarify that [ref1] does not explicitly state that the T5 model [ref5] used avoids prompts. On the contrary, using prompts is the default setting when training T5. Additionally, prompts have been shown to be an effective training strategy in nearly all large-scale model studies, with minimal adverse effects. \\n\\nThe main reasons for not comparing FLAN-T5 [ref6] with T5 are as follows: Firstly, FLAN-T5 and T5 are language models based on the same architecture; FLAN-T5 does not alter the original T5 design. Secondly, FLAN-T5 integrates instruction tuning with extensive fine-tuning data, boosting its robustness and accuracy in multi-task and multilingual settings. We also briefly discussed the reasons for using FLAN-T5 in the manuscript (**Lines 191-193, 316-318**). For the above reasons, we conduct experimental analysis only on FLAN-T5.\\n\\n[ref5] Exploring the limits of transfer learning with a unified text-to-text transformer. JMLR, 2020.\\n[ref6] Scaling Instruction-Finetuned Language Models.
JMLR, 2024.\\n\\n**Q6. In Section 3.3, what is the query for retrieval augmentation, and what are the \\\"chunks\\\"?**\\n\\n**A:** Queries typically relate to themes or keywords from the initial translations [ref7,8]. For example, the initial output for PHOENIX-2014T is \\\"am tag vor allem im norden regen.\\\" Using RAG, we verify and refine it with the query: \\\"Check the accuracy and grammar of 'am tag vor allem im norden regen' in German.\\\" 'Chunks' are retrieved information segments used to support generation tasks [ref7,8].\\n\\n[ref7] Retrieval-augmented generation for knowledge-intensive NLP tasks. NeurIPS 2020.\\n[ref8] When not to trust language models: Investigating effectiveness of parametric and non-parametric memories. ACL 2023.\\n\\n**Q7. The meanings of ''w/o RAG'', ''pre-SignLLM'', and ''post-SignLLM''.**\\n\\n**A:** ''w/o RAG'' indicates that the RAG module is not used in our pipeline. ''Pre-SignLLM'' and ''post-SignLLM'' refer to the execution stages of RAG in our pipeline. RAG is executed before SignLLM in 'Pre-SignLLM' and after in 'Post-SignLLM' (**Lines 502-503**).\"}", "{\"comment\": \"Thank you for the responses. Regarding the various points about word order, word formation, phonetics and phonology:\\n Unfortunately the responses do not clarify things in my opinion. I still find these descriptions about the unique aspects of sign language to be misleading. While these are relatively minor points, taken together I am afraid they may add to the many misconceptions about sign language that exist in the literature. In my view, if all of these claims were simply removed from the paper, it would improve the paper's quality.\"}", "{\"comment\": \"Thank you for your responses. Some of them help to clarify things for me. I'll respond to the more important points:\\n\\nQ1. 
The results do not seem to outperform SOTA results as claimed.\", \"a\": \"Due to the ambiguity in your statement, ...\\n\\nI was asking for any analysis of how meaningful the learned tokens are. Measuring the reconstruction performance of the VQ-VAE wouldn't really get at this. As I mentioned, an interesting question is whether the tokens correspond to signs. This could be analyzed qualitatively, for example, by presenting example images corresponding to a given token.\\n\\nQ6, Q7\\n\\nI'm afraid I do not follow your answers, although I am familiar with RAG. The example you provide (\\\"Check the accuracy...\\\") seems like a query one can give to any LLM for post-editing, not necessarily a RAG-based model. Some examples in the appendix might help. For example, for each of \\\"w/o RAG\\\", \\\"pre-SignLLM\\\", and \\\"post-SignLLM\\\", you could provide the initial output, the query to the RAG system, the retrieved chunks, and the final output (for the cases where RAG is used).\"}", "{\"comment\": \"**Q10. Discussion on whether the tokenizer can accurately encode sign language gestures and the manual evaluation of reconstruction by fluent sign language users (Lines 233-235).**\\n\\n**A:** We find it difficult to agree. First, recruiting enough fluent users of American Sign Language and German Sign Language is both challenging and unrealistic. Second, VQ-VAE is a mature, unsupervised method that has been successfully validated in almost all deep learning fields, designed to replicate the original output. Finally, our implementation is primarily based on [ref8], and VQ-VAE reconstruction visualizations are also provided in [ref8]. Accordingly, we will also provide the code for reconstruction visualizations.\\n\\n[ref8] MotionGPT: Human Motion as a Foreign Language. NeurIPS 2023.\\n\\n**Q11. 
Explanation of the challenges faced by VRG-SLT regarding the contextual and cultural nuances of sign language (Line 520).** \\n\\n**A:** Sign language translation faces several challenges, including culturally specific gestures, the importance of facial expressions and body language, regional dialects, and context dependence. Gestures may vary across cultures, and facial expressions and body language further complicate translation. More challenging is that many sign language words with vastly different meanings have strikingly similar gestures, which poses a significant challenge to the model's ability to distinguish them. Additionally, the meaning of the same gesture can change depending on context, requiring models to understand multiple sign languages and their cultural nuances.\\n\\n**Q12.\\tThe example of RAG.**\\n\\n**A:** Queries typically relate to themes or keywords from the initial translations [ref7,8]. For example, the initial output for PHOENIX-2014T is \\\"am tag vor allem im norden regen.\\\" Using RAG, we verify and refine it with the query: \\\"Check the accuracy and grammar of 'am tag vor allem im norden regen' in German.\\\" \\n\\n[ref7] Retrieval-augmented generation for knowledge-intensive NLP tasks. NeurIPS 2020.\\n[ref8] When not to trust language models: Investigating effectiveness of parametric and non-parametric memories. ACL 2023.\\n\\n**Q13.\\tThe meanings of ''w/o RAG'', ''pre-SignLLM'', and ''post-SignLLM''.**\\n\\n**A:** ''w/o RAG'' indicates that the RAG module is not used in our pipeline. ''Pre-SignLLM'' and ''post-SignLLM'' refer to the execution stages of RAG in our pipeline. RAG is executed before SignLLM in 'Pre-SignLLM' and after in 'Post-SignLLM' (**Lines 502-503**). \\n\\n**Q14. The source of the derived training data (Line 407) using keypoints and image frames in the diagram.**\\n\\n**A:** Several papers in your references express this in the same way.\"}" ] }
7k4HVhUS9k
A Black Swan Hypothesis: The Role of Human Irrationality in AI Safety
[ "Hyunin Lee", "Chanwoo Park", "David Abel", "Ming Jin" ]
Black swan events are statistically rare occurrences that carry extremely high risks. The typical view assumes that black swan events originate from unpredictable, time-varying environments; however, the community lacks a comprehensive definition of black swan events. To this end, this paper argues that the standard view is incomplete and claims that high-risk, statistically rare events can also occur in unchanging environments due to human misperception of their value and likelihood, which we call spatial black swan events. We first carefully categorize black swan events, focusing on spatial black swan events, and mathematically formalize their definition. We hope these definitions can pave the way for the development of algorithms to prevent such events by rationally correcting human perception.
[ "AI Safety", "Risk", "Reinforcement Learning" ]
Accept (Poster)
https://openreview.net/pdf?id=7k4HVhUS9k
https://openreview.net/forum?id=7k4HVhUS9k
ICLR.cc/2025/Conference
2025
{ "note_id": [ "uzHIsS32C8", "th0aoNjVed", "rOPUXQyN6Z", "qVt4HtWKAY", "pROUXeeVAk", "nSyS48Fnb4", "n4Ce43Om88", "jcDab2BxR6", "fZmJueTmFi", "c3XhlEZo3Z", "X1vapI2lMH", "Wx3ggpKOwR", "WlbmtzI0Xc", "RWHOzsZBmZ", "PlfAZURhmO", "P4pZ9oKUdq", "N4FRIKCM3S", "MTJvkjasVD", "L6PTQvUWXe", "KFli3mptVn", "IUsYRtU4f2", "CxkAj8KDWM", "9lYyi33nVj", "9WRHZvxcdu", "8jlMW0MM5g", "5SRVQrxbDW", "5PwDT3dqws", "5KgGqLmj7b", "4npvrjf6t2", "4ONwR3ruxl", "3XTS0aQ58C", "2Mz3kgZYse", "0i3XkwthzO" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_review", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1731991041732, 1732349813053, 1729602459651, 1731985631034, 1730466503349, 1735014917628, 1732232651499, 1732210816577, 1732642011973, 1733098870387, 1732848113089, 1732383738698, 1732642231705, 1732642053646, 1732230186218, 1732384386207, 1733100120403, 1737523528898, 1732230108118, 1732386257083, 1732003405394, 1731034731297, 1733098626952, 1733099160338, 1732644882884, 1732230054569, 1731984719263, 1732039554558, 1731120777186, 1732386582845, 1732636502698, 1732039514748, 1732229482497 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2747/Authors" ], [ "ICLR.cc/2025/Conference/Submission2747/Reviewer_234c" ], [ "ICLR.cc/2025/Conference/Submission2747/Reviewer_234c" ], [ "ICLR.cc/2025/Conference/Submission2747/Authors" ], [ "ICLR.cc/2025/Conference/Submission2747/Reviewer_WW6X" ], [ 
"ICLR.cc/2025/Conference/Submission2747/Area_Chair_rVZy" ], [ "ICLR.cc/2025/Conference/Submission2747/Authors" ], [ "ICLR.cc/2025/Conference/Submission2747/Reviewer_234c" ], [ "ICLR.cc/2025/Conference/Submission2747/Authors" ], [ "ICLR.cc/2025/Conference/Submission2747/Authors" ], [ "ICLR.cc/2025/Conference/Submission2747/Reviewer_2mu5" ], [ "ICLR.cc/2025/Conference/Submission2747/Authors" ], [ "ICLR.cc/2025/Conference/Submission2747/Authors" ], [ "ICLR.cc/2025/Conference/Submission2747/Authors" ], [ "ICLR.cc/2025/Conference/Submission2747/Authors" ], [ "ICLR.cc/2025/Conference/Submission2747/Authors" ], [ "ICLR.cc/2025/Conference/Submission2747/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2747/Authors" ], [ "ICLR.cc/2025/Conference/Submission2747/Authors" ], [ "ICLR.cc/2025/Conference/Submission2747/Authors" ], [ "ICLR.cc/2025/Conference/Submission2747/Reviewer_DuS7" ], [ "ICLR.cc/2025/Conference/Submission2747/Authors" ], [ "ICLR.cc/2025/Conference/Submission2747/Authors" ], [ "ICLR.cc/2025/Conference/Submission2747/Reviewer_DuS7" ], [ "ICLR.cc/2025/Conference/Submission2747/Authors" ], [ "ICLR.cc/2025/Conference/Submission2747/Authors" ], [ "ICLR.cc/2025/Conference/Submission2747/Authors" ], [ "ICLR.cc/2025/Conference/Submission2747/Reviewer_2mu5" ], [ "ICLR.cc/2025/Conference/Submission2747/Authors" ], [ "ICLR.cc/2025/Conference/Submission2747/Reviewer_WW6X" ], [ "ICLR.cc/2025/Conference/Submission2747/Authors" ], [ "ICLR.cc/2025/Conference/Submission2747/Authors" ] ], "structured_content_str": [ "{\"title\": \"Thanks for the comments!\", \"comment\": \"Dear Reviewer DuS7,\\n\\nFirst, we would like to thank the reviewer for fruitful comments. 
Before addressing the reviewer's concerns, we kindly encourage the reviewer to first read the ***global comment on the main message***, followed by our responses that address specific weaknesses and questions.\\n\\nWe are happy to provide further clarification on any remaining concerns or questions from the reviewer!\\n\\n**Q1. Why does distortion occur on visitation probability rather than transition probability?**\\n\\nThank you for raising this excellent question! We carefully scrutinized which probability is appropriate to distort, and here is the reasoning behind our choice of visitation probability. ***The primary reason lies in the definition of the event unit as a state-action pair $(s, a)$***. In prospect theory, the distortion of rewards (how good or bad an outcome is perceived) and probabilities occurs at the level of events $e$, specifically on $R(e)$ and $P(e)$. Translating this perspective to the context of a Markov Decision Process (MDP), the goodness or badness of a policy is revealed when a state $s$ is encountered, and the subsequent action $a$ is taken, yielding a reward $r(s, a)$. Therefore, it is natural to define the event unit as $(s, a)$ in the MDP framework.\\n\\nConsequently, probability distortion should occur where probabilities are defined over the support of $\\\\mathcal{S} \\\\times \\\\mathcal{A}$. If we were to define distortion on transition probabilities, this would imply distorting the probabilities of the *next state* given a fixed $(s, a)$, which is defined over the support of $\\\\mathcal{S}$. Such an approach would not align with the event-level distortion described in prospect theory. \\n\\n***By distorting visitation probabilities, we ensure consistency with the literature on prospect theory and maintain alignment with the event-based perspective.*** \\n\\n\\n**Q2. Why are value distortion and probability distortion piecewise?**\\n\\nThank you for raising this important issue. 
***The piecewise nature of these distortions aligns with prospect theory, which models the irrationality of human behavior.*** While we have detailed how Definitions 1 and 2 were derived in lines 167 to 171, we will briefly summarize for clarity.\\n\\nThe value function $v$ is piecewise due to *loss aversion*. For example, losing 1M feels significantly more impactful than gaining 1M, despite their equal absolute values. This asymmetry reflects the principle that losses are perceived more strongly than equivalent gains. Similarly, the probability distortion $p$ is piecewise because humans *overestimate or underestimate low-probability events* depending on whether they involve gains or losses. For instance, people overestimate the chances of winning the lottery (a low probability gain) but underestimate the likelihood of rare negative events, such as airplane crashes or pandemics like COVID-19.\\n\\nFor further discussion, please see [Appendix C.2. Irrationality due to subjective probability].\\n\\n\\n**Q3: Example 2 - Case 3**\\n\\nIn the context of a stationary MDP, any $(s, a, t_{bs})$ that is identified as a black swan will always qualify as an S-black swan.\\n\\n\\n**Q4: Importance of Lemma 1**\\n\\nThank you for highlighting this point. **The importance of Lemma 1 lies in connecting distortion in visitation probability with real-world data collection, specifically in demonstrating how the draft's analysis can be applied in an experimental setting.** Consider a scenario where an agent collects a dataset $\\\\mathcal{D} = \\\\{(s_i, a_i, r_i, s'_i)\\\\}, i \\\\in [N]$, where both the rewards $r_i$ and the state-action occupancy measure of $\\\\mathcal{D}$ are subject to distortion. Simply distorting visitation probability does not intuitively explain how the data is collected. 
The data collection process follows a sequence: given a state and action, the agent receives a reward, transitions to the next state, and then selects the next action, repeating this process $N$ times. Lemma 1 addresses this by demonstrating that it is always possible to identify a distorted state that produces an equivalent distortion to that caused by the distortion function $w$. This result provides a more intuitive understanding of how the collected data exhibit distortion in an experimental setting, making the analysis more practical and relatable.\\n\\n**Q5, W2: Some Applications**\\n\\nThank you for highlighting the potential future extensions of how this analysis can be utilized. We have addressed this in our response to **[Q4: Aiding Algorithm Design] from reviewer 234c**. Briefly, our work suggests that the analysis can inform the development of safe exploration strategies that account for human misperception and adapt to changes in probability or feasible set size, helping to better prevent S-black swan events.\\n\\n**W3: Frequency of black swans**\\n\\nThank you for your question. Due to space limitations, we kindly refer you to our response to **[Q5: Hitting Time Analysis] from reviewer 234c** for further details.\"}", "{\"title\": \"Reviewer's follow-up response\", \"comment\": \"Thank you for the very detailed response. At this point, I think you have addressed many of my initial concerns. In particular, I found the clarification of Theorem 4 and the order-preserving property of CPT-distortion helpful. 
I also like the discussion on the difference between static and dynamic distortions and believe that incorporating the above clarifications into the paper will benefit its future readership.\\n\\nSome further comments on the authors' follow-up response:\\n\\n- On the experiment: I think the experiment you conducted is indeed interesting, but I am not sure about whether it is on theme: anyway your main argument is that CPT-distortion is bad for finding the optimal policy (such that it can be used to model \\\"black swans\\\"), so providing an example showing that some distortions are actually beneficial is somewhat strange (although also making sense) to me.\\n\\n- On the example of predicting blood sugar level: I think this example is interesting and indeed reflects the s-black swan notion that you attempted to formalize. Yet, it would be better if you could take a step further and show how your formalization can help in addressing this problem---perhaps in a future version of the paper (e.g., how to find a \\\"debiasing function\\\" to correct the distortions). I know that this may still require some additional efforts but do believe that this could be a great addition to your work.\\n\\nIn general, I appreciate the efforts of the authors in the response and decided to raise my rating from 3 to 5.\"}", "{\"summary\": \"This paper proposes that rare and high-risk events, namely black swans, can originate from misperceptions of an event\\u2019s reward and likelihood even in static environments. The paper formalizes this concept in the context of MDP-based decision tasks using the machinery of cumulative prospect theory, where the misperceptions of rewards and transition functions in MDPs are characterized by distortion functions, resulting in a gap between the ground truth MDP and the MDP perceived by humans. 
The paper then theoretically analyzes the impact of black swans on the value function and the hitting time of black swan events.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The main proposal of the paper, namely that black swan events can emerge in stationary environments, is interesting and makes sense to me. I also found the formalization using distortion functions on transition probabilities and rewards natural, capturing the intuition of human irrationality.\", \"weaknesses\": \"In my opinion, the most important weakness of the paper is that apart from the main claim (black swans in unchanging environments), it is not clear to me what the main takeaways of the paper are. The authors wrote in the abstract \\\"We hope these definitions can pave the way for the development of algorithms to prevent such events by rationally correcting limitations in perception\\\", but it is not clear to me how the definitions and theoretical results presented in the paper can really benefit this: under the formalization of the paper, black swans in stationary environments essentially stem from the distorted transition and reward functions, which naturally create a gap between the ground truth MDP and the human MDP. The main theoretical result of the paper (Theorem 4) focuses on proving this gap in the context of value function estimation, yet such a result is quite straightforward and the techniques used in proving it (bounding the gap in value estimation given the gap in transition and reward functions) are also somewhat common in the RL literature. Perhaps more importantly, I feel that such a result (and also the result in Theorem 5) is not really useful to inspire algorithm design since it does not tell us _how_ to correct the misperceptions in the human MDP. Other results in the paper are also quite natural to me and seem not really tied to the specific black swan problem considered by the paper (see Questions for details). 
While the authors have extensively discussed in the appendix that existing solutions in safe RL may also fall short, only _defining_ such a problem is not enough to me for acceptance.\\n\\nI also have some concerns about the overall presentation of the paper. For example, the formal definition of s-black swan is deferred to Section 6 but heavily referred to in earlier sections, and I think it may be better to move this definition to earlier sections in the paper. I also do not understand the role of Section 3 beyond it re-emphasizes the main claim of the paper in Remark 1.\", \"questions\": [\"Is the proof of Lemma 1 in the appendix correct? The lemma in the main text is stated for visitation probabilities, while the proof in the appendix seems to only deal with rewards. The authors could clarify if there is a connection between the reward-based proof and the visitation probability statement that is not immediately apparent.\", \"Theorem 3 seems a quite general (and somewhat trivial) result to me: of course if we consider a sufficiently general case, the difference in MDP transition probabilities would result in different optimal policies. What is the concrete relation between this result and the s-black swan defined by the paper?\", \"What is the role of Proposition 1\\uff1fAnd isn't it contradictory to Algorithm 1 (Algorithm 1 defines an s-black swan for arbitrary $t$, while Proposition 1 considers $t$ in specific time intervals)?\", \"How the definitions and theoretical results in the paper may be used to aid algorithm design?\", \"Could the hitting time analysis in Theorem 5 be used to inform when to trigger perception updates?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"None\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Main Message\", \"comment\": \"We would like to sincerely thank all the reviewers for their thorough and insightful feedback! 
We hope that this response effectively delivers the main message of our draft and contributes to a clearer and more comprehensive understanding of our work.\\n\\n**Our central argument highlights that if black swan events can also arise from human misperception, AI safety algorithms should broaden their focus beyond solely developing accurate predictive models for rare events.** Specifically, they should also aim to address and correct biases in data generated by human behavior, which are shaped by such misperceptions. This perspective emphasizes the importance of prioritizing **not only algorithmic design**, which is the current standard approach in AI safety, but also the mitigation of human-induced biases in the **data**. We believe that introducing a new direction for AI safety algorithms - one that has not been widely explored - is vital for the AI community. A deeper understanding of the origins of black swans paves the way for achieving greater control and making more informed, optimal decisions.\\n\\n\\nWe appreciate that this paper primarily focuses on definitions and offers a viewpoint that we believe is essential for guiding the direction of future AI safety algorithm design, rather than proposing specific algorithms to correct human misperception. ***Its central goal is to establish a foundational understanding of how black swans may be interpreted as a result of human misperception.*** While we acknowledge that the paper may not meet the standard for acceptance if it does not provide sufficient evidence to support this interpretation in the context of human perception, we also see significant potential in its contributions.\\n\\nIf the paper presents a well-reasoned perspective on the possibility of black swans occurring in stationary environments due to human misperception, supported by convincing evidence, we believe it could positively influence the AI community. 
Specifically, it could inspire efforts to develop AI safety algorithms that address human-biased data, moving beyond a sole focus on creating accurate forecasting algorithms for black swans. Exploring methods for bias correction or algorithm development would be a natural and meaningful extension of this work, although we recognize that such efforts fall outside the immediate scope of this paper.\"}", "{\"summary\": \"This paper is a theory paper that challenges the view that black swan events only originate from changing (non-stationary) environments. Instead, the paper focuses on defining S-Black Swan events, which occur in unchanging environments due to human misperception of events\\u2019 values and probabilities. The paper is focused on formalizing the definition of S-Black-Swan events, by starting from Hypothesis 1 in the introduction.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The main arguments of the paper are well-structured and well-communicated. The flow of the paper is helpful to the reader in communicating both the preliminary materials as well as leading to the mathematical formulation of the S-Black-Swan definition. For a theory paper, which could have the tendency to overcomplicate results, it feels like the authors have made significant effort to make the paper readable and therefore potentially meaningful to those who might use it as a future reference.\", \"The definitions of the stationary ground MDP, the Human MDP, and the Human Estimation MDP form a clear picture of the potential Agent-Environment Framework that could lead to a S-Black-Swan event.\", \"The presentation showing that black-swan events can arise in stationary environments is interesting.\"], \"weaknesses\": [\"One weakness of the paper is in its ability to build a strong link between the results of the paper and how it might affect the wider machine learning community. 
This weakness can be broken down into a combination of the following:\", \"The related works section is left to the end and reads a bit like a list of works at the intersection of expected utility theory and reinforcement learning. The reader gets to the end of the section and is then told that this literature does not cover black swan events, but this statement lacks enough motivation.\", \"The contribution of the paper is highlighted as defining an S-Black Swan event, but this contribution does not appear to be motivated by issues the current RL algorithms struggle with in the literature. Every so often the paper includes comments like \\u201caimed at guiding the design of safer ML algorithms in the future\\u201d; \\u201claying the groundwork for future algorithmic design\\u201d; and \\u201ctraditional risk criteria in RL are insufficient for managing the unique risks associated with black swan events\\u201d. While these comments might be the motivation of the paper, a more concrete motivation could be showing (or referencing) a specific RL (or ML) scenario which could benefit from this new definition and formalization of an S-Black-Swan event. This update seems like it would be important for an ICLR conference venue.\"], \"typos\": [\"Line 108 $\\\\mathbf{p}_c = \\u2026$ should be 3 prob choices.\"], \"comment\": [\"Figures 1c and 1d could benefit from being moved further down in the paper.\"], \"questions\": \"1. Are there any examples where a machine learning study/problem/algorithm would have benefited from the definition of an S-Black-Swan event?\\n2. The probability distortion function seems like it would be an easier function to measure in practice compared to the value distortion function. Each individual might legitimately have different reasons to value outcomes differently. Going with Example 1, a loss of -1000 might be significantly worse for a poorer person than a richer person. 
It looks like the theory still holds when $\\\\epsilon_r = 0$, but $\\\\epsilon_d > 0$, but could the authors comment on that?\\n3. If possible, could the authors provide additional context to the novelty of the work and how previous works have not considered an S-Black-Swan event, and attributed rare events to changing environments.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"Thank you for your submission to ICLR. This paper introduces the S-Black Swan (i.e., black swan events in stationary environments), which provides formalism for the setting where black swans occur in unchanging environments due to a misperception of events\\u2019 probabilities and values. This is in contrast to the typical assumption that a black swan event occurs in an unpredictable and dynamic environment. The authors define S-Black Swan in the context of a Markov Decision Process (MDP).\\n\\nReviewers agreed that the paper was clearly written, provides a novel contribution, and has strong mathematical formalism (which captures intuition well). However, there were concerns from multiple reviewers about whether this paper was well-enough motivated from problems in machine learning, and in general whether it has enough connection to the broader machine learning community\\u2014and wanted a clear explanation of how results from this paper can benefit this community. Also, multiple reviewers requested an empirical/practical application demonstration to make this connection more clear. In their rebuttal, the authors put a lot of effort into justifying this connection, and giving concrete demonstrations of applications, along with an experimental result. On balance, a majority of reviewers updated their scores by the end of the rebuttal period, and felt that their concerns were addressed. 
I therefore recommend this paper for acceptance, and encourage the authors to keep this feedback in mind when putting together the camera-ready version of their paper.\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal period, the authors made efforts to thoroughly answer the reviewers\\u2019 questions, where they focused on explaining the practical application of their method and providing empirical results. There was a healthy discussion with multiple reviewers, which led to multiple reviewers raising their scores. In the end, the authors addressed most concerns from all of the reviewers.\"}", "{\"title\": \"Experiments to Support Our Main Message\", \"comment\": \"Dear Reviewers 2mu5, DuS7, WW6X, and 234c,\\n\\nWe hope our comments address the reviewers' concerns! During the active discussions with Reviewer 234c, who raised *a very insightful question* to support the clarity and significance of this paper\\u2019s contribution, we focused on the following key question: \\n\\n**\\\"How does CPT-characterized distortion provide an additional meaning to the suboptimal gap?\\\"** \\n\\nTo address this question, we conducted **simulation experiments in Gridworld.** We kindly recommend the reviewers refer to the section titled: \\\"[Weakness 1] S-black swan is just an emergence from a non-zero model error from true P and R, so the theoretical analysis is trivial, making it indistinguishable from prior RL theory literature focusing on PAC bound analysis via model estimation - (Additional Answer 2): Experiment to support our answer for [Weakness 1]\\\"\\n\\nThis discussion is included in the **discussion panel with Reviewer 234c.** \\n\\nThrough the experiment, we concluded that: \\n\\n**\\\"CPT-distortion provides an intrinsic motivation (or helps shape a beneficial intrinsic reward) for the agent to discover the optimal policy, potentially outperforming that of the real-world scenario.\\\"**\", 
\"these_experimental_results_strongly_support_our_main_message\": \"Interpreting the risk by human misperception reveals novel phenomena, underscoring the need for in-depth scrutiny in future algorithm design.\\n\\nWe hope our experiment results address the concerns of reviewers who were interested in seeing simulation results. Thank you again for your thoughtful feedback and active engagement. We welcome any further questions or discussions! \\n\\nBest regards, \\nAuthors\"}", "{\"title\": \"Response of the reviewer\", \"comment\": \"Thank you for your detailed response, which addressed some of my concerns. I agree with you that the central message of the paper---that black swans can emerge due to human misperception, is an interesting and potentially significant contribution. I would also like to make it clear that my initial review is not meant to criticize this main message, but to share some of my concerns on (1) the formalization of the main message and (2) the implications of such a formalization.\\n\\nIn particular, I now realize that my initial feeling mostly came from the fact that most of MDP analysis in the paper models black-swans as _distortions on rewards and transition functions_ (e.g., Defs. 4 and 5). While this formalization is indeed natural, I think the current exposition of the paper does not really makes it stand out from the conventional analysis on MDPs, since \\\"changing the reward distribution and/or transition probabilities can change the optimal policy/value estimation\\\" is a well-known result in RL theory. I would also think that even the conventional (non-static) black swans can also be modeled as \\\"distortions\\\" on rewards or transition probs, despite that such distortions are caused by the non-stationary MDP dynamics itself. 
But what is the essential difference between the analysis of the conventional black swans and the s-black swans then?\\n\\nTo address this concern, I think the authors could include some concrete examples (e.g., in an application scenario) to underline the importance of formalizing the s-black swans, and discuss how such a formalization can indeed make a difference compared with the conventional formalization of black swans.\", \"some_follow_up_questions_that_i_still_have\": [\"Q1. I still do not understand why transforming the visitation probability of a specific state-action pair into the visitation probability of the corresponding reward is correct---as you have mentioned in the proof, different state-action pairs may be mapped to the same reward, so how can we discriminate between them then?\", \"Q3. I also do not understand why _any_ black swans can also be modeled by your framework---does this mean that there is not an essential difference between black swans and the introduced s-black swans? But then the position of the paper would be questionable.\"]}", "{\"comment\": \"Dear Reviewer WW6X,\\n\\nThank you for your thoughtful follow-up comment. We sincerely appreciate your understanding and we are writing further comments to address some points that may have been misunderstood.\\n\\n### **[Q1] How to build a strong link between the results of the paper and how it might affect the wider machine learning community?**\\n\\nThank you for raising this important question. We agree that establishing a strong connection between our findings and their implications for the wider machine learning community is crucial for deepening the contribution of our work.
Our response to this question aligns closely with our answer to [W1] from Reviewer 234c, and we adapt our previous comments to this context:\\n\\nTo address this question, we will answer how the model error induced by CPT-distortion can provide a new message in RL theory as follows:\\n\\n * ***[Question]: \\\"How does CPT-distortion on the model (P, R) provide an additional interpretation of the suboptimality gap?\\\"*** \\n\\nOur answer is \\n\\n * ***[Answer]: \\\"CPT-induced distortions exhibit an order invariance property, meaning that they preserve the order of $P$ and $R$, while still allowing for the possibility of finding suboptimal policies (as demonstrated in Theorem 3).\\\"***\", \"let_us_elaborate_further\": \"Theorems 1, 2, and 3 in Section 4 collectively clarify the distinction. The main message of Theorems 1 and 2 is that in environments with low complexity, the agent selects the same optimal policy, as the distortion functions $w$ and $v$ preserve the order of $R(s, a)$ and $P^\\\\pi(s, a)$ for all $(s, a)$. Then, the key difference from prior PAC-bound analyses lies in the following:\\n\\n * *[Detailed answer]: Regardless of the degree of distortion in $R$ and $P^\\\\pi$ (i.e., irrespective of how large the model-error bounds $\\\\epsilon_r,\\\\epsilon_p$ are, in terms of the RL theory literature), in low-complexity environments, the agent will still select the true optimal policy due to the \\\"order-invariant\\\" property of $w$ and $v$ (as outlined in Theorems 1 and 2).
However, this \\\"order invariance\\\" breaks as the complexity of the environment increases, leading to suboptimal policy selection, which is highlighted in Theorem 3.*\\n\\nFurthermore, maybe a more interesting question is \\\"How does the lower bound of the value gap provide a different perspective beyond the typical interpretation involving $\\\\epsilon_r$ and $\\\\epsilon_p$ as model errors?\\\"\\n\\nIn Theorem 4, while the lower bound does include the distortion of reward $C_{bs}$\\u2014a factor that aligns with previous RL theory literature utilizing $\\\\epsilon_r$\\u2014it also introduces two additional factors:\\n\\n * A larger feasible set for S-black swan events (i.e., $R_{max} - R_{bs}$ increases).\\n * A higher minimum probability of S-black swan occurrence (i.e., $\\\\epsilon^{min}_{bs}$ is larger).\\n\\nEven though both of these factors diminish as $\\\\epsilon_p$ converges to zero, we believe they represent new, distinct elements of the lower bound. These factors capture important characteristics of the problem that are not fully encapsulated by $\\\\epsilon_p$. By considering these elements explicitly, the lower bound offers a richer perspective on the dynamics of S-black swan events, beyond merely interpreting them as aspects of model error.\\n\\n### Additional experiment to support Q1\\n\\nTo further support our answer tothe reviewer's comments, we would like to provide simulation results addressing this point. If we agree that the critical question is: \\\"How does CPT-characterized distortion provide an *additional meaning* to the suboptimal gap? then a natural follow-up question arises: **\\\"Does CPT-characterized distortion offer any *benefit* to the agent in finding the optimal policy?\\\"** \\n\\nTo explore this, we conducted additional experiments. Consider an 8x8 Grid World where the agent always starts at $(0, 0)$ and can reach one of six different goal states located at $(4, 4)$, $(5, 5)$, $(6, 6)$, $(7, 7)$, $(8, 8)$, and $(9, 9)$. 
Each goal has a distinct reward from the set $[0.1, 1, 10, 100, 1000, 10000]$, and the agent incurs a step penalty of $-0.1$ for each move. Naturally, the optimal policy under real-world conditions would be to reach the goal at $(9, 9)$, which provides the highest reward. \\n\\nInterestingly, our simulation results reveal that there exists a **specific pair of reward distortion and probability distortion** (as defined in Definitions 1 and 2) that enables the agent to discover a better policy than what was optimized under real-world conditions.\"}", "{\"title\": \"Thanks for the comments!\", \"comment\": \"Dear Reviewer 2mu5,\\n\\nThank you for your feedback on emphasizing the significance of the theorems and including examples of real-world cases of S-Black Swans to illustrate their practical applications. We wanted to follow up and inquire if you have any further concerns or feedback that we can address as we approach the end of the rebuttal phase. We greatly value your insights and are always happy to discuss further!\\n\\nBest, Authors\"}", "{\"title\": \"Response to authors\", \"comment\": \"Thank you for the response, which addresses my concerns. I appreciate the authors' effort in modeling the S-black-swan events. Thus, I increase the score to 6. However, I believe that this work would be more beneficial if it proposed algorithm(s) or method(s) to mitigate the potential risks of these events.\"}", "{\"title\": \"Follow-up response for Reviewer\", \"comment\": \"Dear Reviewer 234c,\\n\\nThank you for your active engagement in these discussions! Your thoughtful feedback has been immensely valuable in helping us clarify and refine the presentation of our paper\\u2019s contributions. We have prepared follow-up responses to address potential misunderstandings and further address your concerns.
As always, we are more than happy to provide additional clarifications if needed.\\n\\n---\\n\\n## **[Point 1: Does CPT-distortion really help the agent find the optimal policy?]** \\n\\nIt seems there may have been some misunderstandings, and we would like to clarify an important point: \\n\\n**CPT-distortion can actually help the agent discover the optimal policy.** \\n\\nOur experimental results support this claim, along with additional experiments suggesting that the underlying reason is that CPT-distortion may provide an intrinsic motivation (or reward) [3,4,5] for the agent to find the optimal policy. This phenomenon, while counterintuitive, has been noted as an interesting and impactful insight in prior work [1, 2]. For example, paper [1] explicitly states in its abstract: \\n\\n> *\\\"Irrationality fundamentally helps rather than hinders reward inference, but it needs to be correctly accounted for.\\\"* \\n\\n### Experimental Clarification \\n\\nTo provide greater clarity, we revisit and further elaborate on the results of our experiments mentioned earlier. For ease of reference, we have numbered the entries in **Table 1** and **Table 2** as (1) through (12). \\n\\n#### **Key Insights from Table 1:** \\n- Results in **Table 1** show that as the level of distortion increases (moving from (2) \\u2192 (3) \\u2192 ... \\u2192 (7)), the probability of visiting the optimal state $(9,9)$ also increases, surpassing the real-world case (denoted as (1)). \\n- **Why does this happen?** \\n\\n#### **Key Insights from Table 2:** \\n- Results in **Table 2** empirically show that increasing the portion of an exploration bonus (a common method for designing intrinsic rewards/motivation [4]) also increases the visitation probability of $(9,9)$ (moving from (8) \\u2192 (9) \\u2192 ... \\u2192 (12)).
\\n- Notably, **case (10) matches with case (7)**, providing strong evidence that CPT-distortion can effectively serve as an intrinsic motivation mechanism for the agent to find the optimal policy. \\n\\nWe hope this explanation helps clarify the misunderstanding. Please let us know if you have any further questions or need additional clarification! \\n\\n## **[Point2: regarding with algorithm design]**\\n\\nThank you for your support and thoughtful engagement with the sugar-level example. We are pleased to hear that it helps clarify the distinction between non-stationarity and model distortion, addressing the reviewer\\u2019s concerns. Indeed, a natural extension of this work would be to explore future algorithm designs that aim to debias the distortion function. While we have not included algorithm design in this draft, we hope the current manuscript effectively conveys the **main message** we have emphasized and demonstrates its contribution to the field. Please let us know if further clarification would be helpful - we are always happy to discuss! \\n\\n\\n\\n## Reference\\n[1] Chan, Lawrence, Andrew Critch, and Anca Dragan. \\\"Human irrationality: both bad and good for reward inference.\\\" arXiv preprint arXiv:2111.06956 (2021). \\n[2] Kwon, Minae, Erdem Biyik, Aditi Talati, Karan Bhasin, Dylan P. Losey, and Dorsa Sadigh. \\\"When humans aren't optimal: Robots that collaborate with risk-aware humans.\\\" In Proceedings of the 2020 ACM/IEEE international conference on human-robot interaction, pp. 43-52. 2020. \\n[3] Chentanez, Nuttapong, Andrew Barto, and Satinder Singh. \\\"Intrinsically motivated reinforcement learning.\\\" Advances in neural information processing systems 17 (2004). \\n[4] Strehl, Alexander L., and Michael L. Littman. \\\"An analysis of model-based interval estimation for Markov decision processes.\\\" Journal of Computer and System Sciences 74.8 (2008): 1309-1331. 
\\n[5] Bellemare, M., Srinivasan, S., Ostrovski, G., Schaul, T., Saxton, D., & Munos, R. (2016). Unifying count-based exploration and intrinsic motivation. Advances in Neural Information Processing Systems, 29.\"}", "{\"comment\": \"### **[Q2] NASA example.**\\nThis example is for \\\"Q1. Real-world examples that can benefit from the definition of S-Black Swan events\\\". It focuses on supporting our hypothesis: the existence of a static black swan due to human misperception. Rather than the NASA example, we address how the Healthcare example can be framed in an ML setting in the following section.\\n\\n### **[Q3] Healthcare example.**\\nThanks for asking how this example relates to the ML setting. This comment is also similar to our comments under \\\"[Weakness 2]. Distortion Can Also Be Regarded as Non-Stationarity\\\" to Reviewer 234c. We elaborate further as follows.\\n\\nSuppose we aim to predict a patient\\u2019s blood sugar level, denoted as $A$, as a function of factors $W$, $X$, $Y$, and $Z$\\u2014such as the level of carbohydrate intake or the severity of the patient\\u2019s illness. We design a prediction function: \\n\\n$$ A = f(W, X, Y, Z), $$ \\n\\nand train $f$ on a given dataset of input-output pairs, $((w, x, y, z), a)$. \\n\\nHowever, due to human misperception about the relevant factors, we might overlook an important variable, $V$, which significantly impacts blood sugar spikes (i.e., black swan events). Specifically, humans might misperceive the joint probability $p(V, A)$ as being zero, and incorrectly assign the reward $r(V, A)$ as $r(V, A) + 100$, perceiving it as a benign event. As a result, $V$ is excluded when designing the function $f$. Despite our best efforts, failing to include $V$ in the model design introduces errors or non-stationarity in the predicted blood sugar levels, particularly for extreme cases. \\n\\nThis example illustrates how missing critical factors due to human misperception can lead to inaccurate predictions.
Furthermore, addressing such misperceptions could potentially reduce the variance (or non-stationarity) of the output $A$, improving model reliability and performance.\"}", "{\"comment\": \"### > Experimental Setup\\n\\n- **Distorted Worlds:** We consider six distorted environments, all of which share the same reward distortion pattern that adheres to Definitions 1 and 2. \\n- **Distortion Parameter ($\\\\gamma$):** $\\\\gamma$ represents the distortion rate of transition probabilities, where smaller values of $\\\\gamma$ indicate larger distortions. \\n- **Algorithm:** All experiments were performed using the Q-learning algorithm. \\n- **Outcome Metric:** The probabilities represent the visitation frequency of each goal state.\\n\\n### > Results \\n\\nThe results demonstrate that CPT-distorted worlds can enhance policy selection by leveraging the interaction between reward and probability distortions, ultimately leading to improved agent behavior in some cases. \\n### Table1\\n| visitation probability of Goal State | (4,4) | (5,5) | (6,6) | (7,7) | (8,8) | (9,9) = Optimal goal |\\n|-----------------|---------|---------|---------|---------|---------|---------|\\n| **(1) Real world (no distortion)** | 0.87929 | 0.09817 | 0.01621 | 0.00414 | 0.00168 | **0.00051** |\\n| **(2) Distorted world (only reward distortion)** | 0.93762 | 0.05177 | 0.00728 | 0.00207 | 0.00090 | **0.00036** |\\n| **(3) Distorted world (gamma=0.9 with reward distortion)** | 0.92069 | 0.06538 | 0.00986 | 0.00264 | 0.00108 | **0.00035** |\\n| **(4) Distorted world (gamma=0.8 with reward distortion)** | 0.90552 | 0.07670 | 0.01244 | 0.00342 | 0.00147 | **0.00044** |\\n| **(5) Distorted world (gamma=0.7 with reward distortion)** | 0.87729 | 0.09861 | 0.01674 | 0.00482 | 0.00185 | **0.00070** |\\n| **(6) Distorted world (gamma=0.6 with reward distortion)** | 0.83943 | 0.12043 | 0.02770 | 0.00843 | 0.00309 | **0.00090** |\\n| **(7) Distorted world (gamma=0.5 with reward distortion)** | 0.81693 | 
0.13363 | 0.03255 | 0.01048 | 0.00466 | **0.00175** |\\n\\n#### **Key Insights from Table 1:** \\n- Results in **Table 1** show that as the level of distortion increases (moving from (2) \\u2192 (3) \\u2192 ... \\u2192 (7)), the probability of visiting the optimal state $(9,9)$ also increases, surpassing the real-world case (denoted as (1)). \\n- **Why does this happen?** Intuitively, this occurs because **reward distortion alone** causes the agent to perceive the step reward of $-0.1$ as a significantly lower value, such as $-5$. This perception incentivizes the agent to prioritize shorter trajectories to avoid accumulating large penalties from step rewards, potentially leading to suboptimal goals (that is, settling for one of the goals from (4,4) to (8,8)). However, **distorting probabilities** introduces exploration, encouraging the agent to consider a broader range of trajectories. This additional exploration allows the agent to better evaluate distant goals, often leading to improved policies. Therefore, our answer to the above question is \\n\\n**[Answer]: CPT-distortion provides an intrinsic motivation (or helps shape a beneficial intrinsic reward) for the agent to find the optimal policy, potentially outperforming that of the real-world scenario.**\\n\\nWe have checked that CPT distortion can effectively serve this purpose. Specifically, we find that there exists a **reward bonus term** based on count-based exploration, expressed as: \\n\\n$$r(s,a) + \\\\frac{\\\\beta}{\\\\sqrt{n(s, a)}}, $$ \\n\\nwhere $n(s, a)$ represents the visitation count of state-action pair $(s, a)$.
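To make the bonus mechanism concrete, here is a minimal, self-contained sketch of tabular Q-learning where the count-based bonus above is added to the environment reward. This is not the code behind the tables: the function name `count_bonus_q_learning`, the 5x5 grid, the single +10 goal, and all hyperparameters below are illustrative assumptions of ours.

```python
from collections import defaultdict

def count_bonus_q_learning(size=5, goal=(4, 4), beta=1.5,
                           episodes=2000, alpha=0.5, gamma=0.95):
    """Tabular Q-learning with the count-based exploration bonus
    r(s, a) + beta / sqrt(n(s, a)) folded into the reward."""
    actions = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up
    Q = defaultdict(float)  # Q[(state, action_index)]
    n = defaultdict(int)    # visitation counts n[(state, action_index)]
    for _ in range(episodes):
        s = (0, 0)
        for _ in range(100):
            # optimistic action choice: learned value plus anticipated bonus
            a = max(range(4),
                    key=lambda i: Q[(s, i)] + beta / ((n[(s, i)] + 1) ** 0.5))
            n[(s, a)] += 1
            s2 = (min(max(s[0] + actions[a][0], 0), size - 1),
                  min(max(s[1] + actions[a][1], 0), size - 1))
            r = (10.0 if s2 == goal else 0.0) - 0.1  # goal reward, step penalty
            r += beta / (n[(s, a)] ** 0.5)           # intrinsic bonus, decays with visits
            target = r + gamma * max(Q[(s2, i)] for i in range(4))
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            if s2 == goal:  # goal is terminal; its Q-values stay at 0
                break
            s = s2
    return Q, n
```

Because unvisited pairs always look worth `beta` while visited ones decay, the agent sweeps the grid until the goal's value dominates, mirroring the exploration effect attributed to probability distortion above.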
\\n\\n### Table 2\\n| visitation probability of Goal State | (4,4) | (5,5) | (6,6) | (7,7) | (8,8) | (9,9) |\\n|-------------------|---------|---------|---------|---------|---------|---------|\\n| **(1) Real world (no distortion)** | 0.87929 | 0.09817 | 0.01621 | 0.00414 | 0.00168 | 0.00051 |\\n| **(8) Real world (Reward Bonus - beta=0.5)** | 0.86549 | 0.10520 | 0.02041 | 0.00587 | 0.00235 | 0.00068 |\\n| **(9) Real world (Reward Bonus - beta=1.0)** | 0.85266 | 0.11078 | 0.02397 | 0.00817 | 0.00340 | 0.00103 |\\n| **(10) Real world (Reward Bonus - beta=1.5)** | 0.83812 | 0.11758 | 0.02782 | **0.01000** | **0.00479** | **0.00170** |\\n| **(11) Real world (Reward Bonus - beta=2.0)** | 0.82782 | 0.12016 | 0.03137 | 0.01253 | 0.00587 | 0.00215 |\\n| **(12) Real world (Reward Bonus - beta=3.0)** | 0.80600 | 0.12934 | 0.03732 | 0.01605 | 0.00831 | 0.00298 |\\n| **(7) Distorted world (gamma=0.5)** | 0.81693 | 0.13363 | 0.03255 | **0.01048** | **0.00466** | **0.00175** |\\n\\n#### **Key Insights from Table 2:** \\n- Results in **Table 2** empirically show that increasing the portion of an exploration bonus (a common method for designing intrinsic rewards/motivation [4]) also increases the visitation probability of $(9,9)$ (moving from (8) \\u2192 (9) \\u2192 ... \\u2192 (12)). \\n- Notably, **case (10) matches with case (7)**, providing strong evidence that CPT-distortion can effectively serve as an intrinsic motivation mechanism for the agent to find the optimal policy.\"}", "{\"comment\": \"## **[Weakness 2]. Distortion Can Also Be Regarded as Non-Stationarity**\\n\\nThank you for raising this question -- this is indeed an excellent point! We agree that the distortion of an MDP between time $t$ and $t+1$ is mathematically similar to the distinction between a Ground-MDP and a Human-MDP. However, while the two formulations are mathematically analogous, they provide different perspectives on the design of RL algorithms. 
\\n\\nIn non-stationary environments, the majority of RL algorithms focus on how to adapt quickly to previously unseen environments. On the other hand, viewing the gap as a form of **static misperception** offers an alternative approach. It suggests finding a \\\"static\\\" distortion function that can eventually debias the collected data and prevent a black swan. This is also similar to thinking of finding the \\\"stationary source\\\" (the functions $v$ and $w$) that generates the non-stationary environments.\\n\\nThe following example might not be perfect, but our view can also be applied to identify optimal factors that reduce the non-stationarity of the output we aim to predict. \\n\\n### Example: Predicting a Patient's Blood Sugar Level \\n\\nSuppose we aim to predict a patient\\u2019s blood sugar level, denoted as $A$, as a function of factors $W$, $X$, $Y$, and $Z$\\u2014such as the level of carbohydrate intake or the severity of the patient\\u2019s illness. We design a prediction function: \\n\\n$$ A = f(W, X, Y, Z), $$ \\n\\nand train $f$ on a given dataset of input-output pairs, $((w, x, y, z), a)$. \\n\\nHowever, due to human misperception about the relevant factors, we might overlook an important variable, $V$, which significantly impacts blood sugar spikes (i.e., black swan events). Specifically, humans might misperceive the joint probability $p(V, A)$ as being zero, and incorrectly assign the reward $r(V, A)$ as $r(V, A) + 100$, perceiving it as a benign event. As a result, $V$ is excluded when designing the function $f$. Despite our best efforts, failing to include $V$ in the model design introduces errors or non-stationarity in the predicted blood sugar levels, particularly for extreme cases. \\n\\nThis example illustrates how missing critical factors due to human misperception can lead to inaccurate predictions.
Furthermore, addressing such misperceptions could potentially reduce the variance (or non-stationarity) of the output $A$, improving model reliability and performance. \\n\\n\\n## **[Question 1]: Transforming the Visitation Probability of a Specific State-Action Pair into the Visitation Probability of the Corresponding Reward**\\n\\nThank you for this thoughtful question! We agree that establishing a one-to-one mapping between state-action visitation probabilities and reward values is not a highly practical setting. However, we believe this is a trivial technical issue. Recall that \\\"Humans\\\" manually design the rewards of certain events, which are not inherently given by the environment as true values. It is entirely open and flexible for a **Human reward designer** to craft rewards corresponding to different state-action pairs. For example, they could: \\n\\n1. Add Gaussian noise to rewards in a sparse reward setting. \\n2. Design the reward function as a continuous function at the initial stage. \\n\\nThis flexibility allows for practical implementation in various environments, enabling the one-to-one mapping. \\n\\n## **[Question 2]: What Is the Meaning of Any Black Swan?**\\n\\nWe apologize for the earlier confusion\\u2014let us clarify this point more specifically. \\n\\nWhat we meant by saying \\\"any black swan can be regarded as an S-Black Swan\\\" depends on how the **time interval** is chosen. Recall that any non-stationary environment can be conceptualized as a **piecewise stationary environment**. In this framework, a general non-stationary environment that changes at every time step $t$ can be viewed as having a time interval of 1 (i.e., a stationary environment for each time step). \\n\\nThus, our statement means that for a fixed time interval where certain segments of the non-stationary environment are stationary, a black swan can be regarded as an S-Black Swan within that interval.
\\n\\n#### Practicality of This Setting\\nThis setting is practical because, in many cases, non-stationarity evolves slowly over time. For example: \\n\\n- **User Preferences:** Preferences for products or movies do not change every second or every day, making them piecewise stationary reward functions. \\n- **Insulin Reaction for Diabetes Patients:** The effect of insulin depends on the condition of a patient's body, but the condition does not fluctuate every second or every day, making it a piecewise stationary probability function. \\n\\nThis concept is well-illustrated in **Case 2 of Example 2**. \\n\\nWe hope this detailed explanation addresses your concerns. Please feel free to reach out with additional questions or for further clarification - we are more than happy to discuss!\"}", "{\"title\": \"Further Clarifications on the Reviewer's Concerns\", \"comment\": \"Dear Reviewer 2mu5,\\n\\nWe hope our response helps address the reviewer\\u2019s initial concerns. If Reviewer 2mu5 has any additional points or questions to discuss, please do not hesitate to let us know. Your insightful comments have guided us toward \\n * presenting empirical evidence (a gridworld multi-goal experiment) that supports our main message, \\n * providing an additional real-world case that interprets the origin of the black swan as a static misperception\\n\\nand we look forward to hearing whether this has resolved your concerns. We would be delighted to continue the discussion and provide further clarification to ensure a clear understanding of our work. \\n\\nBest,\\nAuthors\"}", "{\"title\": \"Thanks for the feedback!\", \"comment\": \"Dear Reviewer WW6X,\\n\\nThank you for your thoughtful feedback, especially on how the results of this paper can contribute to the AI community -- this seems to be a critical consideration for this foundational draft. We hope our follow-up comments have effectively addressed your concerns.
As the discussion phase draws to a close, we wanted to check if you have any additional feedback or concerns. Please feel free to share, and we would be more than happy to discuss further. Thank you once again!\\n\\nBest, \\nAuthors\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"comment\": \"(continued) We have checked that CPT distortion can effectively serve this purpose. Specifically, we find that there exists a **reward bonus term** based on count-based exploration, expressed as:\\n\\n$$r(s,a) + \\\\frac{\\\\beta}{\\\\sqrt{n(s, a)}}, $$ \\n\\nwhere $n(s, a)$ represents the visitation count of state-action pair $(s, a)$. \\n\\n### Table 2\\n| visitation probability of Goal State | (4,4) | (5,5) | (6,6) | (7,7) | (8,8) | (9,9) |\\n|-------------------|---------|---------|---------|---------|---------|---------|\\n| **(1) Real world (no distortion)** | 0.87929 | 0.09817 | 0.01621 | 0.00414 | 0.00168 | 0.00051 |\\n| **(8) Real world (Reward Bonus - beta=0.5)** | 0.86549 | 0.10520 | 0.02041 | 0.00587 | 0.00235 | 0.00068 |\\n| **(9) Real world (Reward Bonus - beta=1.0)** | 0.85266 | 0.11078 | 0.02397 | 0.00817 | 0.00340 | 0.00103 |\\n| **(10) Real world (Reward Bonus - beta=1.5)** | 0.83812 | 0.11758 | 0.02782 | **0.01000** | **0.00479** | **0.00170** |\\n| **(11) Real world (Reward Bonus - beta=2.0)** | 0.82782 | 0.12016 | 0.03137 | 0.01253 | 0.00587 | 0.00215 |\\n| **(12) Real world (Reward Bonus - beta=3.0)** | 0.80600 | 0.12934 | 0.03732 | 0.01605 | 0.00831 | 0.00298 |\\n| **(7) Distorted world (gamma=0.5)** | 0.81693 | 0.13363 | 0.03255 | **0.01048** | **0.00466** | **0.00175** |\\n\\nFrom the above table, note that the bonus term with $\\\\beta = 1.5$ in the real world aligns with the best performance observed in the distorted world (gamma=0.5). \\n\\nWe hope this explanation provides a clear intuition behind the mechanism of CPT distortion and its potential to shape intrinsic motivation!
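As a side note on the probability-distortion side of this comparison: the exact forms of Definitions 1 and 2 are not reproduced in this thread, so the sketch below assumes the canonical Tversky-Kahneman weighting function, with `gamma` playing the same qualitative role as in the tables above (smaller values mean stronger distortion). The name `tk_weight` and the parameterization are ours, not the paper's.

```python
def tk_weight(p, gamma=0.5):
    """Tversky-Kahneman style probability weighting,
    w(p) = p^g / (p^g + (1-p)^g)^(1/g).
    For gamma < 1, small probabilities are overweighted and
    large ones underweighted; smaller gamma = stronger distortion."""
    num = p ** gamma
    return num / (num + (1.0 - p) ** gamma) ** (1.0 / gamma)

# Under gamma = 0.5, a rare transition with true probability 0.01 is
# perceived as roughly 0.083, i.e. about 8x more likely -- one intuition
# for why probability distortion can act like an exploration drive,
# while the weighting remains monotone (order-preserving), consistent
# with the order-invariance discussion around Theorems 1 and 2.
```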
\\n\\nShould you have any further questions or require clarification, we would be delighted to discuss this further :)\"}", "{\"title\": \"Further Clarifications on the Reviewer's Concerns\", \"comment\": \"Dear Reviewer DuS7,\\n\\nWe sincerely hope our response has effectively addressed the reviewer\\u2019s initial concerns! Should Reviewer DuS7 have any further questions or points for discussion, we warmly encourage you to share them with us. Your thoughtful feedback has been instrumental in helping us:\\n\\n * clarify the definitions and motivations of the value & visitation probability distortion functions, as well as emphasize the significance of Lemma 1,\\n * present empirical evidence (a gridworld multi-goal experiment) that strengthens our main message, and\\n * provide an additional real-world example illustrating how a static misperception can give rise to black swans and inform future algorithm design.\\n\\nWe look forward to hearing whether these revisions have sufficiently addressed your concerns. We would be delighted to continue the discussion and provide further clarification to ensure a clear understanding of our work!\\n\\nBest, Authors\"}", "{\"title\": \"Thanks for the comments!\", \"comment\": \"Dear Reviewer 2mu5,\\n\\nWe would like to sincerely thank the reviewer for inquiring about empirical evidence and the potential next steps of this work. Before addressing the specific concerns, we kindly encourage the reviewer to first review the ***global comment on the main message***, followed by our detailed responses to specific weaknesses and questions.\\n\\nWe are happy to provide further clarification on any remaining concerns or questions from the reviewer!\\n\\n**Q1 and W1. Empirical evidence or case studies**\\n\\nThank you for highlighting the importance of evidence. 
In addition to the Lehman Brothers bankruptcy case mentioned in the introduction, we present another case that illustrates how high-risk rare events (black swans) can occur, not due to unpredictable changes in the environment, but rather due to misperception - specifically, underestimating the probability of a certain event:\\n\\n* **Unexpected Drowning of NASA Astronauts Due to Overlooked Details**: \\n (The following is a summary of the case; please refer to [1, 2] for a more detailed account.) Before launching rockets, NASA conducted tests on its space suits using high-altitude balloons. On May 4, 1961, Victor Prather and another pilot ascended to 113,720 feet to evaluate the suit's performance. While the test itself was successful, an unforeseen risk led to tragedy during the planned ocean landing. Prather opened his helmet faceplate to breathe fresh air, and as he slipped into the water while attaching to a rescue line, his now-exposed suit filled with water. Despite NASA\\u2019s rigorous planning and preparation, the risk of opening the faceplate - perceived as an extremely minor detail - was **underestimated, resulting in catastrophic consequences**. This highlights how even rigorous, meticulous planning can still fail to account for overlooked events.\\n\\n*References:* \\n[1] Jan Herman, \\u201cStratolab: The Navy\\u2019s High-Altitude Balloon Research,\\u201d lecture, Naval Medical Research Institute, Bethesda, MD, 1995, archive.org/details/StratolabTheNavysHighAltitudeBalloonResearch. \\n[2] Douglas Brinkley, American Moonshot (New York: Harper, 2019), 237.\\n\\n**Q2. Application of proposed viewpoints to other domains such as healthcare or environmental science**\\n\\nThank you for pointing this out.
We provide examples of how black swan events can occur in the domains of healthcare as follows:\\n\\n* **Healthcare**: \\n Diabetes patients typically experience a highly chronic condition, making their state relatively predictable a few hours into the future (stationary environment). However, **rare hypoglycemic events**, characterized by a sudden and dangerous drop in blood sugar, pose significant risks. To address this, [3] developed an RL model capable of predicting changes in a patient's condition and providing optimized treatments. Furthermore, [4, 5] emphasize the critical importance of selecting appropriate signals as inputs, as **human misperceptions about what constitutes important signals** can lead to unexpected and suboptimal decisions. As a result, traditional supervised learning methods, such as general Transformers paired with loss functions like MSE, may struggle to accurately predict rare events like hypoglycemic episodes when the input signals fail to align with the underlying dynamics.\\n\\n*Reference*:\\n\\n[3] Wang, G., Liu, X., Ying, Z. et al. Optimized glycemic control of type 2 diabetes with reinforcement learning: a proof-of-concept trial. Nat Med 29, 2633\\u20132642 (2023). https://doi.org/10.1038/s41591-023-02552-9. \\n[4] Panda, D., Ray, R., & Dash, S.R. (2020). Feature Selection: Role in Designing Smart Healthcare Models. Intelligent Systems Reference Library. \\n[5] C. Ambhika, S. Gayathri, A. T. P, B. G. Sheena, N. M and S. S. R, \\\"Enhancing Predictive Modeling in High Dimensional Data Using Hybrid Feature Selection,\\\" 2024 5th International Conference on Electronics and Sustainable Communication Systems (ICESC),\\n\\n**Q3 and W3. Next steps on how to develop the algorithm** \\n\\nThank you for inquiring about potential extensions of this work. The theorem we have developed can significantly inform future algorithm design. 
We would like to note that our response to Q3 is the same as our answer to **[Q4: Aiding Algorithm Design] from reviewer 234c**.\\n\\n**W2. Difficult for practitioners to apply directly due to complex mathematical formalizations.**\\n\\nThank you for raising this important concern. We recognize that, while rigorous, the mathematical formalizations may be challenging for practitioners to apply directly. In the revision, we will provide intuitive explanations, such as real-world examples (e.g., the NASA and healthcare cases mentioned above), analogies, or visual aids, which could make the concepts more accessible.\"}", "{\"summary\": \"This paper introduces the concept of \\\"s-Black Swan\\\" - statistically rare, high-risk events that can occur in unchanging environments due to human misperception of event probabilities and values. The authors present a formal framework to define this in the context of MDPs and argue that for safety in RL systems, it's important to consider stationary MDPs that present these Black Swan events.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"This paper presents a new viewpoint on how to look at Black Swan events. Specifically, it points to the case of stationary MDPs where agents have a distorted perspective on reward signals and visitation probabilities, which are likely to be overlooked by researchers.\", \"The mathematical rigor is strong - the authors have done a great job at defining s-Black Swan using the existing concepts of MDP and its special cases. It gives a good framework for future researchers to build upon while trying to model such s-Black Swan events.
Specifically, Theorem 5 seems to be the most useful aspect of this paper, providing an analytical bound on the probability of encountering the rare event.\", \"Particularly, I liked the formulation of HEMDPs - seems to be a particularly useful modeling strategy.\"], \"weaknesses\": [\"While the paper is very rigorous, the details might be very hard to follow for non-specialists. Some of the aspects are not very intuitive.\", \"This paper lacks a practical application demonstration - it'll be great if the authors can describe how a practitioner can use the definitions that the paper provides for practical applications.\", \"Building upon the previous point - it'll be useful for us to understand how frequent are such MDPs where the users have a distorted view of the reward signals and the visitation probabilities.\"], \"questions\": [\"Can you explain why the focus is on visitation probabilities v/s transition function? This is something that's not very intuitive.\", \"Why is the value distortion function and the probability distortion function modeled in such a piece-wise way? Is this just for simplicity? How do we decide how to model them?\", \"Example 2, case 3 - what does it mean for an MDP to be always a black swan?\", \"What is the significance of Lemma 1? Can you describe how should one intuitively understand it?\", \"Can you describe some applications where this will be useful? And how should one think about modeling such scenario - my understanding is that HEMDPs would be most appropriate for such situations.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thanks for the feedback!\", \"comment\": \"Dear Reviewer 2mu5,\\n\\nThanks for your feedback and for suggesting potential future directions, such as algorithms to mitigate the potential risks of these events (correcting the human misperception). 
We wanted to follow up and inquire if you have any further concerns or feedback that we can address as we approach the end of the rebuttal phase. We greatly value your insights and are always happy to discuss further!\\n\\nBest, \\nAuthors\"}", "{\"comment\": \"Dear Reviewer 234c,\\n\\nWe sincerely thank you for your active participation, which has greatly contributed to making this draft more insightful, enhancing our ability to convey our message to the readers -- especially the great questions and follow-ups regarding the experiments section. As we approach the end of the rebuttal phase, we wanted to kindly ask if you have any additional concerns or feedback that we can address. Once again, thank you for your valuable engagement!\\n\\nBest, \\nAuthors\"}", "{\"comment\": \"Thanks for addressing my comments. I'd highly recommend the authors to include this discussion in their next version of the manuscript given that it contains useful clarifications. I'd be raising my score based on your comments.\"}", "{\"comment\": \"### **(Additional answer2) Experiment to support our answer for [Weakness 1]**\\n\\nTo further support the reviewer\\u2019s insightful question regarding the impact of **CPT-characterized distortion** on the suboptimality gap, we would like to provide simulation results addressing this point. If we agree that the critical question is: \\n\\n**\\\"How does CPT-characterized distortion provide an *additional meaning* to the suboptimal gap?\\\"**\\n\\nthen a natural follow-up question arises:\\n\\n**\\\"[Question] Does CPT-characterized distortion offer any *benefit* to the agent in finding the optimal policy?\\\"** \\n\\nTo explore this, we conducted additional experiments. Consider a 10x10 Grid World where the agent always starts at $(0, 0)$ and can reach one of six different goal states located at $(4, 4)$, $(5, 5)$, $(6, 6)$, $(7, 7)$, $(8, 8)$, and $(9, 9)$. 
Each goal has a distinct reward from the set $[0.1, 1, 10, 100, 1000, 10000]$, and the agent incurs a step penalty of $-0.1$ for each move. Naturally, the optimal policy under real-world conditions would be to reach the goal at $(9, 9)$, which provides the highest reward. \\n\\nInterestingly, our simulation results reveal that there exists a **specific pair of reward distortion and probability distortion** (as defined in Definitions 1 and 2) that enables the agent to discover a better policy than what was optimized under real-world conditions. \\n\\n### Experimental Setup \\n\\n- **Distorted Worlds:** We consider six distorted environments, all of which share the same reward distortion pattern that adheres to Definitions 1 and 2. \\n- **Distortion Parameter ($\\\\gamma$):** $\\\\gamma$ represents the distortion rate of transition probabilities, where smaller values of $\\\\gamma$ indicate larger distortions. \\n- **Algorithm:** All experiments were performed using the Q-learning algorithm. \\n- **Outcome Metric:** The probabilities represent the visitation frequency of each goal state.\\n\\n### Results \\n\\nThe results demonstrate that CPT-distorted worlds can enhance policy selection by leveraging the interaction between reward and probability distortions, ultimately leading to improved agent behavior in some cases. 
\\n### Table1\\n| visitation probability of Goal State | (4,4) | (5,5) | (6,6) | (7,7) | (8,8) | (9,9) = Optimal goal |\\n|-----------------|---------|---------|---------|---------|---------|---------|\\n| **(1) Real world (no distortion)** | 0.87929 | 0.09817 | 0.01621 | 0.00414 | 0.00168 | **0.00051** |\\n| **(2) Distorted world (only reward distortion)** | 0.93762 | 0.05177 | 0.00728 | 0.00207 | 0.00090 | **0.00036** |\\n| **(3) Distorted world (gamma=0.9 with reward distortion)** | 0.92069 | 0.06538 | 0.00986 | 0.00264 | 0.00108 | **0.00035** |\\n| **(4) Distorted world (gamma=0.8 with reward distortion)** | 0.90552 | 0.07670 | 0.01244 | 0.00342 | 0.00147 | **0.00044** |\\n| **(5) Distorted world (gamma=0.7 with reward distortion)** | 0.87729 | 0.09861 | 0.01674 | 0.00482 | 0.00185 | **0.00070** |\\n| **(6) Distorted world (gamma=0.6 with reward distortion)** | 0.83943 | 0.12043 | 0.02770 | 0.00843 | 0.00309 | **0.00090** |\\n| **(7) Distorted world (gamma=0.5 with reward distortion)** | 0.81693 | 0.13363 | 0.03255 | 0.01048 | 0.00466 | **0.00175** |\\n\\nIt is easy to observe that for a fixed reward distortion, as the distortion of transition probability increases (i.e., smaller $\\\\gamma$), the distorted world shows a higher probability of visiting the state $(9,9)$ than the real-world scenario. For example, in the real world, the visitation probability is $0.00051$, while in the distorted world, it surpasses this value when $\\\\gamma \\\\leq 0.7$ (e.g., at $\\\\gamma = 0.7$, the probability is $0.00070$). \\n\\nSo, why does this happen? Intuitively, this occurs because **reward distortion alone** causes the agent to perceive the step reward of $-0.1$ as a significantly lower value, such as $-5$. This perception incentivizes the agent to prioritize shorter trajectories to avoid accumulating large penalties from step rewards, potentially leading to suboptimal goals (that is, going to one of the goals among (4,4) to (8,8)). 
\\n\\nHowever, **distorting probabilities** introduces exploration, encouraging the agent to consider a broader range of trajectories. This additional exploration allows the agent to better evaluate distant goals, often leading to improved policies. Therefore, our answer to the above question is \\n\\n**[Answer]: CPT-distortion provides an intrinsic motivation (or helps shape a beneficial intrinsic reward) for the agent to find the optimal policy more effectively than in the real world**\"}", "{\"title\": \"Thanks for the comments!\", \"comment\": \"Dear Reviewer 234c,\\n\\nFirst, we would like to thank the reviewer for their fruitful and thoughtful comments. We are impressed by the depth of the feedback, which reflects a thorough scrutiny of our draft and demonstrates the significant time and effort invested. Before addressing the reviewer's concerns, we kindly encourage the reviewer to first read the **global comment on the main message**, followed by our responses that address specific weaknesses and questions. \\n\\nWe are happy to provide further clarification on any remaining concerns or questions from the reviewer.\\n\\n\\n **Q1.Proof of Lemma1**\\n\\nThanks for pointing out the gap between the Lemma and the proof. ***Yes, the proof of Lemma 1 is correct***, and we apologize for any confusion. We understand that, at first glance, there may appear to be a significant gap between the statement and the proof of Lemma 1, which could be challenging for readers to bridge. For the analysis, we ***transform the visitation probability of a specific state-action pair $(s, a)$ into the visitation probability of the corresponding reward, as described on lines 1131 to 1133 of the appendix***. We believe not clarifying this point creates a large gap in understanding the proof and Lemma 1. This transformation allows us to perform further analysis in the space of $\\\\mathbb{R}$, rather than in the more complex space of $\\\\mathcal{S} \\\\times \\\\mathcal{A}$. 
This shift simplifies the problem and facilitates a more tractable analysis.\\n\\n**Q2.Trivialness of Theorem 3**\\n\\n We appreciate your observation regarding this matter. The main reason why Theorem $3$ is introduced is to align the statements of Theorems $1$ and $2$ with the ***completeness of Section $4$***. We understand the reviewer's point -- we are happy to rewrite Theorem $3$ as a Remark to avoid over-emphasizing the contribution of this paper.\\n\\n**Q3.Role of Proposition 1**\\n\\n Also, thanks for bringing this up. We would like to first clarify that the role of Section $3$ is to distinguish black swans in stationary environments from black swans in non-stationary environments. Since a stationary environment is a subset of non-stationary environments, we first state this classification through Algorithm $1$. Then, Proposition $1$ states that for any black swan event, we can always find a time interval that classifies any black swan as an S-black swan (which can also be thought of as an observation interval). The ***main role of Proposition $1$ is to establish the possibility of applying our analysis (interpretation) to any black swans***. To be more specific, suppose that $(s, a, t)$ is a black swan for $t \\\\in [10, 20]$. By Algorithm $1$, even though $(s, a)$ is a black swan in a non-stationary environment, this paper also demonstrates that it is possible to classify $(s, a)$ as an S-black swan during the time interval $[10, 20]$, enabling further interpretation and methods to handle $(s, a)$ during $[10, 20]$. This is also well illustrated in Example $2$.\\n\\n**Q4.Aiding Algorithm design** \\n\\nThanks for asking this question. *We believe that this is a crucial point*. We also believe that the global comments provided above partially address the reviewer's concerns. 
To be more specific, the lower bound of S-black swan events depends on three factors: *greater distortion* in reward perception (i.e., larger $C_{bs}$), *a larger feasible set* for S-black swan (i.e., larger $R_{\\\\max} - R_{bs}$), and a *higher minimum probability* of S-black swan occurrence (i.e., larger $\\\\epsilon^{\\\\min}_{bs}$). This framework highlights the importance of designing future algorithms that aim to correct human perception by reducing reward distortion. Additionally, these algorithms should also focus on decreasing the feasible set size and the minimum probability of occurrence. ***This is still an open direction but to the best of our knowledge, such considerations can inform how much the agent should explore in a safe manner that takes misperception into consideration. That is, our analysis enables guiding the design of a safe exploration strategy that is sensitive to whether the probability or the size of the feasible set is being enlarged.*** This connection between exploration strategies and the underlying factors provides a pathway for advancing the design of algorithms to handle S-black swan events.\\n\\n**Q5.Hitting time analysis** \\n\\nAlso, thanks for pointing out the implication of the hitting time analysis. This is well-stated in lines 429 to 432, which explain how frequently the correction algorithm should be updated in proportion to the magnitude of the perception gap and the minimum frequency of black swan events.\"}
To provide additional clarity and support for this perspective, we offer further examples beyond the Lehman Brothers bankruptcy case mentioned in the introduction:\\n\\n* **[Case 1: Unexpected Drowning of NASA Astronauts Due to Overlooked Details]** \\nBefore launching rockets, NASA conducted tests on its space suits using high-altitude hot-air balloons. On May 4, 1961, Victor Prather and another pilot ascended to 113,720 feet to evaluate the suit's performance. While the test itself was successful, an unforeseen risk led to tragedy during the planned ocean landing. Prather opened his helmet faceplate to breathe fresh air, and as he slipped into the water while attaching to a rescue line, his now-exposed suit filled with water. Despite NASA\\u2019s rigorous planning and preparation, the risk of opening the faceplate - perceived as an extremely minor detail - was ***underestimated, resulting in catastrophic consequences***. This highlights how rigorously meticulous planning can still fail to account for overlooked events. [1,2]\", \"references\": \"[1] Jan Herman, \\u201cStratolab: The Navy\\u2019s High-Altitude Balloon Research,\\u201d lecture, Naval Medical Research Institute, Bethesda, MD, 1995, archive.org/details/StratolabTheNavysHighAltitudeBalloonResearch. \\n[2] Douglas Brinkley, American Moonshot (New York: Harper, 2019), 237.\\n\\n* **[Case 2: Healthcare - Hypoglycemic Events in Diabetes Patients]** \\nDiabetes patients typically experience a highly chronic condition, making their state relatively predictable a few hours into the future (stationary environment). However, ***rare hypoglycemic events***, characterized by a sudden and dangerous drop in blood sugar, pose significant risks. To address this, [3] developed an RL model capable of predicting changes in a patient's condition and providing optimized treatments. 
Furthermore, [4, 5] emphasize the critical importance of selecting appropriate signals as inputs, as ***human misperceptions about what constitutes important signals can lead to unexpected and suboptimal decisions***. As a result, traditional supervised learning methods may struggle to accurately predict rare events like hypoglycemic episodes when the input signals fail to align with the underlying dynamics.\", \"reference\": \"[3] Wang, G., Liu, X., Ying, Z. et al. Optimized glycemic control of type 2 diabetes with reinforcement learning: a proof-of-concept trial. Nat Med 29, 2633\\u20132642 (2023). https://doi.org/10.1038/s41591-023-02552-9. \\n[4] Panda, D., Ray, R., & Dash, S.R. (2020). Feature Selection: Role in Designing Smart Healthcare Models. Intelligent Systems Reference Library. \\n[5] C. Ambhika, S. Gayathri, A. T. P, B. G. Sheena, N. M and S. S. R, \\\"Enhancing Predictive Modeling in High Dimensional Data Using Hybrid Feature Selection,\\\" 2024 5th International Conference on Electronics and Sustainable Communication Systems (ICESC), \\n\\n**Q2. Different distortion with respect to different individuals**\\n\\nThank you for raising this insightful question\\u2014it is an excellent point! We agree that individuals may exhibit varying distortions in value and perception. Addressing how black swan events arise due to the collective behavior in a multi-agent human setting, where each individual has a distinct distortion rate, is an intriguing avenue for future work. Such differences could potentially result in varying suboptimal policies, further highlighting the complexity of this phenomenon.\\n\\n**Q3. Additional context on the novelty and prior works that have not considered S-Black Swan events**\\n\\n* **Additional context on the novelty**: \\n Thank you for pointing out the importance of clarifying the novelty of this work. 
We believe that our global comment on the **Main Message** provides a precise explanation of the context and contributions of this study. However, if this is insufficient, please let us know - we would be happy to provide further elaboration. \\n\\n* **Prior works that have not considered S-Black Swan events**: \\n We believe that our response to the reviewer's previous question, **[Q1. Real-world examples that can benefit from the definition of S-Black Swan events]**, addresses this issue. Please feel free to let us know if additional clarification is needed, and we will be glad to expand further.\\n\\n**W1. How the results of the paper help the AI community** \\n\\nWe kindly refer the reviewer to our response to **[Q4. Aiding Algorithm Design] from Reviewer 234c** and our global comment on the **Main Message** for insights on how our results contribute to future algorithm design and how our work can positively influence the AI community.\"}", "{\"summary\": \"This paper challenges the conventional understanding of black swan events, which are typically seen as arising from unpredictable and dynamic environments. The authors propose that such high-risk, statistically rare events can also occur in static environments due to human misperception of events\\u2019 values and likelihoods, introducing the concept of S-BLACK SWAN. The paper categorizes black swan events, formalizes their definitions mathematically, and provides a framework for understanding and preventing these events through improved perception.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"The paper presents a novel hypothesis that black swan events can occur in static environments due to human misperception, which is a significant departure from the traditional view. 
This new perspective could open up fresh avenues for research in risk management and machine learning.\", \"The theoretical framework is well-developed, with rigorous mathematical formalizations and proofs. The use of Markov Decision Processes (MDPs) to model human perception and misperception is particularly robust.\", \"The paper is well-structured, with clear definitions and logical progression of ideas.\", \"By redefining the origins of black swan events, the paper has the potential to significantly impact the fields of machine learning, risk management, and decision theory. It provides a foundation for developing algorithms that can better handle rare, high-risk events.\"], \"weaknesses\": [\"The paper lacks empirical validation of the proposed hypothesis. While the theoretical framework is strong, it would benefit from experimental results or real-world case studies demonstrating the occurrence of S-BLACK SWAN events.\", \"The mathematical formalizations, while rigorous, are quite complex and may be difficult for practitioners to apply directly. Simplifying some of the models or providing more intuitive explanations could enhance accessibility.\", \"The paper primarily focuses on financial and autonomous systems. Expanding the discussion to other domains where black swan events are critical, such as healthcare or environmental science, could broaden the impact of the work.\"], \"questions\": [\"Can the authors provide empirical evidence or case studies that demonstrate the occurrence of S-BLACK SWAN events in real-world scenarios and model it using the proposed algorithm?\", \"How can the proposed algorithm be applied to other domains beyond finance and autonomous systems? Are there specific examples or case studies in areas like healthcare or environmental science?\", \"What are the next steps for developing algorithms based on the proposed hypothesis? 
Are there any preliminary results or ongoing projects that the authors can share?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Further Clarifications on the Reviewer's Concerns\", \"comment\": \"Dear Reviewer WW6X,\\n\\nWe hope our response addresses the reviewer\\u2019s initial concerns! If Reviewer WW6X has any further points or questions, please feel free to share them with us. Your valuable initial feedback has helped us to:\\n\\n * present empirical evidence (a gridworld multi-goal experiment) that supports our main message,\\n * provide an additional real-world example illustrating how a static misperception can lead to black swan events,\\n * clarify the novelty of our work in relation to the global comments summarized as the \\\"main message,\\\" and\\n * acknowledge the potential extension of our work to multi-agent settings where agents exhibit differing distortion rates, which we can highlight as future work.\\n\\nWe look forward to hearing your thoughts on whether these revisions have addressed your concerns. Please let us know if further clarification is needed, and we would be glad to continue the discussion.\\n\\nBest, \\nAuthors\"}", "{\"title\": \"Response\", \"comment\": \"Thanks for your response. I am not sure that the provided \\\"real-world examples\\\" tackle the original main weakness that I put forward: \\\"ability to build a strong link between the results of the paper and how it might affect the wider machine learning community\\\". The NASA example and the Healthcare example do not link this work to RL.\"}", "{\"title\": \"Thanks for the comments!\", \"comment\": \"Dear Reviewer WW6X,\\n\\nWe would like to sincerely thank the reviewer for requesting additional evidence and highlighting the novelty of our work. 
Before addressing the specific concerns, we kindly encourage the reviewer to first review the global comment on the **Main Message**, followed by our detailed responses to specific weaknesses and questions as below!\"}", "{\"title\": \"Thanks for the review\", \"comment\": \"Dear Reviewer 234c,\\n\\nThanks for getting back to us. Actually, we are delighted to note that the points you raised present an excellent opportunity to enhance the paper's contribution, beyond simply conveying its central message (as discussed earlier).\\n\\nTo address your concerns, we have provided detailed responses organized into the following four key points: [Weakness 1, 2 and Question 1, 2]\\n\\nWe believe this discussion has brought us to a shared understanding, allowing us to respond to your queries in a more direct and precise manner. Should you have any additional questions or concerns, please do not hesitate to reach out -- we would be more than happy to continue the discussion!\\n\\n### ***[Weakness 1] S-black swan is just an emergence from a non-zero model error from true $P$ and $R$, so the theoretical analysis is trivial, making it indistinguishable from prior RL theory literature focusing on PAC bound analysis via model estimation.***\\n\\n$\\\\textcolor{red}{\\\\text{This is a really great question!}}$ \\n\\nTo address this, we believe it is helpful to reframe the question into a more fundamental inquiry as follows:\\n\\n * ***[Question]: How does CPT-distortion on the model (P,R) provide an additional interpretation of the suboptimality gap?*** \\n\\nOur answer is \\n\\n * ***[Answer]: \\\"CPT-induced distortions exhibit an order invariance property, meaning that they preserve the rank order of $P$ and $R$, while still allowing for the possibility of finding suboptimal policies (as demonstrated in Theorem 3).\\\"***\\n\\nLet us elaborate further: Theorems 1, 2, and 3 in Section 4 collectively clarify the distinction. 
The main message of Theorems 1 and 2 is that in environments with low complexity, the agent selects the same optimal policy, as the distortion functions $w$ and $v$ preserve the order of $R(s, a)$ and $P^\\\\pi(s, a)$ for all $(s, a)$. Then, the key difference from prior PAC-bound analyses lies in the following:\\n\\n * *[Detailed answer]: Regardless of the degree of distortion in $R$ and $P^\\\\pi$ (i.e., irrespective of how large the model-error bound $\\\\epsilon_r,\\\\epsilon_p$ is, in terms of RL theoretical literature), in low-complexity environments, the agent will still select the true optimal policy due to the \\\"order-invariant\\\" property of $w$ and $v$ (as outlined in Theorems 1 and 2). However, this \\\"order invariance\\\" breaks as the complexity of the environment increases, leading to suboptimal policy selection, which is highlighted in Theorem 3.*\\n\\n ### **(Additional answer1) Additional explanation for reviewer's understanding on [Weakness 1]**\\n\\nWe hope the explanation above is helpful. However, if further clarification is needed, we would like to provide additional support by bringing in Theorem 4 to address your concerns more comprehensively. 
To frame this discussion more specifically, another key question might be:\\n\\n\\\"How does the lower bound of the value gap provide a different perspective beyond the typical interpretation involving $\\\\epsilon_r$ and $\\\\epsilon_p$ as model errors?\\\"\\n\\nIn Theorem 4, while the lower bound does include the distortion of reward $C_{bs}$\\u2014a factor that aligns with previous RL theory literature utilizing $\\\\epsilon_r$\\u2014it also introduces two additional factors:\\n\\n * A larger feasible set for S-black swan events (i.e., $R_{max} - R_{bs}$ increases).\\n * A higher minimum probability of S-black swan occurrence (i.e., $\\\\epsilon^{min}_{bs}$ is larger).\\n\\nEven though both of these factors diminish as $\\\\epsilon_p$ converges to zero, we believe they represent new, distinct elements of the lower bound. These factors capture important characteristics of the problem that are not fully encapsulated by $\\\\epsilon_p$. By considering these elements explicitly, the lower bound offers a richer perspective on the dynamics of S-black swan events, beyond merely interpreting them as aspects of model error.\"}" ] }
7jDv1RrNQX
Path Selection Makes BERT-family Good Generators
[ "Yisheng Xiao", "xiaobo liang", "Juntao Li", "Zechen Sun", "Pei Guo", "Wenpeng Hu", "Min Zhang" ]
The Mask-Predict decoding algorithm has been widely used to enhance the generation capacity of traditional non-autoregressive (NAR) models and provide a good recipe for adapting the pre-trained BERT-like masked language models (MLMs) to NAR generation scenarios. However, these models, which we denote as NAR-MLMs, are still regarded as inferior to competitive autoregressive (AR) models in terms of performance. In this paper, we further explore the core problems leading to the performance gap of NAR-MLMs and delve into effective solutions for technological innovation. Specifically, most related works neglect the impact of the training sequence decomposition format, i.e., unlike the AR models, which can naturally decompose the text sequence in a left-to-right manner for training and inference, NAR-MLMs are trained with a random decomposition but aim to find a determined optimal composition (denoted as decoding paths) during inference. To alleviate this mismatch, we propose decoding path selection to increase the search space for finding a better composition, and path optimization methods to enable the model's decoding path preference during the training process. Results on various zero-shot common sense reasoning and reading comprehension tasks and several task-specific generation tasks demonstrate that our NAR-MLM achieves significant performance improvements on common benchmarks with the methods mentioned above, reaching performance levels comparable to or even outperforming AR pre-trained models. Our model and code will be available on GitHub.
[ "BERT-family", "path selection", "natural language generation" ]
Reject
https://openreview.net/pdf?id=7jDv1RrNQX
https://openreview.net/forum?id=7jDv1RrNQX
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zfaZ6xffbT", "uNvKpzqThi", "Riiy5ZciLa", "7yvAwyKKyq", "657vBY7aOW", "60p3GTiQbm" ], "note_type": [ "decision", "meta_review", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1737524296297, 1734622011734, 1730742466309, 1730216681342, 1730690413954, 1730640856639 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission14035/Area_Chair_1fdf" ], [ "ICLR.cc/2025/Conference/Submission14035/Reviewer_pJpu" ], [ "ICLR.cc/2025/Conference/Submission14035/Reviewer_zyki" ], [ "ICLR.cc/2025/Conference/Submission14035/Reviewer_ddFt" ], [ "ICLR.cc/2025/Conference/Submission14035/Reviewer_g4ao" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"metareview\": \"The paper argues that the way existing methods use BERT-like models for generation (what they are calling NAR-MLMs) is suboptimal. They claim that the random decomposition of the text sequence during training doesn't match the need to find a specific order during generation. The paper tackles a relatively under-explored area \\u2013 using encoder-only models like BERT for generation. The experiments are criticized for their limited scope (lack of diversity in generative tasks, comparing against one AR model, lack of different sized models). The comparison to AR models isn't considered fair by some reviewers because of the experimental setup differences. The path selection algorithm isn't clearly explained, making it hard for reviewers to assess soundness. The reviewers ask for more analysis to better understand the role of the proposed methods and other possible issues in the algorithm. 
Addressing the weaknesses, particularly improving clarity and providing more robust experiments, would be crucial for future submission consideration.\", \"additional_comments_on_reviewer_discussion\": \"There is no author rebuttal.\"}", "{\"summary\": \"This paper investigates the application of BERT-style models as generators. It introduces two path selection strategies and trains a BERT-style model with instruction-following capabilities. The study demonstrates zero-shot performance in common sense question answering and reading comprehension, and achieves comparable autoregressive generation capabilities on the XSUM summary dataset.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"In the context of the rising popularity of decoder-only architecture in large language models, this paper revisits the encoder-only architectures. This perspective may encourage the academic community to view generative models differently and inspire new thinking about model architecture.\", \"The method proposed by the author endows BERT-style models with zero-shot capabilities in common sense question answering tasks, while also offering a speed advantage over autoregressive models.\"], \"weaknesses\": [\"While I believe research on encoder-only models remains valuable, the motivation behind this paper is unclear to me. Given that autoregressive models have demonstrated excellent generative capabilities across various tasks, why do the authors choose to focus on BERT-style models instead of enhancing autoregressive models? I look forward to discussing this with the authors.\", \"If the authors aim to explore whether BERT-style models can achieve generative capabilities comparable to those of autoregressive models, then a broader range of generative tasks should be included in their experiments. 
The datasets used in the paper, such as ARC, primarily consist of multiple-choice questions and do not leverage generative capabilities.\", \"If the authors intend to demonstrate that BERT-style models can achieve faster generation speeds, they should compare their speeds with a broader range of non-autoregressive models. However, it is unclear why the authors only use the encoder-decoder model BART as the baseline for this speed comparison in Table 2.\", \"The experimental comparison does not seem to be fair. The author's pre-trained GeBERT uses advanced RoPE position encoding and sets the maximum length to 2048, but the maximum length of BART is 1024. More advanced generative models are recommended for comparison.\", \"The writing needs improvement. The path selection algorithm is not clearly articulated, and there are also several typos present as follows:\", \"a)\\tSection 3.1 discusses the path selection algorithm; however, the phrase \\u201cAs shown on the left in Figure 2\\u201d in line 157 refers to the path selection-star illustrated in Figure 2.\", \"b)\\tLine 372, \\u201cBART-largeGeBERT 124M\\u201d -> \\u201cBART-large\\u201d\", \"c)\\tLine 535, \\u201cfron scratch\\u201d -> \\u201cfrom scratch\\u201d\"], \"questions\": [\"Why is the speed of Large Version not compared in Table 2?\", \"In Table 1, how does GeBERT perform under fine-tuning as a traditional encoder-only model? This helps to judge the basic capabilities of GeBERT.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper examines BERT-family models' generation capabilities. It proposes path selection to expand inference search space and path selection*, which is an application of DPO, to train the model for better path selection. 
The methods improve BERT's performance across multiple zero-shot tasks to match or exceed autoregressive models.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"This paper proposes innovative solutions - path selection to expand inference search space and path selection* to integrate path selection into training, which demonstrably elevate BERT's performance across multiple zero-shot tasks to match or exceed autoregressive models.\", \"weaknesses\": \"The generation capabilities of BERT are investigated in previous papers, e.g. [1]. This paper aims to improve the performance of BERT by using path selection, a non-autoregressive variation for inference. However, the paper lacks details on path selection, especially about how to choose the position to predict if beam size > 2. There are other potential issues that should be considered for Mask-Predict methods, e.g. the potential mode collapse with many iterations. More ablation studies about the designs and clear pseudo-codes are needed.\\n\\nPath selection* is an application of DPO, but the `Score' function is not defined in the paper, which diminishes the contribution of the paper.\\n\\n[1] Patel, Ajay, et al. \\\"Bidirectional Language Models Are Also Few-shot Learners.\\\" The Eleventh International Conference on Learning Representations.\", \"questions\": \"How to choose the position to predict if beam size > 2? In your case (beam size = 2), the masked position with the highest probability is chosen, but if beam size > 2, both top-k and one-by-one replacement are possible. 
So the necessary details are lacking.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This work introduces GeBERT, a variant of BERT specifically pre-trained to leverage path selection, claiming that it performs competitively, often on par with or surpassing traditional autoregressive models on various zero-shot and task-specific generation tasks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. With the rise of autoregressive models, research on BERT-family models for generative tasks has decreased, and few studies have explored their generative capabilities. This study takes a fresh approach to directly compare BERT models with AR models.\\n\\n2. Introduces a method to expand the search space during inference, allowing BERT models to select the optimal path for improved generation quality, and also incorporates path selection into the training process, enabling BERT models to learn and prefer certain paths over others, further enhancing output quality.\\n\\n3. Experimental results show substantial improvements in both zero-shot and fine-tuned settings, demonstrating that, with these modifications, BERT-family models can effectively compete with AR models.\", \"weaknesses\": \"1. While this research is innovative and valuable, a fundamental question remains\\u2014why use BERT for generation tasks? Given scaling laws, AR models generally improve significantly with larger model sizes, while BERT-family models struggle to achieve similar gains. Moreover, as shown in Table 1, the performance improvements for GeBERT are limited compared to AR models.\\n\\n2. The path selection techniques introduce additional complexity, particularly in tuning hyperparameters for path selection*. This added complexity could hinder the practical application and usability of these methods.\\n\\n3. 
The experimental models are relatively small in scale. Larger models might address or clarify the first weakness I raise, providing a stronger case for BERT's use in generation tasks.\", \"questions\": \"see above\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes a method to select the decoding paths of BERT-family to conduct generation tasks. They also propose a DPO method to optimize the preferences of the model over different paths. Experimental results show that the decoding method and preference optimization method can effectively improve the generation quality of BERT-family on zero-shot commonsense reasoning, reading comprehension, summarization and question generation tasks.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. The proposed method selects the positions to decode dynamically, which is suitable for BERT-like models. BERT-like models trained with masked language modeling objective can predict any positions in the target part, while autoregressive language models can only follow the left-to-right order.\\n2. Experimental results show that the proposed method can effectively improve the generation quality of BERT-like models on various tasks.\", \"weaknesses\": \"1. On Line 146 the authors aim to identify an optimal decoding path. In other parts the authors also state that previous methods cannot find the optimal paths. But the proposed method cannot guarantee to find the optimal decoding path. It is a form of beam search over the mask-predict algorithm, and beam search cannot guarantee to find the optimal solution.\\n2. The description of the path selection algorithm in Section 3.1 is unclear. The example in Figure 2 is not enough since it didn't describe how the lowest-k total prediction probabilities are selected. 
The authors should give a pseudocode to describe the algorithm.\\n3. In Section 5.1 the authors find that BERT-like models can achieve competitive performance with AR models when the model predicts one token in each decoding step. In that case it requires the same number of decoding steps as AR models, and the compute of each decoding step is much larger than that of AR models since BERT-like models cannot reuse kv cache during inference. There is no efficiency advantage compared with AR models. Therefore I challenge the motivation to apply BERT-like models to generation tasks.\\n4. On Line 87 the authors claim that path selection* incorporates path selection into the training process. In fact neither the sampling methods to generate the pairs nor the computation of the probability considers the path selection method.\", \"questions\": \"1. What does it mean that \\\"only one position in masked parts can be replaced by the one in unmasked parts to obtain the candidate decoding states\\\"? Does it mean that only one position is predicted at each decoding step?\\n2. Is the algorithm equivalent to mask-predict when $k$ equals 1?\\n3. What is $n_{\\\\text{new}}$ in Table 2? If it is equal to 1 as in Table 1, how can the model achieve speedup over AR models?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
7igPXQFupX
CoTFormer: A Chain of Thought Driven Architecture with Budget-Adaptive Computation Cost at Inference
[ "Amirkeivan Mohtashami", "Matteo Pagliardini", "Martin Jaggi" ]
Scaling language models to larger and deeper sizes has led to significant boosts in performance. Even though the size of these models limits their application in compute-constrained environments, the race to continually develop ever larger and deeper foundational models is underway. At the same time---regardless of the model size---task-specific techniques continue to play a pivotal role in achieving optimal downstream performance. One of these techniques, called Chain-of-Thought (CoT), is particularly interesting since, as we point out in this work, it resembles employing a deeper transformer through re-applying the model multiple times. However, a key subtlety in computing the attention of past tokens differentiates CoT from simply applying the model several times. Based on this insight, we propose CoTFormer, a novel architecture which closely mimics CoT at the token level, allowing us to obtain significantly improved accuracies close to much larger models. While applying CoT introduces additional computation costs, we compensate for it by leveraging CoTFormer's special compatibility with token-wise variable depth. Through a compute adaptive model---which automatically allocates the compute to tokens that need it most---we show that it is possible to reduce the computation cost significantly without any reduction in accuracy, and with further compute cost reductions possible while maintaining a competitive accuracy.
[ "language models", "adaptive compute", "chain of thought" ]
Accept (Poster)
https://openreview.net/pdf?id=7igPXQFupX
https://openreview.net/forum?id=7igPXQFupX
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wE9zPV3nP6", "uSriWqcCkA", "kjheq4zbUH", "kiPcGbOmPu", "jT1CC7wLU6", "dtnuQkx3Je", "cEeV0gUERe", "XpKkHW0giX", "U9fTvQ45O9", "PL3kjEA8iD", "P9xPBk4dTl", "NWfJp9Mobr", "LFlmYsmKR1", "K55KywEZB4", "9JJCAY48Xa", "3aOLdkJ93c" ], "note_type": [ "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_review", "official_review", "official_comment", "decision", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732529014769, 1732500135082, 1734861234215, 1731934163563, 1730377555088, 1733268824568, 1730657301544, 1730240465238, 1733035911315, 1737523666897, 1729125044836, 1732715763990, 1731936984566, 1731934200570, 1733051233236, 1731937065081 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4870/Reviewer_UfJo" ], [ "ICLR.cc/2025/Conference/Submission4870/Reviewer_YaZu" ], [ "ICLR.cc/2025/Conference/Submission4870/Area_Chair_je8H" ], [ "ICLR.cc/2025/Conference/Submission4870/Authors" ], [ "ICLR.cc/2025/Conference/Submission4870/Reviewer_UfJo" ], [ "ICLR.cc/2025/Conference/Submission4870/Area_Chair_je8H" ], [ "ICLR.cc/2025/Conference/Submission4870/Reviewer_YaZu" ], [ "ICLR.cc/2025/Conference/Submission4870/Reviewer_rjXu" ], [ "ICLR.cc/2025/Conference/Submission4870/Reviewer_rjXu" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4870/Reviewer_jCWB" ], [ "ICLR.cc/2025/Conference/Submission4870/Authors" ], [ "ICLR.cc/2025/Conference/Submission4870/Authors" ], [ "ICLR.cc/2025/Conference/Submission4870/Authors" ], [ "ICLR.cc/2025/Conference/Submission4870/Authors" ], [ "ICLR.cc/2025/Conference/Submission4870/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for your responses.\", \"i_think_one_important_question_here_is\": \"**which model architecture is a more appropriate baseline for the current work,\\nthe standard Transformer or (Block) 
Universal Transformer?**\\nThis question is related to what CoTFormer intends to achieve, and also to how evaluation of it should be done.\\n\\nTo explain this, let's first recall the original Universal Transformer paper (https://arxiv.org/pdf/1807.03819).\\nThe motivations and fundamental advantages of Universal Transformer was clearly stated in the abstract, for example:\\n- \\\"Despite these successes, however, popular feed-forward sequence models like the Transformer fail to generalize in many simple tasks that recurrent models handle with ease, e.g. copying strings or even simple logical inference when the string or formula lengths exceed those observed at training time.\\\"\\n- \\\"UTs combine the parallelizability and global receptive field of feed-forward sequence models like the Transformer with the recurrent inductive bias of RNNs.\\\"\\n- \\\"In contrast to the standard Transformer, under certain assumptions UTs can be shown to be Turing-complete.\\\"\\n\\nAnd to validate the claimed advantages, empirical results on both synthetic tasks and some common benchmarks were reported.\\nPerplexity on text was reported only for LAMBADA, for which the accuracy was also reported.\\n\\n\\nOf course, the complexity of the Universal Transformer architecture limits its adoption in practice (and standard Transformer is still dominant), \\nbut that's fine as long as it has sufficient research value.\\n\\n\\nNow, let's get back to CoTFormer.\\nIf its goal is merely to improve language modeling perplexity or scores on common benchmarks (which is how most of the evaluation is done in the current work), then I think the standard Transformer (with equal inference FLOPs) would be a more appropriate baseline, and CoTFormer doesn't seem to outperform it.\\nOn the other hand, if its goal is to achieve fundamental advantages over the standard Transformer and Universal Transformer (via attention to previous intermediary states, which is the main innovation of CoTFormer),\\nthen 
it's totally fine to use Universal Transformer as the baseline, \\nbut language modeling perplexity or common benchmarks would not be sufficient in this case.\\n\\n\\nFrom my perspective, a nearly 100% accuracy on a carefully crafted synthetic task (no matter how toy it is) where standard / Universal Transformer totally fails\\nwould be much more exciting and insightful than a minor improvement in perplexity,\\nand it is this kind of evidence that can truly demonstrate the fundamental benefits of scaling up inference FLOPs in the particular way that CoTFormer does (rather than simply using a standard Transformer with twice as many layers).\\nThe bottomline would be to show that CoTFormer retains the key advantages that Universal Transformer (with equal inference FLOPs) has, while improving perplexity or scores on common benchmarks.\\n\\n\\nOverall, I feel that the current work does not fully meet my expectations for a research work that introduces a novel model architecture,\\nand I'm inclined to maintain my rating.\\nHowever, I acknowledge that my evaluation might be somewhat subjective, so I'm not strongly opposed to acceptance.\"}", "{\"title\": \"Thank you!\", \"comment\": \"Thank you for your detailed response!\\n\\nThe new evaluations of CoTFormer across wider models, the comparison against a new baseline (pause tokens) and the added attention pattern analysis all make the paper stronger. While I still have reservations about the theoretical foundations and experimentation (e.g. adaptive/fixed-depth performance gaps and training efficiency), these improvements are material and the paper would, in my view, be a good addition to the proceedings and useful for research.\", \"nit\": \"In the newly added Table 7 (Appendix G), the last column name should be more descriptive that it\\u2019s a perplexity measurement, i.e. 
\\u201cPerplexity with $n_{\\\\text{repeat}} =\"}", "{\"metareview\": \"Summary:\\nThe paper introduces CoTFormer, a novel transformer architecture that draws inspiration from chain-of-thought reasoning. The key innovation is allowing tokens to attend to representations from all previous \\\"thought\\\" steps, which differs from both standard transformers and Block Universal Transformers. The authors claim this leads to improved accuracy while maintaining parameter efficiency through weight sharing. They also propose an adaptive computation mechanism that dynamically allocates computational resources during inference based on token difficulty.\", \"strengths\": [\"Novel architectural insight connecting chain-of-thought reasoning to transformer design\", \"Practical application for compute-constrained environments\", \"Clear empirical improvements over baseline approaches\", \"Effective adaptive computation mechanism\", \"Thorough ablation studies and analysis\", \"Well-documented implementation details\"], \"weaknesses\": [\"Limited theoretical understanding of why the architecture works better\", \"Experiments focused on relatively short sequence lengths (256) compared to modern standards\", \"Some gaps in performance between adaptive and fixed-depth variants\", \"Training efficiency challenges for deeper layers\", \"Limited exploration of performance at larger scales\", \"Could benefit from more diverse dataset evaluation\"], \"reasons_for_acceptance\": [\"The paper presents a novel and practical architectural innovation that shows clear improvements over existing approaches\", \"The work addresses an important practical challenge (compute-efficient language models)\", \"The empirical results, while not revolutionary, demonstrate consistent improvements\", \"The authors have been responsive to reviewer concerns and provided additional experiments\", \"The adaptive computation mechanism adds practical value\", \"The work opens up interesting directions for future 
research\"], \"additional_comments_on_reviewer_discussion\": \"The review discussion highlighted several important aspects of the paper. Reviewers initially debated whether standard transformers or Block Universal Transformers were more appropriate baselines, with authors clarifying their focus on improving upon Block Universal Transformers while maintaining adaptivity benefits. Questions about empirical validation led to additional experiments with longer sequences and new comparisons against pause token baselines. Some reviewers requested more theoretical analysis, though the authors maintained their focus on empirical improvements while acknowledging theoretical analysis as future work. They did add attention pattern visualization in the appendix to provide more insight into the model's behavior. Concerns about scalability were addressed through additional results with wider models and clarification of memory footprint analysis.\\nOverall, the author responses and additional experiments adequately addressed most reviewer concerns. While some reservations remain about theoretical foundations and comprehensive evaluation, the improvements and clarifications demonstrate that this work makes a valuable contribution worthy of acceptance. The paper advances our understanding of efficient transformer architectures and provides practical benefits for compute-constrained scenarios.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThank you for your careful consideration of our work and helpful suggestions. We provide the following answers to your comments:\\n\\n1. The goal of this paper is to demonstrate the benefit of having access to past intermediate representations through attention and point out an often overlooked difference between block universal transformer and chain of thought. While understanding the inner workings of Transformers or CoTFormers is a very exciting and interesting direction it falls outside the scope of this work. 
Still, as you suggested, we have added plots of the attention patterns observed in a CoTFormer to the appendix of our paper\u2019s revised version. Interesting patterns similar to what you mentioned, such as heads focusing on tokens generated during specific repeats, can be observed.\\n\\n2. While we agree that the used sequence length is shorter than that of state-of-the-art models, we use it to save on computation costs. That being said, our intuition suggests that the method\u2019s performance is disentangled from the context\u2019s original length. To support this, we provide two additional pieces of evidence in the appendix of the revised version of our paper. First, we demonstrate the superiority of CoTFormer on longer sequence lengths of 512. Second, we also compare the performance of CoTFormer with using pause tokens suggested in [1]. While the latter also provides longer sequence lengths to the model during inference, it can be seen that this does not suffice to obtain the high accuracy obtained by CoTFormer. \\n\\n4. As can be seen from Equation 3, the (i+1)-th pass of a token through the model is done by inputting the output of the i-th pass to the model. Therefore, similar to the figure, the last token representation is processed by the model to generate the next one. However, when using CoTFormer, earlier versions (e.g. the initial token representation) can also be accessed through attention.\\n\\n5. This can be done by providing a special mask vector which allows tokens in the i-th pass to attend to tokens that come before them and are in earlier passes. For efficiency, in the implementation, we append the tokens in the new pass to the sequence instead of interleaving them. Thus the mask is no longer fully causal. \\n\\n6. We use the same position id as the original tokens. Thank you for raising this question. We will include this clarification in the camera-ready version of the paper.\\n\\n7. 
As suggested, we have also added results for models with a larger width (1024) to the appendix in our revision. It can be seen that even for the larger widths the gap between CoTFormer and Block Universal Transformer persists. That being said, intuitively, going to extremely large widths might allow Block Universal Transformers to obtain the same performance as CoTFormers, since one can theoretically fit the same information as multiple tokens in one token with a much larger width. However, aside from possibly being impractical, such a setting diminishes the adaptivity benefits of CoTFormer as it needs to operate on the larger width for all tokens.\\n\\n8. \\u201cActivating a prefix of repeats\\u201d is similar in inference to fixed depth, since we fix the depth in this case. However, note that in this case the model is not trained with fixed depth and therefore we can use the same model at different fixed depths. \\nWhen using a mixture of depth, at training we fix the ratio of tokens reaching each depth. But at inference, e.g. for decoding, we usually use a fixed threshold to decide from the router\\u2019s weight whether to proceed to the next depth or not. Varying this threshold is how we vary the compute in Figure 4. When using this thresholding mechanism, the decision can be made auto-regressively during decoding. Thus there is no issue regardless of the batch size. \\n\\n\\n9. We appreciate the various suggestions you provided and have already implemented some of them, adding experimental results for longer sequence lengths and larger widths. We point out that in this work our aim is to clarify the benefits of having access to earlier representations of the model through attention, which is clearly supported by our results. We agree that improvements in efficiency of the method are great directions for future work, and as you mentioned, we have already outlined them in Section 5. 
Similarly, theoretical analysis of this phenomenon would be interesting but falls outside the scope of this work.\\n\\n\\nWe hope the above comments fully address your concerns. We ask that you kindly consider raising your score and remain at your disposal if you have any additional comments or questions.\"}", "{\"summary\": \"This work proposes CoTFormer, a novel Transformer-based model architecture for generative language models.\\nLike Universal Transformers, a CoTFormer re-applies the same Transformer blocks for an adaptive number of times for generating each token.\\nThe major difference is, after each repeat, the output tokens interleaved with input tokens are used as the new input for the next repeat;\\nthis is inspired by chain-of-thought where each generated \\\"thought token\\\" can attend directly to all previous thought tokens.\\nOther details are handled, such as compatibility with KV cache and batch processing during inference.\\nEmpirical results show that a CoTFormer achieves lower perplexity or higher scores in some common benchmarks \\nthan Block Universal Transformer with the same configuration, \\nor a standard Transformer of the same size.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"This work proposes a novel Transformer-based model architecture, and draws an interesting link to chain-of-thought.\", \"The proposed CoTFormer is compatible with KV cache and batch processing, which is not the case for many other adaptive-computation architectures tailored to autoregressive generation.\", \"Overall the writing is good, and things are mostly well explained.\", \"The source code is provided.\"], \"weaknesses\": \"My major concern is that the empirical evidence for the efficacy of CoTFormer, or its advantages over the standard Transformer architecture, is insufficient.\\n\\n- Generally speaking, the empirical results from the main text suggest that a CoTFormer with $n_{\\text{repeat}} \\ge 2$ 
slightly outperforms a standard Transformer of the same size in terms of perplexity, but underperforms a standard Transformer with twice as many layers, except in Table 2 where a CoTFormer with $n_{\\text{repeat}}$ as large as 5 (and other tweaks) achieves a perplexity that is lower by only 0.06. \\nThe issue is that the inference cost (in terms of time or memory, or both) of a CoTFormer, with the total number of tokens growing linearly with $n_{\\text{repeat}}$, can possibly be larger than that of a standard Transformer with twice as many layers.\\nThis raises the question of whether CoTFormer actually pushes forward the Pareto frontier of accuracy and cost; to support such a claim, it is necessary to compare CoTFormer's accuracy-cost curves with those of standard Transformers (not just Block Universal Transformer).\\nWithout clear evidence of its advantages over standard Transformers, the additional complexity overhead to code and infrastructure might further hinder the adoption of CoTFormer in future research or applications.\\n\\n- The results of downstream performance in Appendix B have limited implications, as discussed by the authors in Line 725. 
\\n For example, all scores for MMLU are close to 25%, namely the accuracy of randomly picking option A/B/C/D.\\n\\n- The current work only contains end-to-end performance (perplexity or scores) on some common datasets and benchmarks.\\n There is no intermediate empirical result (except for Figure 5) or synthetic task, like those in the original paper of Universal Transformers (Dehghani et al., 2019), for truly understanding when, why and how CoTFormer works or fails.\\nThe authors might consider visualizing the attention patterns of CoTFormer, or designing synthetic tasks that highlight CoTFormer's fundamental advantages over standard Transformers or Universal Transformers.\", \"questions\": [\"Typo in Line 114, \\\"similar the\\\" --> \\\"similar to the\\\"\", \"Is it possible to convert a standard pre-trained Transformer to a CoTFormer via a post-training or fine-tuning phase, which can be much more efficient than pre-training a CoTFormer from scratch?\", \"I can't see an obvious way of doing this, since the behavior of a CoTFormer deviates significantly from that of a standard Transformer.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"reviewer-author discussion phase ending\", \"comment\": \"Dear reviewers,\\n\\nAs we near the conclusion of the reviewer-author discussion phase, I wanted to kindly follow up to see if you\\u2019ve had a chance to review the author responses on your comments. Could you confirm that you\\u2019ve read it and, if needed, update your review and scores accordingly?\\n\\nThank you for your time and effort!\\n\\nYour AC\"}", "{\"summary\": \"This paper introduces CoTFormer, a novel transformer architecture that draws inspiration from chain-of-thought (CoT) reasoning. The key insight is recognizing that CoT differs from simple weight-tying in how attention operates across intermediary reasoning steps. 
The authors leverage this insight to develop an architecture that allows tokens to attend to representations from all previous \\\"thought\\\" steps, leading to improved performance compared to baseline approaches like Block Universal Transformers. Additionally, they propose an adaptive computation mechanism that allows dynamic allocation of computational resources at inference time.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"*The application of deploying LLMs to storage-constrained devices like mobile phones is relevant and timely.\", \"The proposed CoTFormer architecture (Figure 1(c)) effectively translates the CoT principle into architectural design, showing clear improvements over baseline approaches while maintaining parameter efficiency through weight sharing (Section 3.1).\", \"The architectural tweaks introduced in Section 3.3, particularly reserved layers and layer normalization after each repeat (LN-CoTFormer), prove crucial for achieving state-of-the-art results.\", \"Addition of depth embedding (Section 4.2) shows notable improvements in the adaptive setting.\", \"While not outperforming a FLOP-matched non-repeated transformer, the authors improve upon existing parameter-matched weight sharing baselines.\", \"The proposed architecture and adaptive repetition method are described clearly.\"], \"weaknesses\": [\"While Section 3.2 provides empirical evidence, the theoretical understanding of why CoTFormer works better could be deeper. Through analysis of attention patterns, we observe that tokens in later repeats tend to focus heavily on earlier representations that capture key contextual information, suggesting the model learns to leverage complementary features detected at different processing stages. 
This selective attention to informative past representations may help explain why CoTFormer outperforms the baseline Block Universal Transformer, where such cross-repeat attention patterns are not possible.\", \"Could better connect to recent theoretical work on transformer expressivity discussed in Section 2.\", \"The sequence lengths that are used for training (256) are quite short relative to the lengths that are used for training modern language models and are shorter relative to common LLM evals and typical chatbot conversations.\", \"Performance gap between adaptive and fixed-depth CoTFormers under same compute budget (Section 5)\", \"Training efficiency of deeper layers could be improved (e.g. increasing the gradient information during adaptive training), as shown by the analysis of router weights distribution (Figure 5)\"], \"questions\": [\"I\\u2019m confused by Figure 1(c): it seems to indicate that the earlier token representations (i.e. the red rectangles) are reprocessed by the model to make new token representations. But Section 3.1 seems to contradict this and instead describes that these earlier representations are only used as context in attention.\", \"It would be helpful to state the total parameter counts of the models used in each experiment, as well as the total number of training tokens in each experiment (either in a table or in the prose describing experimental setup).\", \"471: It would be helpful if the authors provided more details on their \\u201cefficient implementation\\u201d, and specifically how the authors are using a non-causal FlashAttention kernel to implement their proposed method.=\", \"How do position embeddings work with the added interleaved tokens? Are the interleaved tokens given the same position id as the original tokens they came from, do the position ids change between repetitions, or something else?\", \"Do the authors have any intuitions as to how their method behaves as the width of the model changes? 
It appears to be held constant across all experiments.\", \"402: what does it mean to \\u201cactivate a prefix of repeats\\u201d? is this the fixed depth baseline that is referenced in Figure 4?\", \"How does mixture of repeats work during, for example, batch size 1 transformer decoding, where there is only a single token being processed through the model?\", \"Below are some thoughts that might be helpful but are not critical to give insight into ways that might improve the paper.\", \"Consider analyzing attention patterns and strengthening theoretical connections to transformer expressivity research (building on 3's architecture analysis).\", \"Explore sparse variants to improve scaling for longer sequences beyond 8192 (extending the computational analysis in 3.2).\", \"Focus on improving training efficiency, particularly for deeper layers and adaptive computation (addressing limitations discussed in 5).\", \"Develop specialized attention implementations for better computational performance (following the implementation discussion in 5).\", \"Expand evaluation to include longer sequences and broader comparisons with other adaptive approaches (extending the experimental work in 4.3).\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes a new model architecture called CoTFormer that improves the Block Universal Transformer by providing intermediary representations from previous repeats in the attention. Besides CoTFormer architecture, the paper also proposes a training approach called Mixture of Repeats that varies the number of model passes for individual tokens based on their difficulty. Results show that CoTFormer substantially improves accuracy and inference computation efficiency over Block Universal Transformer.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. 
The CoTFormer architecture and Mixture-of-Repeats approach effectively improve the performance and efficiency of the Universal Transformer.\\n2. The evaluation of downstream tasks illustrates the CoTFormer's potential to surpass the standard Transformer.\", \"weaknesses\": \"1. Possible misuse of technical terms: Chain-of-thought is a prompting technique. The process illustrated by Figure 1 (a) is called auto-regressive, which is orthogonal to CoT. Could the authors clarify how the CoTFormer model architecture model relates to the CoT prompting? Could CoT prompt be applied to CoTFormer model?\\n2. The model architecture is not clearly explained. Specifically, the meaning of different colors in Figure 1 is vague. Why are there no yellow tokens in Figure 1 (b) and (c)? The figure can be more clear if the caption explains the reason for the absence of yellow tokens and the meaning of different numbers of tokens.\", \"questions\": \"1. Figure 2 shows the inference FLOPs vs. Perplexity. However, it cannot suggest better \\\"scaling properties of CoTFormers\\\" (quote Line 257 of the paper) because scaling properties should be suggested by the training FLOPs vs Perplexity following Kaplan et al.[1]. Could you provide the training FLOPs vs. Perplexity plot for Figure 2?\\n2. Could you add the standard Transformer to Figure 2?\\n3. The paper claims that \\\"The growth in computation cost is actually much less noticeable\\\". Could you provide the real measurement of computation cost in terms of memory footprint (Figures 2 and 3 only show FLOPs)? \\n\\n**After discussion period, questions were addressed by the authors:\", \"answer_1\": \"Keep all the other factors constant, the scaling behavior with respect to the training FLOPs still holds.\", \"answer_2\": \"The accuracy of the standard Transformer in Table 1 can indicate the distance between the CoTFormer and the standard Transformer. 
Therefore, it is necessary to add the standard Transformer to Figure 2.\", \"answer_3\": \"Theoretically, the memory footprint is the same for CoTFormer and Block Universal.\\n \\n\\n[1] Kaplan, Jared, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. \\\"Scaling laws for neural language models.\\\" *arXiv preprint arXiv:2001.08361* (2020).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank the authors for the response. I agree the backward FLOPs are about twice the inference FLOPs. However, training FLOPs = backward FLOPs * the number of tokens trained. Current inference FLOPs vs. Perplexity certainly demonstrates some scaling properties. I would adjust my claim to be that training FLOPs vs. Perplexity could suggest more scaling properties. I agree that optimizing block universal transformers can be justified as an interesting research problem. However, the distance between CoTFormer and the standard Transformer should at least be demonstrated from an \\\"upper-bound\\\" analysis perspective. I have raised the score but I hope these clarifications make sense to the authors.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"The paper presents CoTFormer, a novel Transformer architecture that leverages the Chain-of-Thought mechanism to enhance model performance while allowing for budget-adaptive computation at inference. CoTFormer enables intermediate tokens to be accessible, improving accuracy without significantly increasing computational costs. The authors further propose an adaptive training method that dynamically allocates computational resources based on the needs of individual tokens. 
Empirical results demonstrate that CoTFormer outperforms existing models, such as the Block Universal Transformer, while maintaining a smaller model size.\\n\\n(Note: Thank the authors for their clarification; that addressed some of my concerns. I've adjusted the score accordingly)\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The architecture is novel and the authors made smart observations with respect to how CoT can access previous tokens.\\n2. The problem is very practical and of high interest to the community, especially with limited computation resources\\n3. The paper is overall well-written and organized.\", \"weaknesses\": \"1. The paper could have a more detailed discussion on the scalability of the architecture, with respect to larger models and higher sequence lengths, since the paper discusses that the attention computation is not the bottleneck.\\n2. The paper studies the performance of CoTFormer on a particular dataset; would be interesting to see the performance on other datasets\\n3. The paper could have benefited from a more thorough theoretical analysis of CoTFormer, especially with the number of repeats compared to the block universal transformer\", \"questions\": \"1. Is the compute budget a hyperparameter to tune to achieve an optimal balance between accuracy and computation?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThank you for replying to our rebuttal and clarifying your major objections. We think the following comments will help understand the importance of our work despite some shortcomings that we discussed in the limitation section of our paper as well.\\n\\n1. You mentioned the claim that universal transformers are Turing complete. 
However, this is merely a theoretical argument that does not bridge to practice as being Turing complete requires the model to be applied an arbitrary number of times, which is not the case for the current models. While the Universal Transformer paper describes a goal we aim to reach, that goal has not yet been fully reached. For example, as you rightly mentioned, efficiency (the computation cost required to reach a given perplexity) is limiting the wide adoption of universal transformers. Thus, developing more efficient recurrent architectures is still an active field of research, and CoTFormer\\u2014by moving the Pareto frontier forward\\u2014is another step in this direction. With your argument, any future refinement of the universal transformer architecture would be discarded. We do not believe that is the correct approach.\\n\\n\\n2. We note that if you significantly increase a Block Universal Transformer\\u2019s width (multiplying the width by the number of repeats of CoTFormer), theoretically you can represent CoTFormer with a Block Universal Transformer (e.g. you can just concatenate all tokens generated by CoTFormer). As such, the difference between Block Universal Transformer and CoTFormer is not that there are tasks where a Block Universal Transformer gets 0% accuracy and CoTFormer gets 100% accuracy. However, going to such extreme widths would make the model significantly more expensive. Additionally, we gain in terms of adaptiveness when using CoTFormer, as we can vary the number of repeats per token. The structure of CoTFormer very naturally adapts to this highly desired setting whereas for universal transformers, hot-fixes such as copying the state forward need to be used. \\n\\n3. To draw a parallel, it is known that dense transformer models are better than Mixture-of-Expert (MoE) models with similar number of total parameters, yet MoEs have other benefits that make them desirable leading to them being widely adopted by the community. 
The same holds for CoTFormer, which is more efficient than the previous universal transformer, and allows adaptivity whereas the standard architecture does not. The goal is therefore to provide a more efficient architecture **with inference compute-adaptiveness capability** that can obtain better perplexities. \\n\\n4. Toy tasks used in universal transformer\\u2019s paper are so simple that the model is sometimes obtaining 100% accuracy on those tasks. That is why we focus on the harder target task (next token prediction in general text). Usually this is considered a positive feature of a work. We are confused why you would consider results on a toy task more appealing. \\n\\nWe hope that in light of the above comments, your concerns will be addressed and that you would increase your score.\\n\\nThank you.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nWhile we appreciate your consideration of our work, upon reading your review, we do not understand your justification behind rejecting our work with such high confidence. We hope the following comments clarify our contributions further and earnestly ask you to consider raising your score or provide reasons that have so strongly compelled you to reject our work. \\n\\n1. Chain of thought is a prompting technique that relies on auto-regressive decoding. Figure 1 is demonstrating how each method works, comparing what happens in CoTFormer with what happens when a Chain of thought technique is used on a normal Transformer. The goal of this figure is to demonstrate the difference between generating CoT using auto-regressive decoding (Figure 1a) and repeatedly applying the same model (block universal, Figure 1b). In the former, a token is fed through the model to generate a new token, which is appended to the sequence, and itself fed again to the model, generating a new token, etc. This is conceptually similar to feeding the same token through multiple repetitions of the model. 
In the latter approach, each token is fed to multiple repetitions of the model, yet\\u2014unlike in the former approach\\u2014the intermediary representations of a given token cannot attend to past representations of that token. This observation led us to the design of CoTFormer. The colors are meant to distinguish tokens based on how many steps were used to generate them. For chain of thought (Figure 1a), the figure shows a process of generating an answer after 3 steps of decoding with the model whereas for block universal and CoTFormer, the figure only shows 2 repeats. That is why there are no yellow colors for CoTFormer and Block Universal. We understand that this can be slightly confusing and will remove the third step from CoT so all the figures correspond to 2 steps of using the model. That being said, given that this is the only weakness mentioned in your review, we strongly object that this would be grounds for rejection. \\n\\n3. Regarding training vs inference FLOPs, we note that we are plotting against the FLOPs for a forward pass of sequence length 256. The backward pass\\u2019s FLOPs are usually within a constant factor of the forward pass. As such\\u2014given we are using a similar number of training iterations, batch sizes, etc. for all methods\\u2014training FLOPs and inference FLOPs are not that different. Note that we are not plotting against single token decoding FLOPs. We will add this remark to our next revision. Let us know if more clarifications are needed on this point. \\n\\n4. The goal of this work is to point out the difference between doing CoT using auto-regressive decoding, and simply applying the model multiple times (block universal transformer). We do not claim that we are doing better than a larger standard Transformer and clearly discuss the benefits (such as adaptive compute) and limitations (needing 5 repeats to match 48 layer Transformer) in the paper (Section 5). 
However, compared with architectures that rely on repeatedly applying the same model, CoTFormer still improves significantly over block universal transformers due to attention\\u2019s access to intermediary representations. To draw a parallel with another recent line of research, it is known that dense transformer models are better than Mixture-of-Expert (MoE) models with similar number of total parameters, yet MoEs have other benefits that make them desirable leading to them being widely adopted by the community. \\n\\n5. Please note that the referred sentence about the growth of computation cost of attention being less noticeable is about much larger models that have large feed forward layers. Training these models is not possible for us due to resource limitations. However, we point out that when using flash attention, the attention\\u2019s memory cost increases linearly (not quadratically) with length. Therefore, theoretically when factoring in the KV cache, the memory requirements of Block Universal and CoTFormer are the same. \\n\\nWe believe we have addressed (i) the misunderstanding regarding the link between our method and CoT, as well as (ii) the performance of our method in terms of training FLOPs, and finally (iii) compared the memory footprint of our method with our block universal transformer baseline. As those fully cover the concerns raised in your review, we hope you will either raise your score or provide further clarifications justifying your score.\"}", "{\"comment\": \"Dear Reviewer,\", \"we_hope_the_following_comments_address_your_concerns\": \"1. The limitation you described is discussed in Section 5 of the main paper. However, the goal of this paper is to demonstrate an inconsistency between block universal (i.e. applying the same model multiple times) and doing chain of thought (i.e. 
applying the same model multiple times and attending to past intermediary representations) which was otherwise missed and to show that it has significant effects. That is why our main comparison is with block universal, to show that allowing attention to previous intermediary states pushes the Pareto frontier forward.\\n\\n2. While it is true that the results on downstream tasks are not very strong, they still show improvements of CoTFormer over Block Universal Transformer. These results are meant to complement our results in the main text. To obtain much better performance on these tasks, we need much larger models that are also trained for much longer which is not feasible in our academic settings.\\n\\n3. We are evaluating the perplexity of language modeling which is the main target for training many large models. We are also showcasing performance on downstream tasks in the appendix. As such, we do not understand why training on a synthetic task would help. Additionally, understanding how Transformers or CoTFormers work is outside the scope of our work which is to point out the benefits of having attention access to intermediary representations and demonstrate new methods to achieve adaptive compute. Such analysis would of course be useful in future work.\\n\\n4. We did not investigate fine-tuning a model and focused on training from scratch. We agree that it is not obvious to fine-tune a model as a CoTFormer and investigation into whether it is possible, for example by starting from a pre-trained model and using CoTFormer training with it, would be interesting in the future. However, we emphasize that new methods for training models from scratch are useful given that they enable additional benefits such as adaptive compute in this case.\\n\\n\\nWe hope the above comments fully address your concerns. We ask that you kindly consider raising your score or otherwise please share the major objections that compel you to reject our work. 
We remain at your disposal if you have any additional comments or questions.\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThank you for the clarification and updating your score.\\n\\nRegarding the training FLOPs, note that we train all models for the same number of steps, the same batch size, and the same sequence length. That means the number of tokens used for training is the same for all models and does not affect our results (it's a constant scaling of the x-axis for all points). \\n\\nRegarding comparison with standard transformer, we have reported the accuracy of standard transformer in Table 1 which can be used to assess the distance between CoTFormer and standard transformer. We also explicitly discuss this distance and make improvements to the architecture in Section 3.3 and discuss limitations in Section 5.\\n\\nWe believe the above comments fully address your concerns. Therefore, it is still unclear why you think a rejection would be warranted. We hope that you would consider raising your score further or that you would please let us know how we should improve the paper further.\"}", "{\"comment\": \"Dear Reviewer,\", \"we_provide_the_following_answers_to_your_questions\": \"1. We run our experiments on both 12 layer and 24 layer CoTFormers and compare with up to 48 layer Transformers to showcase consistency of our results at different scales. However, scaling up the model further requires much higher computation power. In the revision of our paper, we provide results for 512 sequence length in the appendix which shows that the benefits of CoTFormer persist at these lengths as well.\\n\\n2. In order to limit the number of experiments and compute costs, we focus on the OpenWebText2 dataset; we emphasize that this is a fairly generic and large dataset and is not task specific, similar to the datasets used for state of the art pre-training. \\n\\n3. We are not sure what kind of theoretical analysis is requested here. 
The goal of this paper is to highlight a clear advantage of CoTFormer over Block Universal due to access through attention, which has not been pointed out in prior work. This is evidenced by empirical results. Any theoretical analysis remains far outside the scope of this work.\\n\\n4. In the adaptive CoTFormer, the compute budget can be decided **at inference** and can be tuned depending on the cost of compute for the user. Of course, more compute usually leads to better accuracy (as shown in Fig. 4). \\n\\nWe hope the above answers adequately answer your concerns. However, looking at your review, we do not understand why you are suggesting a rejection of our work as most of the raised points are not major objections. If our answers have adequately answered your concerns, please consider raising your score or let us know any major objections that compel you to suggest a rejection so we can also answer them.\"}"
] }
7ienVkNf83
EReLELA: Exploration in Reinforcement Learning via Emergent Language Abstractions
[ "Kevin Yandoka Denamganai", "Tim Bradley", "Pierluigi Vito Amadori", "Sondess Missaoui", "Guy Moss", "James Alfred Walker" ]
The ability of AI agents to follow natural language (NL) instructions is important for Human-AI collaboration. Training Embodied AI agents for instruction-following can be done with Reinforcement Learning (RL), yet it poses many challenges, among which is the exploitation versus exploration trade-off in RL. Previous works have shown that NL-based state abstractions can help address this challenge. However, NL descriptions have limitations in that they are not always readily available and are expensive to collect. In order to address these limitations, we propose to use the Emergent Communication paradigm, where artificial agents learn an emergent language (EL) in an unsupervised fashion, via referential games. Thus, ELs constitute cheap and readily-available abstractions. In this paper, we investigate (i) how EL-based state abstractions compare to NL-based ones for RL in hard-exploration, procedurally-generated environments, and (ii) how properties of the referential games used to learn ELs impact the quality of the RL exploration and learning. We provide insights about the kind of state abstractions performed by NLs and ELs over RL state spaces, using our proposed Compactness Ambiguity Metric. Our results indicate that our proposed EL-guided agent, entitled EReLELA, achieves similar performance to its NL-based counterparts without their limitations. Our work shows that RL agents can leverage unsupervised EL abstractions to greatly improve their exploration skills in sparse reward settings, thus opening new research avenues between Embodied AI and Emergent Communication.
[ "Emergent Communication", "Exploration", "Reinforcement Learning", "Abstraction", "Emergent Languages", "Natural Languages" ]
Reject
https://openreview.net/pdf?id=7ienVkNf83
https://openreview.net/forum?id=7ienVkNf83
ICLR.cc/2025/Conference
2025
{ "note_id": [ "z4PkdaGT1c", "uUfzjMlxe9", "tEZgFKcWvA", "rvCydX96g0", "rbt8Vvrln3", "qF2UTOY0is", "TAuMz4TEki", "QLdlQQBY3N", "PbQwFCxnZt", "Oq61GS9Lal", "ArWnHhgfJx", "1MwV8gWuzq" ], "note_type": [ "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "meta_review", "official_review", "official_comment", "official_comment" ], "note_created": [ 1733002236583, 1732794052127, 1737524073828, 1732990187485, 1732791998896, 1729098883843, 1730060222445, 1732787657419, 1734620572967, 1730931277931, 1732791969083, 1732787619298 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10736/Reviewer_2bD4" ], [ "ICLR.cc/2025/Conference/Submission10736/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10736/Reviewer_4A62" ], [ "ICLR.cc/2025/Conference/Submission10736/Authors" ], [ "ICLR.cc/2025/Conference/Submission10736/Reviewer_us7n" ], [ "ICLR.cc/2025/Conference/Submission10736/Reviewer_2bD4" ], [ "ICLR.cc/2025/Conference/Submission10736/Authors" ], [ "ICLR.cc/2025/Conference/Submission10736/Area_Chair_N4tD" ], [ "ICLR.cc/2025/Conference/Submission10736/Reviewer_4A62" ], [ "ICLR.cc/2025/Conference/Submission10736/Authors" ], [ "ICLR.cc/2025/Conference/Submission10736/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Hi Authors,\\n\\nThank you for considering my suggestions and making edits that contribute to the paper's readability. Its current version is slightly more than the 10-page limit I think.\\n\\nI agree with R1(4A62)'s suggestion that less is more for presenting empirical results, and the same rule of thumb applies to formalism if I may add. For example, two pages on CAM seems too much. I would've placed only a clear algorithm box and one intuition paragraph. 
The authors sometimes elaborate too much on details and compromises ('despite', 'nevertheless', 'that being said') that prevent me from understanding the big picture. It is perfectly okay to move those behind-the-scene rationales or analyses to the appendix, which also helps highlight your key contributions.\\n\\nI remain conservative about the presentation quality (clarity of figures, preciseness of writing for the technical section). This limits me from confidently evaluating the soundness, understanding the results, and asking well-informed questions. There is no lower confidence level I can assign. \\n\\nIn recognition of the authors' edits, the novelty (established before the technical section), and the well-known difficulty of getting emergent communication to work, I tend to raise the overall score from 3 to 4 if there were one. AC: Please note this and my low confidence score when considering my evaluation.\"}", "{\"title\": \"Reply 1\", \"comment\": \"We thank the reviewer for their time and thorough review and constructive comments.\\nWe have addressed all minor concerns in the revised paper and we reply to main comments and the questions below.\\n\\n# Q1: \\\"What is the precise definition of CAM?\\\"\\n\\nThis concern was shared with Reviewer 4A62, we have provided a detailed reply to their review and invite you to centralise the discussion there.\", \"we_provide_here_some_specific_details\": \"We have added an algorithm to fully clarify the details of the CAM formulation, and reframed the Formalism paragraph with a more top-down narrative that we hope will be effective in enhancing the clarity but we are looking forward to more specific feedback if you have any further ideas about how to improve the matter.\\n\\nWe have also added a CAM Distances paragraph at the end to clarify how we use the CAM measures in our analysis in Section 4.2.\\n\\n# Q2: \\\"How do parallel experiments (i.e., different curves in Figure 2) differ exactly?\\\"\\n\\nThank you 
for your advice, as suggested, we have added a table summarising the different tested agents with their relevant parameters in Table 1.\\n\\nWe have also further clarified what **shared** and **agnostic** mean at the end of Section 3.1.\\n\\nThe parameters $\\\\beta_1,\\\\beta_2$ are not explained in the main text but solely in appendix G.1, as mentioned in the Agent paragraph of Section 4. We have added a mention pointing towards the explanations of the two hyperparameters.\\n\\nWe hope that these three additions fully answer your concerns on what the different settings are, but please let us know if you have any further advice to help us improve the clarity of the matter.\\n\\n# Comment 1: \\\"Undefined H3.1 and H3.2 in lines 337-338\\\"\\n\\nThank you for catching this issue, we have now corrected the matter and, as detailed in our answer to Reviewer 4A62, we have also improved the clarity of our Hypotheses paragraph by framing them with more precision so that they are testable.\\n\\nWe hope that this clarifies the issue, but please let us know if you have any further advice.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"I appreciate the edits that have been made, and I think they improve the paper. In particular, I think the explanation of CAM is much better, and I now have a better idea of what that is. I think the presentation of the experiments has also improved (e.g., Table 1), but the plots are still noisy and a little hard to follow. 
The main thing I would recommend on this front is to limit the varieties of EReLELA that are being experimented with---introduce as few varieties as are necessary to show the effectiveness of the result and perform the related ablation studies.\\n\\nI will move my overall assessment from 3 to 5, but no higher because the presentation of the experiments and their results does not leave me with a clear idea of what to take away (I would need to reread the full paper to be certain about this, but having skimmed the edits, I think this may still be an issue).\", \"miscellaneous_comment\": \"CAM seems to be similar to Slow Feature Analysis (e.g., https://ieeexplore.ieee.org/document/9881217); referencing SFA could help contextualize things (only if it is actually related; if I am missing the point, then no need to reference SFA).\"}", "{\"title\": \"Reply 1 (Part 2/2)\", \"comment\": \"# Q2: \\u00a0\\\"What are the experiments showing? Less is more when it comes to these graphics and presenting these results.\\\"\\n\\nWe have enhanced our Hypotheses paragraph in order to try to clarify what the experiments must show and have taken greater care in their formulation towards being precision to be testable, please find it reproduced below for tractability:\\n\\n\\\"We seek to validate the following hypotheses.\\nFirstly, we consider whether a simple count-based approach over (synthetic) NL abstractions is sufficient to solve hard-exploration RL tasks **(H1)**.\\nWe refer to the corresponding agent using (synthetic) NL abstractions to compute intrinsic rewards as SNLA.\\nWe carry on with the hypothesis that a simple count-based approach over EL abstractions is similarly sufficient **(H2)**.\\nIn doing so, we will also investigate to what extent do ELs compare to SNL in terms of abstractions, using our proposed CAM. 
\\nUsing our proposed CAM, we consider two state abstractions to be aligned when their CAM distance is low.\\nAs the *MultiRoom-N7-S4* environment only shows differently-coloured doors in a partial observation context, the most important type of state abstraction is related to the colour of visible objects.\\nOn the other hand, since the *KeyCorridor-S3-R2* environment requires picking up an object behind a (unique) locked door, after having unlocked said door with a key, the most important type of state abstraction is related to the shape of visible objects.\\nWe consider a state abstraction to be meaningful in a given environment if it is aligned with the language oracle's abstraction that is the most important in said environment.\\nThus, we expect ELs to perform meaningful abstractions **(H3)**, i.e. being aligned with the colour-specific language's abstractions in the *MultiRoom-N7-S4* environment, and being aligned with the shape-specific language's abstractions in the *KeyCorridor-S3-R2* environment.\\\"\\n\\nWe hope that those reformulations are sufficient towards streamlining the presentation of the results section, as we are not sure how we could further clarify our main 2 claims which are already addressed in 2 concise subsections, but we are looking forward to any specific advice if you have any more.\\n\\nWe are also further clarifying what **shared** and **agnostic** mean at the end of Section 3.1.\\n\\nPlease let us know if you find this sufficiently clear and precise, and/or whether you see ways to further enhance it all.\\n\\n## Synthetic NL:\\nWe note also that we have clarified the issue around synthetic natural languages by adding the following discussion and renaming the originally 'natural language oracle' as 'synthetic natural language oracle' in order to clearly emphasise both the way those descriptions are obtained and the kind of grammar they rely on:\\n\\n\\\"**Synthetic Natural Language Oracles.** Like Tam et al. 
(2022), we employ language oracles that provide NL descriptions/captions of the state. Like them, we mean to use the adjective \\u2018natural\\u2019 to specify the quality and form of the caption rather than the process in which it is obtained (i.e. programmatically as opposed to having human beings producing them). Nevertheless, in order to make the distinction clear, we will refer to those oracles as Synthetic Natural Language (SNL) oracles.\\n\\nThat being said, we mean to emphasise that our considerations and results are agnostic to the process through which the NL captions are obtained, as we only indeed care about their quality and form, i.e. which vocabulary and grammar are being used, which here refers to that of the English natural language. We flag this as a limitation of our study because using NL captions produced from human beings would have yielded a more varied and rich distribution, which would possibly impact the resulting RL agent\\u2019s performance (presumably detrimentally). We make the choice here to only use synthetically-generated NL captions because they can be generated \\u201caccurately and reliably, and at scale\\u201d (Tam et al., 2022).\\n\\nOur implementation of SNL oracles simply describes the visible objects in terms of their colour and shape attributes, from left to right on the agent\\u2019s perspective, whilst also taking into account object occlusions. For instance, around the end of the trajectory presented in Figure 6, the green key would be occluded by the blue cube, therefore the SNL oracle would provide the description \\u2018blue cube red cube\\u2019 alone. We also implement colour-specific and shape-specific language oracles, which consist of filtering out from the SNL oracle\\u2019s utterance the information that each of those languages abstracts away, i.e. 
removing any shape-related word in the case of the colour-specific language, and vice-versa.\\\"\\n\\nPlease let us know if these changes are satisfactory, and/or whether you have any advice to help us further improve those matters.\"}", "{\"summary\": \"This paper investigates to what extent referential games, and the resulting emergent language abstractions, can be used to derive intrinsic rewards in hard exploration problems of reinforcement learning agents.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"The research question of the paper is creative: can emergent language abstractions be used to help RL agents in hard exploration problems.\", \"weaknesses\": \"A) I don\\u2019t see empirical gains of using emergent language abstractions: Looking at Figure 2, it seems to me that the performance of the natural language abstraction agent (gray) and the emergent language abstraction agent (green) are within noise levels of each other. Thus, I find it difficult to conclude from the experiment that emergent language abstractions are important, in particular since both the natural language abstraction agent and the emergent language abstraction agent are using count based exploration terms as well. These should be ablated.\\n\\nB) Even if there were empirical gains, I would want to see a comparison to state-of-the-art intrinsic reward methods for hard exploration problems based on natural language abstractions to see the point empirically proven that emergent language abstractions are supposedly preferable. In particular, I would expect comparisons to \\n- Mu et al. Improving Intrinsic Exploration with Language Abstractions. NeurIPS 2022. https://doi.org/10.48550/arXiv.2202.08938\\n- Klissarov et al. (2023). Motif: Intrinsic Motivation from Artificial Intelligence Feedback. arXiv. https://doi.org/10.48550/arXiv.2310.00166\\n- Zhang et al. OMNI: Open-endedness via Models of human Notions of Interestingness. ICLR 2024. 
https://arxiv.org/abs/2306.01711 \\n\\nC) Related to the above, I believe the authors need to evaluate on harder exploration problems, such as MiniGrid\\u2019s KeyCorridor-S3-R3 and MultiRoom-N10-S10, or MiniHack (Samvelyan et al. MiniHack the Planet: A Sandbox for Open-Ended Reinforcement Learning Research. NeurIPS 2021. https://doi.org/10.48550/arXiv.2109.13202). Moreover, I would like to see experiments beyond gridworlds, e.g., on Vizdoom (c.f. Henaff et al. Exploration via Elliptical Episodic Bonuses. NeurIPS 2022. https://doi.org/10.48550/arXiv.2210.05805).\\n\\nD) I believe it would be important to add a tabula-rasa RL agent, as well as only RND baseline to Figure 2.\\n\\nE) p8 Figure 3 looks to me like the experiments did not finish in time.\", \"questions\": [\"Abstract: It\\u2019s not entirely clear to me what limitations of NL-based counterparts you are referring to here.\", \"p1: \\u201cRND \\u2026 which can be difficult to deploy\\u201d \\u2014 Why are they difficult to deploy? RND is a very straightforward intrinsic reward method.\", \"From Figure 1 it looks like the intrinsic reward is only generated from the speaker. Why shouldn\\u2019t one also derive the intrinsic reward from the listener?\", \"Comments\", \"p4 Figure 1 is too small to read. Same goes for other figures in the paper (e.g. Figure 2)\", \"p7 Figure 2 caption: explain the different methods variants in more detail.\"], \"what_the_authors_would_have_to_demonstrate_to_see_an_improved_rating_from_me\": \"Demonstrate clearer gains of emergent language abstractions over natural language abstractions (A) on harder exploration problems (C) while also comparing to state of the art natural language abstraction methods (B) and adding tabula-rasa RL, as well as RND, baselines (D). 
Present results of finished experiments where each method is run for the same number of steps (E).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors hypothesize that emergent languages can benefit RL agent exploration to the same extent as expensive natural language descriptions. They propose a method to learn such emergent languages via reference games to induce intrinsic rewards jointly with the RL objective (EReLELA). They provide evidence that the learned emergent language is as useful as, and even more compact than, natural language oracles.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. Interesting method. The proposed CAM metric made a decent attempt at measuring the quality of abstractions, as far as I can understand its definition.\\n2. Contextualization of the problem is clearly articulated in sec 1 and 2.\\n3. The analysis about Zipf's Law of Abbreviation and the learned emergent language is insightful.\\n4. The ablation studies seem thorough (if only I could interpret how they precisely differ in context).\", \"weaknesses\": \"My primary concern is presentation quality. Improved clarity can significantly benefit the readability of this paper, as well as my understanding of the main method.\\n1. Typo/abbreviation mistakes: line 3 of the abstract \\\"be done ne(?) with Reinforcement Learning\\\", start of the first paragraph of the introduction, line 42 \\\"in effect\\\" (?), single quotes in line 43, \\\"it dynamics\\\" -> \\\"its dynamics\\\" in line 98, and so on.\\n2. What's the superscript -1 on line 271?\\n3. Figure readability is sadly discounted by low resolution, small font size, and the lack of in-figure legends. Personally I find it hard to parse the results without clear legend names and matching color coding, even with captions.\\n4. Undefined H3.1 and H3.2 in lines 337-338\\n5. 
$\\\\beta_1$ and $\\\\beta_2$ in Figure 2 captions seem out of the blue. Are they defined anywhere in the main text? Why do they imply \\\"shared\\\" and \\\"agnostic\\\"? Perhaps a table comparing configurations of parallel runs? \\n6. I appreciate the intuition, but I struggle to understand the formulation of the proposed CAM in sec 3.2. I think neither eq 4 nor the relative ambiguity of a language is CAM, but I cannot find exactly how CAM is computed in the main paper.\", \"questions\": \"1. What is the precise definition of CAM?\\n2. How do parallel experiments (i.e., different curves in Figure 2) differ exactly?\\n\\nI am happy to raise the score if these questions are addressed with clarity.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply 1 (part 2/2)\", \"comment\": \"# Comment (A)+(B): \\\"Demonstrate clearer gains of emergent language abstractions over natural language abstractions [...] while also comparing to state of the art natural language abstraction methods\\\"\\nWe thank you for listing clearly what we could address to raise your appreciation of the paper.\\n\\nNevertheless, firstly, we disagree that it is necessary for the paper's results to demonstrate clearer gains of EReLELA compared to approaches using (synthetic) NL abstractions. Indeed, the paper does not claim superiority of EL-guided RL exploration methods over NL-guided ones, but rather solely that, in this minimal design, EL-guided RL exploration methods reach similar performance to NL-guided ones, and are therefore viable alternatives.\\nPlease let us know if that clarifies the issue, and whether you feel the paper should emphasise it further in any given part. 
\\n\\nSecondly, with regards to comparing to state-of-the-art NL-guided RL exploration methods, we agree that the comparison would strengthen our work, not towards showing superiority, but rather towards providing comparison grounds to put things into perspective. To that end, we are in the process of integrating and adapting to our codebase the codebases of [Mu et al., 2023] and [Raileanu et al., 2022]. We hope to include their results in our frameworks in the next revision of the paper.\\n\\nPlease let us know if this answer fully addresses your concerns here.\\n\\n## References:\\n[Raileanu et al., 2022]: Raileanu, Roberta, and Tim Rockt\\u00e4schel. \\\"RIDE: Rewarding impact-driven exploration for procedurally-generated environments.\\\"\\u00a0_arXiv preprint arXiv:2002.12292_\\u00a0(2020).\\n# Comment (C)+(D)+(E): \\\"on harder exploration problems [...] and adding tabula-rasa RL, as well as RND, baselines [...] Present results of finished experiments where each method is run for the same number of steps.\\\"\\n\\nWe assume that the request to show results on harder exploration problems is especially relevant if the paper were to claim superiority of EL-guided RL exploration methods over state-of-the-art NL-guided ones. \\nHowever, in light of the fact that our paper only seeks to show the viability of EL-guided RL exploration methods, we hope that you can now appreciate why we chose the environments we did, as we found them to strike a 'GPU-poor'-friendly balance between the difficulty of the exploration problem and training time.\\n\\nThat being said, experiments are currently running on *KeyCorridor-S3-R3* and *MultiRoom-N10-S10* as you proposed, along with tabula-rasa RL and RND baselines, but they are unfortunately not finished yet. We have not been able to include them in the current revision.\\n\\nPlease let us know if this reply addresses your concerns on the matter.\"}", "{\"metareview\": \"The reviewers all agree this paper is not ready for publication. 
The authors should focus on improving the clarity of their writing. Most reviewers were confused by the discussion and explanation of their method and evaluation framework. Moreover, the authors should be clearer about the settings in which they foresee their method being of benefit compared to baselines (which their proposed method does not clearly outperform).\", \"additional_comments_on_reviewer_discussion\": \"The reviewers' fair concerns around presentation quality, clarity of exposition, and comparison to similar previous methods were not sufficiently addressed by the authors' rebuttal.\"}", "{\"summary\": \"This paper introduces an algorithm for augmenting reinforcement learning\\nalgorithms with emergent communication-derived rewards that aid in tasks where\\nexploration is a difficult part of the task. This algorithm works by training\\nagents to play a referential game with observations from the environment; the\\nspeaker agent is then able to generate abstracted descriptions of the\\nobservations for the RL agent, which can encourage the agent to make new\\nobservations that are not trivially different. This algorithm is validated\\nwith a handful of experiments and a new metric, the \\\"Compactness Ambiguity Metric\\\" (CAM),\\nwhich quantifies the way in which the speaker agent generates abstract\\ndescriptions of the environment observations.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The major strength of the paper is that the EReLELA algorithm itself is\\npresented clearly and is well motivated by (1) the success of language-based\\nabstraction methods for RL-based exploration and (2) the potential of emergent\\ncommunication to produce learned, human language-like communication. 
I think\\nthis contribution is especially important on the emergent communication side of\\nthings because the field lacks practical applications of emergent languages,\\nand integrating it into an algorithm such as the one this paper introduces\\ncould not only be effective in its own right but be an effective demonstration\\nof the applicability of emergent communication methods.\", \"weaknesses\": \"The major weaknesses of this paper are twofold. First, CAM is not clearly\\ndefined and/or justified. It seems like it is a key analytical tool in the\\nempirical work of this paper, but its presentation did not give me a clear\\npicture of what it was doing either in theory or in practice.\\n\\nThe second also relates to clarity, namely the lack of clarity of the\\nexperiments themselves. The graphics themselves are quite noisy and refer to\\nsettings that are not described in detail (e.g., \\\"Agnostic STGS-LazImpa-10-1\\nELA+AccThresh=90+Distr=256+UnifDSS\\\"). Since the experimental settings are not\\nestablished at the beginning of the experiments section, I have very little\\nidea as to how to interpret the empirical results. Is there a baseline? Which\\none is the proposed method? Which other settings am I supposed to compare it\\nto? Since I cannot easily answer these questions reading this section of the\\npaper, I cannot determine what is learned about the proposed algorithm.\\nI think it could be the case that the experiments themselves already contain\\nthe requisite data for presenting an effective analysis of the algorithm, but\\nthose things would need to be presented more simply and methodically.\", \"questions\": \"My main questions derive from the _Weaknesses_ section above: What are the\\nexperiments showing? Less is more when it comes to these graphics and\\npresenting these results. Regarding CAM: what exactly is the metric? What are\\nthe inputs and outputs, precisely? 
Once this is clarified, is it the case that\\nCAM is actually measuring the things we want to measure? How do we validate\\nthis?\\n\\nIt is possible I could be convinced to raise my review scores if the authors\\nare able to streamline the presentation of the experiments (especially the\\ngraphics) _and_ the results are still substantive enough for the paper's\\nclaims. While I appreciate the thoroughness of the introductory sections,\\nI think they could be compacted to make room for a more extensive explanation of\\neach experiment. If the CAM and experiments sections of the paper had as much\\nclarity as the introduction, related work, and EReLELA sections, I would easily\\nrecommend acceptance.\\n\\n### Minor Comments\\n\\n- (Abstract) \\\"done ne\\\" -> \\\"done\\\"\\n- (1 Introduction) Typo at the very beginning?\\n- (1 Introduction) \\\"NLs oracle\\\" -> \\\"NL oracle\\\"\\n- (1 Introduction) In a sentence or two, why is it necessary to use\\n language-based abstractions? Wouldn't it be easier to represent things as,\\n say, an embedding or more formal structure? (I have an inclination as to\\n what the answer to this question is, but I think it should be touched on in\\n the text for clarity.)\\n- (Line 055) \\\"NLs, that are\\\" -> \\\"NLs, which are\\\"\\n- (Line 058) \\\"hard-exploration\\\" -> \\\"hard exploration\\\"\\n- (Line 065) What does \\\"aligned by not similar to\\\" mean?\\n- (Line 067) \\\"advantages _over_ their NL\\\"?\\n- (Line 090) The discussion of intrinsic versus extrinsic reward is a little\\n unclear (partially on the writing level). 
I can see what is being\\n communicated, but someone with slightly less RL background might have a more\\n difficult time.\\n- (Line 105) This is a good distinction to make.\\n- (Line 114) Extra space before \\\";\\\"\\n- (Line 122) \\\"entail to good exploration\\\": Not sure what this means.\\n- (Line 138) \\\"constraint\\\" -> \\\"constrain\\\"\\n- (Line 160) Space after end quote\\n- (Line 161) Use `\\\\citep`\\n- (Line 216) Extra space before \\\",\\\"\\n- (Line 228) Does \\\"may not be passed\\\" mean \\\"is not passed with a certain\\n probability\\\"? The phrasing \\\"may not\\\" is not clear here since it makes it\\n sound like it is \\\"not allowed to be passed\\\".\\n- (Line 276) This paragraph is difficult for me to follow.\\n - $i\\\\in[0,N-1]$ suggests that $i$ is a real number, but I believe it is\\n discrete. Using $i\\\\in\\\\{0, 1, \\\\dots, N-1\\\\}$ would be clearer.\\n - What is $\\\\lambda_i$?\\n - What is a \\\"time interval threshold\\\"?\\n - Using pseudocode might be clearer here (I don't think I follow it enough to\\n say this for sure, though).\\n - (Sec 3.2) What is the input and output of CAM? I get that it is creating\\n a discrete distribution based on utterances used to describe observations,\\n but what is the metric itself? Is the distribution the metric itself or is\\n it the entropy or the divergence from some baseline metric?\\n- While I appreciate explicitly naming the hypotheses, they are not stated with\\n enough clarity and precision to be testable. That is, how do we know\\n precisely when the hypothesis has been validated or not?\\n- (Sec 4.1, Fig 2) These are difficult to follow, especially with the colors\\n and the names, which have not been well specified. 
For example, I do not know\\n what \\\"shared\\\" or \\\"agnostic\\\" refers to in the architecture.\\n - Unless it is necessary, it would be good to reduce the number of\\n referential game settings that you report so as to minimize confusion.\\n- The \\\"natural language\\\" baseline should be called a \\\"synthetic language\\\" since\\n it is just programmatically generated and not gathered/derived from human\\n language in a meaningful way.\\n- The different text colors for the experimental settings are a bit distracting.\\n I think it would be better to come up with simple, easy-to-remember names for\\n each setting and use those without worrying about colors (aside from the\\n lines/legend on the plots themselves).\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Reply 1 (Part 1/2)\", \"comment\": \"We thank the reviewer for their time, thorough review, and constructive comments.\\nWe have addressed all of your minor comments and more in the revised paper; thank you again for the level of detail in that feedback.\\nWe reply to your main comments and the questions below.\\n# Q1: \\\"Regarding CAM: what exactly is the metric? What are the inputs and outputs, precisely? Once this is clarified, is it the case that CAM is actually measuring the things we want to measure? How do we validate this?\\\"\\n\\nWe have updated the CAM-related part in the revised paper; please let us know if it addresses your questions. For convenience, we reproduce below a summary of those changes:\\n\\n\\nFirstly, we have enhanced the intuition-building paragraph in Section 3.1, in order to further clarify the motivations for the CAM. Please let us know if it can be further improved.\\nSecondly, we have tried to clarify what the $\\\\lambda_i$ are by detailing their motivations further in the following paragraph:\\n\\n\\\"Next, we focus on the histogram that the metric returns. 
To sort compactness counts in this histogram, it is necessary to associate to each bin a partition of admissible compactness counts. Since compactness counts refer to time intervals, each bin of the histogram must refer to a range of time, between 0 and the maximum length $T$ of an RL trajectory/episode in the given environment. We assume that the start of the range associated with a given bin is the end of the range associated with the previous bin.\\nTherefore, we can na\\u00efvely associate to each bin $i \\in \\{0, 1, \\dots, N \\u2212 1\\}$ a time interval start $T_i$, defined relative to the maximal length $T$. This framing is shown in Equation 3, with \\u2308\\u00b7\\u2309 being the ceiling operator. It is obtained by partitioning the whole range with the second and last hyperparameters $(\\lambda_i)_{i\\in\\{0,1,\\dots,N-1\\}} \\in [0,1]^N \\; \\text{ such that } \\; \\forall(j,k), \\; j<k \\implies \\lambda_j < \\lambda_k$\\\"\\n\\n## What exactly is the metric? What are the inputs and outputs, precisely?\\nThe metric is described in as much detail as possible in the added Algorithm 1 of the revised paper. Please let us know if it answers your questions effectively. It takes as inputs the following:\\n\\n- $\\mathcal{D}$: Dataset of $N_{\\mathcal{D}}$ RL trajectories of length $T$;\\n- $\\text{Sp}_l$: Speaker agent for language $l$ being evaluated;\\n- $N$: Number of histogram bins;\\n- $(\\lambda_i)_{i\\in\\{0,1,\\dots, N-1\\}} \\in [0,1]^N$: partition hyperparameters;\\n\\nand it returns a histogram of compactness counts $H$.\\n\\n## Is it the case that CAM is actually measuring the things we want to measure? How do we validate this?\\n\\nWe performed internal validation of the CAM and presented it in Appendix E.1, but we appreciate that the original version of the paper did not emphasise it properly. 
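As a purely illustrative aside (hypothetical code, not our actual implementation), the input/output contract described above could be realised as sketched below. It makes one extra assumption that is not stated here: a compactness count is taken to be the length of a maximal run of consecutive observations that the speaker maps to the same utterance. Algorithm 1 in the revised paper remains the authoritative definition.

```python
import math

def cam_histogram(trajectories, speaker, N, lambdas):
    """Sketch of a compactness-count histogram (CAM-style).

    Assumption (ours, for illustration only): a compactness count is the
    length of a maximal run of consecutive observations that `speaker`
    maps to the same utterance.
    """
    T = max(len(traj) for traj in trajectories)   # max episode length
    # Bin i starts at T_i = ceil(lambda_i * T), mirroring Equation 3.
    starts = [math.ceil(lam * T) for lam in lambdas]
    hist = [0] * N

    def add(count):
        # place a run length into the last bin whose start it reaches
        for i in reversed(range(N)):
            if count >= starts[i]:
                hist[i] += 1
                return
        hist[0] += 1

    for traj in trajectories:
        utterances = [speaker(obs) for obs in traj]
        run = 1
        for prev, cur in zip(utterances, utterances[1:]):
            if cur == prev:
                run += 1
            else:
                add(run)
                run = 1
        add(run)                                  # flush the final run
    return hist
```

For instance, with an identity speaker over the toy trajectory `[0, 0, 0, 1, 1, 2]`, `N=3`, and `lambdas=[0.0, 0.3, 0.7]`, the runs have lengths 3, 2, and 1 and the returned histogram is `[1, 2, 0]`.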
We have now added the following lines to emphasise it:\\n\\n\\\"In Appendix E.1, we show that this framing is sufficient to grant internal validity to our metric, meaning that this framing of the CAM (i) enables us to discriminate between different languages that are known to build different state-abstractions (e.g. synthetic languages that refer to all or only one specific attribute of objects, such as color or shape, used to caption a video stream that is the egocentric viewpoint of an agent randomly walking in a 3D room with many randomly-placed objects), and (ii) maps languages without consistent state-abstractions (e.g. shuffled captions over a video stream) close to a null distribution histogram.\\\"\\n\\nPlease let us know if those additions address your concerns and/or whether you have propositions to improve them.\"}", "{\"title\": \"Reply 1 (Part 1/2)\", \"comment\": \"We thank the reviewer for their time, thorough review, and constructive comments.\\nWe reply to those and the questions below.\\n\\n# Q1 : \\\"Abstract: It\\u2019s not entirely clear to me what limitations of NL-based counterparts you are referring to here.\\\" :\", \"we_assume_you_are_referring_to_the_following_sentence\": \"\\\"Our results indicate that our proposed EL-guided agent, entitled EReLELA, achieves similar performance as its NL-based counterparts without its limitations.\\\"\\n\\nWe meant to refer to the limitations of NL-based RL exploration methods in terms of the cost of collecting NL descriptions.\\nWe agree with you that the formulation is ambiguous.\\nWe propose to rephrase by simply removing the ambiguous part of the sentence, since it is already addressed earlier in the abstract:\\n\\n\\\"Our results indicate that our proposed EL-guided agent, entitled EReLELA, achieves similar performance as its NL-based counterparts.\\\"\\n\\nPlease let us know if you feel some precision would be better.\\n\\n# Q2: \\\"p1: \\u201cRND \\u2026 which can be difficult to deploy\\u201d 
\\u2014 Why are they difficult to deploy? RND is a very straightforward intrinsic reward method.\\\"\\n\\nWe agree with you: our statement about the complexity of deployment of RND and NGU here is unfortunately ambiguous and lacks precision.\\nWe meant it mainly in comparison to a count-based exploration approach, in the sense that deploying a count-based exploration method can be considered simpler since it involves (i) fewer moving parts (e.g., a state-count buffer versus RND's random and predictor networks and predictor optimizer) that (ii) can also be deemed simpler to implement (no tricks required, in contrast to RND's tricks like reward normalization and observation clipping and normalization), and (iii) it involves fewer hyperparameters to finetune (e.g. only a reward-mixing coefficient, as opposed to RND's reward-mixing coefficient, architectures of the random and predictor networks, hyperparameters of the predictor optimizer, different intrinsic and extrinsic discount factors, and the number of timesteps to step the initial random agent through the environment to harvest states for normalization purposes before starting optimization...).\\n\\nIn the revised paper, we have clarified this in the above specified terms.\\nWe hope that you agree with our statement now, but please let us know if you would prefer a different formulation or if you still find ways to clarify our statement.\\n\\n# Q3: \\\"From Figure 1 it looks like the intrinsic reward is only generated from the speaker. Why shouldn\\u2019t one also derive the intrinsic reward from the listener?\\\"\\n\\nWe appreciate your attention to detail in this matter. We have been considering deriving an intrinsic reward from the listener too, but the formulations we considered (e.g. 
entropy of the distribution over candidate stimuli, or referential game accuracy) forced our design, primarily, to become an *across-training* exploration strategy [Stanton & Clune, 2018] (whereas the current design centered around the speaker's utterances allows an *intra-life* framing), and, secondly, they further increased the complexity of the system.\\n\\nWe aim to investigate those other sources of intrinsic reward in subsequent papers. Indeed, we feel that the narrative of the current paper is already quite dense and we did not want to risk losing the readers (more than we might already have in some places...) by adding an extra element to our proposed initial architecture.\\n\\n**With this paper, we emphasise that we solely seek to show the viability of using EL to guide the exploration of an RL agent, and we therefore chose to reduce the designed architecture to its minimal requirements.**\"}" ] }
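As an illustrative footnote to the Q2 reply in this thread (hypothetical code, not any paper's actual implementation): the count-based exploration approach invoked there as the simpler point of comparison can indeed be written in a few lines, with a single state-count buffer and a single reward-mixing coefficient.

```python
from collections import defaultdict
from math import sqrt

class CountBasedBonus:
    """Minimal count-based intrinsic reward: r_int(s) = beta / sqrt(N(s)).

    Illustrative sketch only. `beta` plays the role of the single
    reward-mixing coefficient mentioned in the discussion above.
    """

    def __init__(self, beta=1.0):
        self.beta = beta
        self.counts = defaultdict(int)   # the 'state-count buffer'

    def __call__(self, state):
        # `state` is assumed hashable (e.g. a tuple of observations)
        self.counts[state] += 1
        return self.beta / sqrt(self.counts[state])
```

The bonus decays with repeated visits: the first visit to a state yields `beta`, the fourth visit `beta / 2`, which is what makes novel states comparatively more rewarding.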
7idCpuEAiR
SC-OmniGS: Self-Calibrating Omnidirectional Gaussian Splatting
[ "Huajian Huang", "Yingshu Chen", "Longwei Li", "Hui Cheng", "Tristan Braud", "Yajie Zhao", "Sai-Kit Yeung" ]
360-degree cameras streamline data collection for radiance field 3D reconstruction by capturing comprehensive scene data. However, traditional radiance field methods do not address the specific challenges inherent to 360-degree images. We present SC-OmniGS, a novel self-calibrating omnidirectional Gaussian splatting system for fast and accurate omnidirectional radiance field reconstruction using 360-degree images. Rather than converting 360-degree images to cube maps and performing perspective image calibration, we treat 360-degree images as a whole sphere and derive a mathematical framework that enables direct omnidirectional camera pose calibration accompanied by 3D Gaussians optimization. Furthermore, we introduce a differentiable omnidirectional camera model in order to rectify the distortion of real-world data for performance enhancement. Overall, the omnidirectional camera intrinsic model, extrinsic poses, and 3D Gaussians are jointly optimized by minimizing weighted spherical photometric loss. Extensive experiments have demonstrated that our proposed SC-OmniGS is able to recover a high-quality radiance field from noisy camera poses or even no pose prior in challenging scenarios characterized by wide baselines and non-object-centric configurations. The noticeable performance gain in the real-world dataset captured by consumer-grade omnidirectional cameras verifies the effectiveness of our general omnidirectional camera model in reducing the distortion of 360-degree images.
[ "Self Calibration", "Gaussian Splatting", "Radiance Field", "Omnidirectional Vision", "Bundle Adjustment" ]
Accept (Poster)
https://openreview.net/pdf?id=7idCpuEAiR
https://openreview.net/forum?id=7idCpuEAiR
ICLR.cc/2025/Conference
2025
{ "note_id": [ "shMJINmcRE", "oYZ1JwmwsK", "krwopKpyQP", "jK5TwuXHWl", "iZnLgsVJjZ", "iYb3IcQjA4", "hqOCL11aKP", "SjqYn5oAo5", "RiI2BkvLDr", "RBC7M1M6kQ", "PkVohNGdKQ", "MQMVft3rxs", "JtaPCkejte", "IfF73MTIyZ", "FuVopjuJxa", "ACJdgUuzBf", "9Kjp6vKmTc", "6pcjz2BVJq" ], "note_type": [ "official_comment", "official_comment", "decision", "official_review", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732473447060, 1732470230444, 1737523770839, 1730543110456, 1732607707010, 1732474285749, 1734670473709, 1732470420441, 1730406737009, 1732612993392, 1732508647539, 1732609070450, 1730342147508, 1732613933560, 1729650456539, 1732607649563, 1732474147272, 1732547966114 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6454/Authors" ], [ "ICLR.cc/2025/Conference/Submission6454/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6454/Reviewer_CA85" ], [ "ICLR.cc/2025/Conference/Submission6454/Authors" ], [ "ICLR.cc/2025/Conference/Submission6454/Authors" ], [ "ICLR.cc/2025/Conference/Submission6454/Area_Chair_PKNq" ], [ "ICLR.cc/2025/Conference/Submission6454/Authors" ], [ "ICLR.cc/2025/Conference/Submission6454/Reviewer_83HV" ], [ "ICLR.cc/2025/Conference/Submission6454/Reviewer_wcnp" ], [ "ICLR.cc/2025/Conference/Submission6454/Reviewer_CA85" ], [ "ICLR.cc/2025/Conference/Submission6454/Reviewer_dnTE" ], [ "ICLR.cc/2025/Conference/Submission6454/Reviewer_dnTE" ], [ "ICLR.cc/2025/Conference/Submission6454/Reviewer_83HV" ], [ "ICLR.cc/2025/Conference/Submission6454/Reviewer_wcnp" ], [ "ICLR.cc/2025/Conference/Submission6454/Authors" ], [ "ICLR.cc/2025/Conference/Submission6454/Authors" ], [ "ICLR.cc/2025/Conference/Submission6454/Authors" ] ], "structured_content_str": [ 
"{\"comment\": \"Thank you for the precise and insightful comments; please find our responses below:\\n>***W1: About title preciseness of self-calibrating.***\\n\\nThis point is worth discussion. By definition, calibration determines or adjusts the accuracy and quality of measurements. Given SfM estimation without perturbation, SC-OmniGS can continue to refine camera models and poses, which increases reconstruction performance, as evidenced in Table 2. In our experiments, we have also studied various situations of radiance field self-calibration involving varying levels of initialization noise to verify our method's robustness. Training from scratch is just a special situation where all camera poses are initialized at the origin of world coordinates. In essence, our paper title, \\\"SC-OmniGS: Self-Calibrating Omnidirectional Gaussian Splatting,\\\" accurately encapsulates the core content of our research.\\n\\n>***Q1: Evaluation of optimized camera poses.***\\n\\nWe appreciate your suggestion and have incorporated the evaluation results of optimized camera poses in the revised version. Our method has indeed achieved the highest accuracy in pose estimation, a fundamental aspect crucial for the reconstruction of high-fidelity radiance fields. For details, please refer to Table 7 in the revised paper.\\n\\n>***Q2: Showcase of improvements in the intrinsic parameters.***\\n\\nSome self-calibrating radiance field papers utilize COLMAP results as ground truth to evaluate their optimized intrinsic parameters. However, as we mentioned in the introduction section (i.e., Lines 43-46), existing SfM methods rely on an idealized omnidirectional camera model assumption and overlook the adverse effects of omnidirectional camera distortion in real-world scenarios. We are the first to tackle this issue but cannot obtain pseudo ground truth to conduct a quantitative evaluation of intrinsic parameters. 
As a radiance field method, we instead relied on reconstruction quality, i.e., novel-view rendering quality, to reflect the improvement of camera parameters. We also conducted an ablation study (Table 3) to solely evaluate camera model efficacy in terms of reconstruction quality. We believe the experiment is comprehensive.\\n\\n>***Q3: Are the optimizations of focal length and distortion parameters shared across all views?***\\n\\nYes, we only optimized a single omnidirectional camera model and used it for all views in each scene. We have made this clearer in the implementation details of the revised version (Lines 335-336).\"}", "{\"comment\": \">***W1: Limited technical improvement.***\\nThe scientific motivation for combining self-calibration and omnidirectional GS is limited.\\n\\nThe proposed SC-OmniGS is not an incremental work that simply combines existing solutions. Although pose optimization along the Gaussian splatting process has been studied in some recent GS-based SLAM methods, they only support perspective images without distortion. Their theoretical analysis and implementation of camera pose derivatives in 3D Gaussian Splatting cannot be directly reused to achieve self-calibrating omnidirectional radiance fields. Additionally, joint optimization of intrinsic camera models and GS is still underexplored. \\n\\nOur SC-OmniGS systematically analyzes and achieves omnidirectional camera pose optimization within the omnidirectional Gaussian splatting procedure. We are the first to tackle the complex distortion pattern contained in omnidirectional cameras by introducing a novel differentiable omnidirectional camera model. Furthermore, we propose a weighted spherical photometric loss to enhance omnidirectional radiance field reconstruction quality. \\n\\nWe believe our work will attract considerable attention and make a positive contribution to the omnidirectional vision community. 
\\n\\n>***W2: Gradient derivation is one of the key technical parts of the paper, but it has already been addressed.*** \\nThe derivation of gradients on spherical camera poses (Eqs. (13--14)) is the key technical part. However, it is rather straightforward, since it would naturally extend from perspective cases. Moreover, some SfM software packages supporting the spherical camera already have gradient computations similar to Eqs. (13--14).\\n\\nThe gradient computations in SC-OmniGS and in the mentioned software packages (Metashape, OpenMVS) are theoretically different. The mentioned methods optimize camera poses by minimizing the 2D-to-3D reprojection residuals of corresponding points. The optimization problem is formulated as a factor graph and solved by the Levenberg\\u2013Marquardt (LM) algorithm, i.e., using a first-order approximation of the error function. By contrast, our optimization objective in SC-OmniGS is to minimize the weighted spherical photometric loss between rendered and reference images. The rendering process is differentiable, a key departure from traditional methodologies. The gradients of the omnidirectional camera pose are then derived and back-propagated along the GS process. \\n\\n>***Q1: Emphasizing the technical novelty of the proposed method again.***\", \"the_primary_technical_contributions_of_our_work_include\": \"1. **Gradient Derivation for Omnidirectional Camera Poses.** SC-OmniGS stands as a pioneering effort dedicated to the precise calibration of omnidirectional radiance fields, showcasing cutting-edge performance levels. These advancements can further facilitate applications such as GS-based omnidirectional SLAM.\\n2. **Addressing Complex Distortion Patterns with a Generic Omnidirectional Camera Model.** Due to the complex distortion patterns inherent in omnidirectional cameras, current 3D omnidirectional vision methods rely on an ideal spherical camera model assumption, resulting in suboptimal performance, as we discussed in the main paper (Lines 43-50). 
To the best of our knowledge, we are the first to effectively handle this issue by proposing a generic camera model tailored for 360-degree cameras. \\n3. **Enhanced Reconstruction Quality through Weighted Spherical Photometric Loss**. To promote spatially consistent optimization and elevate the overall quality of omnidirectional radiance field reconstruction, we introduce a novel weighted spherical photometric loss function.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"This paper proposes an extension of 3D Gaussian splatting (GS) to omnidirectional images that enables self-calibration. GS for omnidirectional images has been studied previously (OmniGS), but it assumes pre-computed camera positions. The proposed method refines the camera poses during gradient-based optimization. Experiments show that the proposed method improves the vanilla OmniGS. A main technical contribution of the paper is to derive the backward gradient for pose refinement of spherical images.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"### Self calibration + omnidirectional images\\nThe proposed method would be the first attempt to combine self-calibration and omnidirectional GS.\\n\\n### Backward gradient for spherical images\\nFor pose refinement of spherical images, the paper derives the gradients in Eqs. (13--14). This would be a technical novelty in this paper.\", \"weaknesses\": \"### Limited technical improvement\\nSelf-calibration of GS has been well-studied so far. Also, omnidirectional GS already exists. The proposed system would be practical, but the scientific motivation for combining those two is not quite large, i.e., the technical novelty is limited.\\n\\n### Gradient derivation\\nThe key technical part of the paper, the derivation of gradients on camera poses of spherical images (Eqs. (13--14)), is rather straightforward. This would be naturally extended from perspective cases. 
\\n\\nWhile the paper describes \\\"converting 360-degree images to cube maps...\\\" this is just about the camera models.\\nAlthough in different contexts, for example, Metashape and OpenMVS support spherical camera models for the SfM problem, which involves bundle adjustment (i.e., non-linear optimization using first-order derivative), so they should compute gradients in somewhat similar ways to Eqs. (13--14).\", \"questions\": \"I would appreciate it if the authors emphasized the technical (or scientific) novelty of the proposed method again.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThank you very much for dedicating your time and effort to reviewing our paper. We are grateful for your constructive feedback, which has significantly enhanced the quality of our work. \\n\\nIf you have any further concerns or suggestions, please do not hesitate to share them with us. We look forward to the opportunity for further discussion and paper refinement. And we hope that our work can make a valuable contribution to the community.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"comment\": \">***W1: Brief captions in Figure 2 and 3.***\\n\\nWe tried to elaborate figure details in the captions while the space exceeded the limitation. Considering these two figures are self-explanatory and their corresponding contents are on the same page, we therefore keep captions of Figure 2 and 3 concise. \\n\\n>***W2: Lack some geometric information, such as depth visualizations and point cloud reconstructions.***\\n\\nThank you for your suggestions. We have added depth visualizations to the Appendix of the revised version. 
Please refer to Figure 8 of the revision.\\n\\n>***Q1: About the benchmark dataset.***\\n\\nYes, the datasets 360Roam and OmniBlender are composed only of 360-degree images and are commonly used to evaluate the performance of omnidirectional radiance field methods.\"}", "{\"metareview\": \"This paper presents a method of self-calibrating 3D Gaussian Splatting from omni-directional images. While there has been a 3DGS method using omni-directional images as input, the paper introduces a differentiable omnidirectional camera model that enables ray-wise distortion handling and develops the derivation of gradients for pose optimization. The strength of the work is the differentiable omnidirectional camera model, which allows the camera pose/parameter refinement together with the 3DGS updates. The weakness was the lack of evaluation of the optimized camera parameters, basically failing to showcase the strength of the self-calibration part. This part was amended during the interaction between reviewers and authors. As a result, the four expert reviewers were all positive about the paper. The AE agreed with the reviewers' opinions and rendered this recommendation.\", \"additional_comments_on_reviewer_discussion\": \"During the reviewer-author discussion phase, there were questions about the evaluation of the refined camera parameters. The authors clarified the point with an additional table to demonstrate the effectiveness. The reviewers also pointed out that the method was rather incremental without a strong technical novelty. 
During the discussion, it was agreed that the method actually contained a non-trivial technical contribution in the derivation of the gradient in the omnidirectional camera model, and the method has a strong merit in its application aspect.\"}", "{\"title\": \"Global Response:\", \"comment\": \"In the paper, we proposed the first system for self-calibrating omnidirectional radiance fields, which is able to jointly optimize 3D Gaussians, omnidirectional camera poses and camera models. Notably, our work includes a thorough theoretical analysis of omnidirectional camera pose gradients along the omnidirectional Gaussian splatting procedure, allowing efficient and effective optimization of noisy camera poses. Moreover, we introduced a novel differentiable omnidirectional camera model to address the intricate distortion patterns inherent in omnidirectional cameras, thereby enhancing performance in real-world scenarios. The extensive experiments verified that our method achieved state-of-the-art performance.\\n\\n**We extend our sincere gratitude to all reviewers for their valuable feedback and acknowledgment of excellent presentation, extensive experiments, and the significant contributions our work makes to the research community.**\\n\\nPlease refer to our detailed responses to specific comments provided below. Furthermore, we have carefully revised the manuscript according to reviewers' suggestions, highlighting these changes in magenta. 
The major modifications in the revised version are summarized below:\\n\\n1) In Table 1, we have included the evaluation results of SC-OmniGS with point cloud initialization involving both random and estimated depth, given perturbed camera input, in response to Reviewer **dnTE**.\\n2) In Appendix C.2, we have included the pose optimization evaluation results comparing different calibration methods in Table 7, in response to Reviewers **83HV** and **dnTE**.\\n3) In Appendix C.2, we have incorporated depth visualizations rendered by various calibration methods in Figure 8, in response to Reviewer **wcnp**.\"}", "{\"summary\": \"This paper proposes a system for self-calibrating omnidirectional radiance fields, aiming to optimize 3D Gaussians, omnidirectional camera poses, and camera models in tandem. While the authors describe this as the first system of its kind, the contribution can largely be seen as an engineering effort to integrate multiple optimized parameters within a single framework. 
The primary novelty in the work appears to lie in the introduction of a differentiable omnidirectional camera model that enables ray-wise distortion handling and in the derivation of gradients for pose optimization.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"The paper is well-structured and easy to follow, making the methodology and findings accessible to readers.\", \"The introduction of a differentiable omnidirectional camera model that enables ray-wise distortion and the derivation of gradients for pose optimization are valuable innovations that expand the applicability of the system.\", \"The use of spherical weights in the photometric loss ensures spatially balanced optimization, which enhances the robustness and accuracy of the optimization process.\", \"The experiments and results demonstrate superior performance compared to previous methods, highlighting the effectiveness of the proposed approach.\"], \"weaknesses\": \"Misleading Terminology in Title: While the title suggests \\\"self-calibrating omnidirectional Gaussian splatting,\\\" the approach relies on initialization from a structure-from-motion (SfM) pipeline rather than directly calibrating intrinsic parameters from the images alone. This approach is more accurately an optimization process rather than an auto-calibration technique in the classical sense (e.g., auto-calibration from absolute dual quadrics in multiple-view geometry).\", \"questions\": [\"The paper reports PSNR results to demonstrate performance, but since it also optimizes camera poses, it would be beneficial to include a comparison of the optimized extrinsic parameters against ground truth values.\", \"Given the emphasis on self-calibration, could the paper also show improvements in the intrinsic parameters after optimization?\", \"Are the optimizations of focal length and distortion parameters shared across all views, or are they optimized per view? 
Clarifying this would be helpful.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you for the answers\", \"comment\": \"I have looked through all the rebuttal comments, including those of other reviewers. They are arranged in detail and have fully convinced me. There are no more concerns, and I have decided to raise my score.\"}", "{\"title\": \"Thanks for the effective rebuttals\", \"comment\": \"Thanks for the rebuttal comments.\\n\\nI thoroughly went through the others' comments and rebuttals, as well as the revised paper. \\n\\nRegarding the scientific contributions, I may have underestimated them.\\nAs the authors mention, this study does have application-oriented contributions, which achieve the self-calibration of GS for omnidirectional cameras.\\n\\nI understand that SfM's objectives are reprojection errors (between 2D positions of points), and their gradients are used for LM algorithms.\\nIndeed, this concept is not the same as that of GS-like methods, which minimize the photometric error through a differentiable pipeline.\\n\\nI would like to change the rating to a reasonable one.\"}", "{\"title\": \"Thanks for the rebuttal comments\", \"comment\": \"Thank you for your responses to the comments. I am pleased to note that the revised manuscript has addressed my primary concerns regarding the evaluation of pose errors and the comparison with additional methods. The proposed method demonstrates good performance in camera calibration and scene reconstruction. As a result, I lean toward accepting the paper and giving my final rating as 6.\"}", "{\"summary\": \"The paper proposes a self-calibrating Gaussian splatting method for reconstructing omnidirectional radiance fields from 360-degree images without poses or with noisy poses. 
In this framework, scene representation, camera poses, and camera models are jointly optimized by minimizing a weighted spherical photometric loss. Additionally, a differentiable omnidirectional camera model is introduced to learn camera distortion. Experimental results show that the proposed method effectively recovers high-quality radiance fields from 360-degree image inputs.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"(1) The paper proposes a novel self-calibrating method that extends omnidirectional Gaussian splatting to handle unposed or noisy 360-degree images.\\n\\n(2) The paper introduces a differentiable omnidirectional camera model, which uses trainable focal length and angle distortion coefficients to represent camera distortion. \\n\\n(3) The proposed method achieves state-of-the-art performance in novel view synthesis.\", \"weaknesses\": \"(1) The method estimates camera poses but lacks comparisons of pose accuracy. I think that comparing only rendering quality, especially with NeRF-based calibration methods, is insufficient to determine whether the superior performance of this paper is due to pose optimization, the camera model, or the scene representation using 3D GS. Therefore, I suggest adding quantitative and qualitative comparisons of camera poses on two datasets.\\n\\n(2) The experiments only compare with NeRF-based calibration methods and lack comparisons with 3D GS-based baselines, such as COLMAP-free 3D GS. Besides, these NeRF-based calibration methods are originally designed to address noisy poses, rather than unposed images. I think it would be fairer to compare with pose-prior-free methods, such as NoPE-NeRF or LocalRF.\", \"references\": \"A1. Fu, Y., Liu, S., Kulkarni, A., et al. COLMAP-Free 3D Gaussian Splatting, CVPR, 2024.\\n\\nA2. Bian, W., Wang, Z., Li, K., et al. Nope-NeRF: Optimising neural radiance field with no pose prior, CVPR, 2023.\\n\\nA3. Meuleman, A., Liu, Y. L., Gao, C., et al. 
Progressively optimized local radiance fields for robust view synthesis, CVPR, 2023.\\n\\n(3) The paper does not include an ablation study of the anisotropy regularizer loss.\", \"questions\": \"(1) It would be better to provide the experimental results mentioned in the weaknesses.\\n\\n(2) In Table 1, the paper does not provide results with random initialization and estimated depth in the comparison of perturbed poses.\\n\\n(3) The paper states that it can address scenes with wide baselines. However, the method cannot be trained from scratch on real-world multi-room scenes, even though real-world datasets have more views than synthetic datasets. I recommend including a detailed analysis and results of the failure cases to better understand the reasons behind this limitation.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thanks for the rebuttal\", \"comment\": \"Thanks for the clarification; I will keep the score the same and have no further concerns.\"}", "{\"summary\": \"The authors introduce the first system capable of self-calibrating omnidirectional radiance fields by simultaneously optimizing 3D Gaussians, omnidirectional camera poses, and camera models. Unlike previous works that project 360-degree images onto cube maps, this study preserves the integrity of 360-degree images by directly modeling the 360-degree camera. Additionally, this approach does not rely on precise camera calibration, allowing it to flexibly adapt to various downstream tasks, such as omnidirectional SLAM, by optimizing both intrinsic and extrinsic 360-degree camera parameters. This method achieves the best results among omnidirectional and self-calibration approaches based on NeRF and Gaussians.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The motivation of the paper is sound. 
Omnidirectional images have varying information densities across different pixels. Using cube-map-based projection combined with traditional perspective analysis is bound to introduce distortion (or aliasing). By directly modeling the 360-degree camera projection, the method leverages the continuity of 3D space, which can mitigate distortion and provide higher fidelity.\\n2. I like the chain-rule-based derivation of the pose gradient in Equations 13 and 14, as it makes the camera optimization more reasonable.\\n3. The experiments are very thorough. The authors compare two datasets and cutting-edge methods (OmniGS), while also discussing the experimental results under different camera and point cloud initializations (Tab. 1), as well as the results after adding various perturbations (Fig. 5). This makes it a well-rounded work.\", \"weaknesses\": \"1. Some figure captions are too brief (Fig. 2, Fig. 3), requiring readers to refer back to the main text for clarification on several unclear points, which disrupts the reading flow.\\n2. In the visualizations, it seems that only the rendered results are compared, lacking some geometric information, such as depth visualizations and point cloud reconstructions.\", \"questions\": \"I have some confusion regarding the input. I notice that in datasets like the 360Roam dataset, there are 110 training views and 37 test views. Are these views entirely omnidirectional data, or are there some additional perspective camera images used as auxiliary data?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer,\\n\\nThank you very much for dedicating your time and effort to reviewing our paper. We are grateful for your constructive feedback, which has significantly enhanced the quality of our work. \\n\\nIf you have any further concerns or suggestions, please do not hesitate to share them with us. 
We look forward to the opportunity for further discussion and paper refinement. And we hope that our work can make a valuable contribution to the community.\\n\\nBest regards,\\n\\nThe Authors\"}", "{\"comment\": \">***W1: Lack comparisons of camera poses.***\\n\\nThank you for your suggestion. We have added the evaluation results of the camera pose into the revised version, including Table 7. Our method indeed achieves the highest pose estimation accuracy, leading to high-fidelity radiance field reconstruction. \\n\\n>***W2: Lack comparison with pose-prior-free methods, e.g. COLMAP-free 3D GS, NoPE-NeRF.***\\n\\n_Comparisons with calibration methods with no pose prior. Note that both Nope-NeRF and CF-3DGS require a depth prior during camera calibration, but our SC-OmniGS calibrates cameras without any depth prior during optimization. *: reported in the paper._\\n\\n| OmniBlender (test) | Perturb | Point Init | Barbershop | Classroom | Flat |\\n|---|---|---|---|---|---|\\n| | | | PSNR / SSIM / LPIPS | PSNR / SSIM / LPIPS | PSNR / SSIM / LPIPS | \\n| Nope-NeRF | \\u2020 | N/A | 14.113 / 0.451 / 0.685 | 16.911 / 0.619 / 0.712 | 12.760 / 0.586 / 0.652 | \\n| CF-3DGS | \\u2020 | est. depth | 15.635 / 0.501 / 0.510 | 14.823 / 0.539 / 0.612 | 14.586 / 0.597 / 0.479 |\\n| SC-OmniGS* | \\u2020 | random | 33.422 / 0.944 / 0.084 | 28.971 / 0.806 / 0.214 | 31.673 / 0.895 / 0.114 |\\n| SC-OmniGS* | \\u2020 | est. depth | 33.401 / 0.940 / 0.087 | 29.385 / 0.801 / 0.195 | 31.278 / 0.897 / 0.094 |\\n\\nAlthough some self-calibrating methods emphasize that they are pose-prior-free in their papers, they in fact only support object-centric scenes or require sequential data as input. \\n\\nNoPE-NeRF and COLMAP-free 3DGS heavily rely on depth supervision during the training process, which makes them sensitive to the depth prior obtained from monocular depth estimation models. 
By contrast, our method only uses coarse point clouds for 3D Gaussian initialization without relying on dense depth supervision. Even with random point cloud initialization, our method still achieves dominant performance, demonstrating its robustness and flexibility. \\n\\nMoreover, A1 COLMAP-free 3DGS and A3 [Meuleman A et al 2023] are closely related to radiance-field-based SLAM methods, which require videos as input and make use of sequential relationships to progressively recover radiance fields. The incoming frames can be roughly initialized by the motion model. However, the benchmark dataset for omnidirectional radiance field evaluation is composed of sparse and discrete frames. Therefore, we did not use them as baselines in the paper. \\n\\n>***W3: Lack ablation study of the anisotropy regularizer loss.***\\n\\nSince the anisotropy regularizer has become common practice and is not counted among our contributions, we did not conduct an ablation study to further verify its effectiveness in our paper. \\n\\n>***Q1: It would be better to provide the experimental results mentioned in the weaknesses.***\\n\\nPlease refer to the responses in W1-3.\\n\\n>***Q2: Table 1 lacks results of the proposed methods when the input camera is perturbed while 3D Gaussians are initialized randomly or from estimated depth.***\\n\\nThank you for your thoughtful suggestion. We have reported these results in Table 1 of the revised version.\\n\\n>***Q3: Detailed analysis and results of the failure cases on real-world multi-room scenes to better understand the reasons behind this limitation.***\\n\\nAs we discussed in the \\u201cLimitations\\u201d of the paper (Line 535-539), all self-calibration methods fail to learn radiance fields without any pose priors in challenging multi-room-level scenes. This is because the tolerance level of the self-calibrating radiance field method is capped. We have analyzed our method's robustness against varying levels of camera perturbation in Sec. 
5.4. As the noise increases beyond certain levels, the reconstruction performance drops noticeably. Still, our SC-OmniGS consistently outperforms the baselines. \\n\\nWhen training from scratch without any pose prior, the initial camera poses are identical, at the origin of world coordinates. Therefore, in challenging cases, i.e., sparse and discrete views of multi-room-scale scenes, there is no doubt that the noise in the initial values has exceeded the tolerance level.\"}", "{\"comment\": \"Thank you for your thoughtful reconsideration and for increasing the score based on our rebuttal.\\n\\nWe appreciate the time you took to review our revised paper and the other comments, as well as for recognizing the scientific contributions of our study.\"}" ] }
7iCT2vmYAR
Contrastive learning of cell state dynamics in response to perturbations
[ "Soorya Pradeep", "Alishba Imran", "Ziwen Liu", "Eduardo Hirata-Miyasaki", "Taylla Milena Theodoro", "Ivan E. Ivanov", "Madhura Bhave", "Sudip Khadka", "Hunter Woosley", "Carolina Arias", "Shalin B. Mehta" ]
We introduce dynaCLR, a self-supervised framework for modeling cell and organelle dynamics via contrastive learning of representations of time-lapse datasets. Live cell imaging of cells and organelles is widely used to analyze cellular responses to perturbations. Supervised modeling of dynamic cell states encoded in 3D time-lapse data is laborious and prone to bias. dynaCLR leverages single-cell tracking and time-aware contrastive sampling to map images of cells at neighboring time points to neighboring embeddings. We illustrate the features and applications of dynaCLR with the following experiments: analyzing the kinetics of viral infection in human cells, detecting transient changes in cell morphology due to cell division, and mapping the dynamics of organelles due to viral infection. Temporally regularized embeddings computed with dynaCLR models enable efficient and quantitative annotation, classification, clustering, or interpretation of the cell states. The models reliably embed, i.e., generalize to, data from unseen experiments with different microscopes and imaging contrasts. Models trained with dynaCLR consistently achieve > 95% accuracy in mitosis and infection state classification, enable the detection of transient cell states and reliably embed unseen experiments. dynaCLR provides a flexible framework for comparative analysis of cell state dynamics due to perturbations, such as infection, gene knockouts, and drugs. We provide PyTorch-based implementations of the model training and inference pipeline and a napari plugin user interface for the visualization and annotation of trajectories of cells in the real space and the embedding space.
[ "contrastive learning", "dynamics", "cell biology" ]
Reject
https://openreview.net/pdf?id=7iCT2vmYAR
https://openreview.net/forum?id=7iCT2vmYAR
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xLTTmV4R7a", "x3DrXhVtPl", "nIcLr8h2nZ", "kAk5DeD4ya", "VOPXV9zEdd", "V9N5OXebTt", "QZFMQGY0uU", "I2Hxuvotxr", "FZWLyNVs21", "EkveLmt3fs", "7aErqdlktY", "4J8mJtwhe5", "1hwTgZ4DzP" ], "note_type": [ "official_comment", "official_comment", "decision", "official_comment", "meta_review", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1732597327028, 1733187584449, 1737523481113, 1732596043586, 1734704524286, 1730574374415, 1733187499365, 1730704650211, 1733187435854, 1732599636332, 1730631457196, 1733187730602, 1732597335288 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2024/Authors" ], [ "ICLR.cc/2025/Conference/Submission2024/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2024/Authors" ], [ "ICLR.cc/2025/Conference/Submission2024/Area_Chair_wvze" ], [ "ICLR.cc/2025/Conference/Submission2024/Reviewer_UA5v" ], [ "ICLR.cc/2025/Conference/Submission2024/Authors" ], [ "ICLR.cc/2025/Conference/Submission2024/Reviewer_oJPV" ], [ "ICLR.cc/2025/Conference/Submission2024/Authors" ], [ "ICLR.cc/2025/Conference/Submission2024/Authors" ], [ "ICLR.cc/2025/Conference/Submission2024/Reviewer_aWZ5" ], [ "ICLR.cc/2025/Conference/Submission2024/Reviewer_oJPV" ], [ "ICLR.cc/2025/Conference/Submission2024/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Thank you for the review and for acknowledging the strength of our work. 
We address your feedback below:\\n\\n**Weaknesses:**\\n```\\nThe choice of a 30-minute interval as the temporal offset might not generalize across other biological systems with different dynamics, limiting the model\\u2019s adaptability.\\n```\", \"we_address_this_concern_by_doing_new_computational_experiments\": \"we use public ALFI data to demonstrate that models trained with multiple time intervals (7 minutes - 90 minutes) reliably embed cell cycle dynamics in the test dataset acquired with 7-minute intervals (Figure 2 and corresponding appendix figures). Interestingly, we observe that when the time interval of contrastive sampling approaches the typical time interval of morphological change, the dynamic range of the embeddings is maximized. We also demonstrate that infected A549 cells imaged with a 10-minute time resolution can be reliably classified using an embedding model trained with cells imaged with a 30-minute time resolution (Figure 3, panels e and f).\\n```\\nThe reliance on phase and fluorescence imaging could constrain its utility where alternate modalities are necessary.\\n```\\nPhase and fluorescence modalities are broadly used in drug discovery and cell biology. The concepts we present can be extended to other time-lapse datasets. We have demonstrated the model's applicability with multiple phase and fluorescence modalities, particularly using the publicly available DIC dataset, ALFI. \\n\\n```\\nSince cell-aware and time-aware sampling use specific tracked cells, the embeddings may risk overfitting to individual cell trajectories instead of generalized dynamics.\\n```\\nOur sampling strategy promotes intra-track smoothness and inter-track discrimination and learns generalized dynamics, e.g., propagation of infection in a population (Figure 3, panel f). 
In the revised Figure 3, we have included embeddings of multiple independent test datasets from different cell types, imaging modalities, microscope systems, temporal sampling, and fluorescence markers to show that the dynaCLR models generalize across various experimental conditions. \\n\\n```\\nAlthough contrastive learning was chosen, the paper lacks in-depth comparisons with generative methods that authors summarize in the related work.\\n```\\nIn the revision, we compare this work with published work on time-regularized generative modeling (https://www.molbiolcell.org/doi/full/10.1091/mbc.E21-11-0561). Thank you for the suggestion. A thorough comparison with generative models is out of the scope of this work. We think a thorough evaluation of generative and contrastive models of time-lapse data is a great topic for future work. \\n\\n```\\nBy setting a fixed temporal offset, the model may miss capturing events that unfold asynchronously or at variable rates in different cells.\\n```\\nWe have demonstrated the capture of the stages of the cell cycle using ALFI, a publicly available dataset, in which mitosis spans multiple time points and unfolds asynchronously. We validated the results by comparing them with available human annotations. Please see Figure 3 for the generalization of the model enabling the capture of mitosis in a cell type that was not used to train the models.\\n```\\nModels relying on phase channels for cell division detection may struggle with subtler morphological changes that require fluorescence markers.\\n```\\nWe'd appreciate elaboration on this question. We agree that subtle changes in morphology may not be captured by any single channel. That is why we are proposing a flexible method that enables the embedding of 3D multi-channel datasets. 
Specifically, we are leveraging fluorescent labels for organelle phenotyping, see Figure 3, panels g, h, and I.\\n\\n**Questions:**\\n```\\nHow does the model perform for other tasks beyond infection classification? Like, for example, tracking mitotic spindle dynamics during first cell division in embryonic development?\\n```\\nWe now show that the framework can be used to learn phenotypes across multiple datasets and microscopes to classify cell division, classify infection, and learn organelle responses to infection.\\n```\\nHow does the model handle potential noise or artifacts in the time-lapse imaging data? How are hyperparameters tuned and how sensitive is the model to the choice of these hyperparameters?\\n```\\nIn Figure 3, we demonstrate the model's generalizability to different microscopes by performing prediction on an independent dataset with differences in noise and imaging parameters. The new data in Figure 2, Figure 3, and Table 1 illustrate that dynaCLR models lead to useable embeddings for a larger range of time interval hyperparameters.\"}", "{\"comment\": \"Dear reviewer aWZ5, We'd appreciate any follow-up questions and your revised review soon.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": [\"We want to thank all the reviewers for their detailed and constructive feedback and look forward to an active discussion over the next few days.\", \"The reviewers recognized the timeliness of the problem we are addressing and the rigor of our solution.\", \"___\\u201c temporally-aware contrastive learning enables efficient modeling of time-dependent cellular changes\\u201d, \\u201cframework facilitates rapid annotation of cell states, potentially decreasing the reliance on human-intensive and subjective labeling, a significant advancement over prior approaches\\u201d___ (reviewer oJPV);\", \"___\\u201cThe learned representation was used in several dowstream tasks.\\u201d (reviewer: aWZ5)___\", \"___\\u201cThe addition of 
generalization experiments (though only on one additional set of data) provides some grounding to the results \\u2013 without this section I don\\u2019t think the paper has much weight. I would encourage the authors to extend this to more datasets if possible.\\u201d___ (reviewer: UA5v).\", \"The key constructive feedback from reviewers concerns the method's novelty and generalizability. We have added new results aligned with the paper's original scope on the generalized embedding models trained with public and in-house datasets. We respond to the reviewers' feedback individually. Below is a summary of the main changes to the structure and content of the paper.\", \"The main figures and appendix figures report the following data :\", \"Fig 1: Overview of dynaCLR framework.\", \"Appendix Fig 1: napari plugin for interactive annotation of cell states in embedding space.\", \"Fig 2: Evaluation of time-aware contrastive sampling using public dataset (ALFI)\", \"Appendix Fig 2: Displacement, dynamic range, and smoothness of cell tracks in the embedding as a function of the contrastive sampling strategy.\", \"Appendix Fig 3: PHATE visualizations of embeddings with contrastive sampling strategies.\", \"Appendix Fig 4: Evaluation of contrastive loss using in-house (cellular response to infection) data.\", \"Appendix Fig 5: Number of significant principal components of embeddings as a function of contrastive sampling strategy.\", \"Fig. 
3: Evaluation of generalization of the learned embeddings across cell types and microscopes using public (ALFI) and in-house (cell and organelle response to infection) data.\", \"Appendix Fig 6: Robustness of dynaCLR embeddings to tracking errors in the data.\", \"Appendix Fig 7: Detecting cell division events in infected cells.\", \"Appendix Fig 8: Visual inspection of remodeling of the endoplasmic reticulum due to infection.\", \"Fig 4: Explanation of learned embeddings with engineered features and occlusion-based class attribution.\", \"We report computational experiments to show that the time-interval hyperparameter can be changed to tune embeddings' dynamic range and smoothness (Fig. 2 and Table 1).\", \"In an exciting development, we find that the dynaCLR embedding model generalizes across unseen microscopes and contrast methods (Fig. 3). We think that the self-supervised learning of temporally smooth representations of 4D (3D, multi-channel) biological datasets is a valuable novel contribution of this work.\", \"NT-Xent and triplet loss lead to embeddings with similar structures (Appendix Fig 4), and time-aware contrastive sampling improves the dynamic range and smoothness of embeddings independent of the loss function.\", \"We improve the writing throughout to highlight novel methodological aspects of the work, namely:\", \"The first use of time-aware contrastive sampling to learn embeddings from 4D tensors.\", \"Application to diverse biological datasets in which cell states are labeled by a human or via an experimental fluorescent marker.\", \"We now use precise mathematical notation to clarify the contrastive sampling strategy, the metric of dynamic range in embedding space, and the metric of smoothness in embedding space.\"], \"title\": \"Summary of the revision\"}", "{\"metareview\": \"The paper describes a self-supervised framework, called dynaCLR, for modeling cell and organelle dynamics in time-lapse microscopy datasets.
The reviewers unanimously recommend rejection, citing a lack of significant methodological novelty and a lack of the rigor in presentation expected of a machine learning paper.\", \"additional_comments_on_reviewer_discussion\": \"While there was not significant reviewer discussion, largely due to the overwhelmingly negative initial scoring of the paper, one reviewer responded that the revision did not sufficiently improve their evaluation of the paper.\"}", "{\"summary\": \"The paper describes an application of contrastive learning on a microscopy dataset to model single-cell dynamics. The method introduces time-aware sampling to the traditional contrastive learning methodology by making use of cell tracking. Three downstream applications of the learned embeddings are shown, which demonstrate how the method can be used to analyze cell-state dynamics in response to perturbations. In addition, experiments showing generalization and interpretability of embeddings are performed.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.\\tThe paper addresses an interesting topic where self-supervised learning has the potential to be extremely beneficial.
Getting annotations for large amounts of microscopy data on a single-cell level is difficult and there is a broad push in the field towards the kind of self-supervised approach described here, so this work is well placed.\\n2.\\tThe utilization of cell tracking to improve self-supervised learning is an interesting idea; cell-identity detection is potentially a useful proxy task that could aid in learning good representations.\\n3.\\tThe biological setup and experiments are well thought out and the dataset would be beneficial to researchers as a benchmark, given simple readouts like infection rate.\\n4.\\tThe addition of generalization experiments (though only on one additional set of data) provides some grounding to the results \\u2013 without this section I don\\u2019t think the paper has much weight. I would encourage the authors to extend this to more datasets if possible.\", \"weaknesses\": \"1.\\tIn general, this is written like a paper in a scientific journal rather than a machine learning conference. While it is important to provide the necessary biological context, the paper lacks the mathematical discourse required to be suitable in a machine-learning setting. For example, could you add a theoretical or mathematical description of ideas such as \\u2018smoothness\\u2019 or \\u2018richness\\u2019 of the latent space that you have mentioned?\\n2.\\tI\\u2019m not quite sure that the quantitative results match up with the claims made. For example, performance scores in Table 1 are better for the \\u2018classical\\u2019 contrastive learning approach as opposed to the \\u2018cell-aware\\u2019 and \\u2018time-aware\\u2019 approaches. The arguments made based on the Euclidean distance plots in Figure 2 are strenuous, and I don\\u2019t agree with the qualitative conclusions made about time-regularized contrastive learning being better than the classical approach from these results (I will add more on this in the next points). 
Overall, I don\\u2019t think the contributions here are significant enough and the arguments don\\u2019t hold enough weight for me to accept this paper. \\n3.\\tThe whole \\u2018temporal continuity\\u2019 argument seems a little counterintuitive to me. The \\u2018time-aware\\u2019 loss function is designed to encourage the model to ignore differences at the time-scale of the time hyperparameter Tau, but you expect the model to maintain temporal differences at time-scales t > Tau. I think Tau needs to be much smaller than the time-scale of your expected changes for this to work. If you expect changes in cells to happen over a few hours, the hyperparameter Tau being 30 minutes doesn\\u2019t make sense \\u2013 your model is just going to smooth out all temporal information. I know you are limited by the fact that you can\\u2019t take images, say, every 5 minutes, but the current setup doesn\\u2019t make sense to me.\\n4.\\tJust to add to the previous argument \\u2013 in your current setup, it matters how long you train your model for, since overtraining could completely smooth out all temporal information. I don\\u2019t even know how you would design experiments to determine what a good number of epochs to train is without having an idea of how much temporal smoothness is \\u2018correct\\u2019. For example, in the limit of an infinite number of epochs under this loss function, all temporal information would be removed from your model. I just don\\u2019t see how you could both encourage temporal smoothness while maintaining a temporally faithful embedding when your data has a temporal resolution of 30 minutes over 24 hours.\\n5.\\tIn the case of \\u2018time and cell-aware sampling\\u2019, how do you ensure that the model is robust to imaging factors like brightness or sensor noise? 
The point of such augmentations in the classical contrastive learning case is to teach the model to ignore spurious confounders which may actually show up in your real data \\u2013 but the \\u2018time and cell-aware\\u2019 model never learns to do this. Is this model actually good at generalizing? What happens if your imaging conditions change a little? Would this model still work? I think many questions need to be answered here.\\n6.\\tI think the novelty proposed here lacks conceptual correctness in my view. With traditional ways of using temporal information in self-supervised learning, like predicting the temporal order of randomly flipped images, it makes sense how these would lead to a temporally unbiased and meaningful embedding space. My arguments in the points above highlight my concerns about why the proposed method may not be doing the same.\\n7.\\tWithout actual ground-truth annotations of fine-grained temporality, the only way to assess the quality of the embeddings is through the results on downstream temporal tasks. However, the classical contrastive sampling variant seems to perform better on the downstream infection classification task, which tells me that the classical method is leading to better-quality embeddings than the proposed method.\", \"questions\": \"1.\\tI\\u2019m a little bit confused about the quantitative results. Specifically, you write \\u201cTherefore, we evaluate the accuracy of visual representation learned by our method using a biologically relevant benchmark: accuracy of the classification of the cell states with 3 hours of expert annotations. We compare our method with two baseline methods: fully supervised time-agnostic semantic segmentation of the infection state and time-agnostic contrastive learning.
Compared to the \\u2248 80% accuracy achieved by the supervised model and 60 \\u2212 65% accuracy achieved by the time-agnostic contrastive learning, DynaCLR models consistently achieve \\u2248 95% accuracy.\\u201d I only see results in Table 1 that show classical contrastive sampling having an accuracy of 98.8%. Can you clarify where the results of the supervised model are and why there is a mismatch between the claimed values of 60-65% and the values shown in the table?\\n2.\\tWhen you say, \\u201cFor the experiments in this paper, we set \\u03c4 = 30 min, as we found this captures significant cell changes while maintaining temporal continuity without over-sampling.\\u201d Does this mean you tested with different \\u03c4 values? If you did any experiments on this it would be good to include those, and would contribute to my concerns in the weaknesses section on temporal continuity and temporal faithfulness.\\n3.\\tIn Section 3.3 you claim, \\u201cWe compute (2) and (3) to evaluate the temporal evolution of the embeddings as a gradual, steady increase signifies strong temporal smoothness. In both figure 2 a) and 2 b), cell and time-aware sampling shows a smooth rising curve with minimal fluctuations. The classical contrastive method exhibits a rapid initial increase followed by plateaus, while the cell-aware approach shows intermittent fluctuations.\\u201d I\\u2019m not sure I see in Figure 2 what is described here. It seems like the curves for Euclidean distance are pretty similar between the different strategies. In fact, in the UMAP version it seems like the cell and time-aware version is most fluctuating (whether UMAP is an appropriate space in which to perform this is another question, since it is known to not maintain global structure). Can you please elaborate on this? 
I\\u2019m also unsure what you mean by \\u201crichness\\u201d of representation in Figure 2C and how you determined this.\\n4.\\tCan you elaborate on the results of the integrated-gradients-based interpretability in Figure A1? What exactly is the model focusing on in the two channels? Can you provide evidence that this is relevant to the output task?\\n5.\\tYou say \\u201cWe observe a clear transition of cell states from interphase to mitosis as we follow the cells in UMAP space, particularly in models trained solely with the phase channel and incorporating temporal regularization\\u201d. I kind of see what you\\u2019re trying to say but it\\u2019s still very unclear. Can you provide a mathematical comparison of the two models and show that one model is \\u2018smoother\\u2019 than the other over all cell division events?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear reviewer oJPV,\\nWe'd appreciate any follow-up questions and a revised review soon.\"}", "{\"summary\": \"The authors present a self-supervised framework for leveraging contrastive learning to model cell state dynamics from time-lapse imaging.\\nThe model allows temporally adjacent states to be mapped closely together, helping achieve accurate, efficient, and label-free analysis of dynamic cell states under perturbations like viral infection.
The paper presents a unique application of contrastive learning in cellular imaging and stands out for its temporal coherence and robustness in infection state classification.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The use of temporally-aware contrastive learning enables efficient modeling of time-dependent cellular changes, showing improved performance over traditional contrastive methods, especially under perturbative conditions.\", \"The framework facilitates rapid annotation of cell states, potentially decreasing the reliance on human-intensive and subjective labeling, a significant advancement over prior approaches.\", \"By using a cell-aware approach, the framework attempts to address the intrinsic heterogeneity across cell populations, an advantage over traditional time-agnostic contrastive approaches.\"], \"weaknesses\": [\"The choice of a 30-minute interval as the temporal offset might not generalize across other biological systems with different dynamics, limiting the model adaptability.\", \"The reliance on phase and fluorescence imaging could constrain its utility where alternate modalities are necessary.\", \"Since cell-aware and time-aware sampling use specific tracked cells, the embeddings may risk overfitting to individual cell trajectories instead of generalized dynamics.\", \"Although contrastive learning was chosen, the paper lacks in-depth comparisons with generative methods that authors summarize in the related work.\", \"By setting a fixed temporal offset, the model may miss capturing events that unfold asynchronously or at variable rates in different cells.\", \"Models relying on phase channels for cell division detection may struggle with subtler morphological changes that require fluorescence markers.\"], \"questions\": [\"How does the model perform for other tasks beyond infection classification? 
Like, for example, tracking mitotic spindle dynamics during first cell division in embryonic development?\", \"How does the model handle potential noise or artifacts in the time-lapse imaging data? How are hyperparameters tuned and how sensitive is the model to the choice of these hyperparameters?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear reviewer UA5v,\\nWe'd appreciate any follow-up questions and a revised review soon.\"}", "{\"comment\": \"Thank you for the thorough review. We address your feedback below:\\n\\n**Weaknesses:**\\n\\nWeakness #1:\\n\\nThank you for this input. In this revision, we have introduced mathematical notation that describes the contrastive sampling strategy. We also use the precise term (dynamic range) instead of richness. We have developed metrics (see Appendix A.1) of dynamic range and smoothness based on intra-track and inter-track distances in the embedding space to rank the models.\\n\\nWeakness #2, #5, #6, #7:\\n\\nIn this work, we prioritize the models that achieve higher dynamic range, smoothness, and classification accuracy with efficient use of human annotations. The goal is to visualize and identify the evolution of multiple biological states in the embedding space in response to the perturbations. The new results (Fig. 2, Appendix Figures 2 & 3, and Table 1) convincingly show that increasing the dynamic range of embeddings with cell- and time-aware models leads to consistently better classification accuracy than the classical contrastive models. We found that the previous embeddings of infected cells were affected by the photobleaching of the infection sensor on the confocal microscope, which might have compromised the accuracy of the cell- and time-aware models. So we use our light-sheet imaging system to collect training and test data at two different time resolutions (see Section 3.2.2).
We soften the claim for infection classification that self-supervised models are more annotation-efficient than dense semantic segmentation models.\\n\\nWeakness #3, #4, #6, #7:\\n\\nThank you for the question. An intuitive argument for the time-aware positive and negative sampling is that it promotes intra-track smoothness and inter-track discrimination. The large shape space of randomly chosen negative samples at t+Tau in a given batch should prevent the model from ignoring the differences in shapes. To evaluate this, we have trained several cell cycle models using public (ALFI) data acquired with 7-minute time resolution and multiple time intervals (Fig. 2, Appendix Figures 2 & 3). We also visualize the structure of embeddings with PHATE (https://github.com/KrishnaswamyLab/PHATE) rather than UMAP, as PHATE better preserves the continuity of the embedding space in low-dimensional projections. We have also applied our infection model to a 10-minute temporally sampled dataset (Figure 3, panels e and f), which shows that the model trained with the 30-minute dataset enables the analysis of infection dynamics in data acquired with 10-minute time resolution.\\n\\n**Questions:**\\n\\nQuestion #1:\\n\\nThe supervised method was used as a step to bootstrap infection annotation, which a human annotator further corrected to serve as the ground truth of infection for testing the DynaCLR infection state model. The results from the supervised model are added to Table 3 in the current iteration. 60-65% accuracy was achieved for a self-supervised model trained to predict infection state from just the phase channel, which we have dropped from the table to avoid confusion. We have edited the table and text to clarify the data.\\n\\nQuestion #2:\\n\\nIn this iteration, we have included an evaluation of the optimal temporal sampling for smooth predictions, using multiple models trained with a publicly available DIC dataset at various temporal samplings.
You can find the new results in Figure 2 and Appendix Figures 2 and 3. We have also demonstrated the application of the model trained with 30-minute sampled data on a 10-minute sampled dataset in Figure 3, panel e.\\n\\nQuestion #3:\\n\\nWe\\u2019ve dropped the results where we computed this metric in UMAP space since, as you\\u2019ve pointed out, UMAP does not maintain a global structure. Instead, we now report metrics in the full embedding space (see Figure 2 and Appendix Figures 2 and 3). We have also started using PHATE to better display the global structure of embeddings in low-dimensional projections. We think that the definitions of dynamic range and smoothness in Appendix A.1, the data in Table 1, and the discussion of the experiment in Section 4.1 will clarify the concepts of dynamic range and smoothness. Please ask us a follow-up question if they do not.\\n\\nQuestion #4:\\n\\nThanks for pointing out the lack of \\u201cinterpretability\\u201d of this specific figure. We found that occlusion analysis provides smoother feature attribution than the integrated gradients. We now report feature attribution with occlusion analysis in Figure 4d.\\n\\nQuestion #5:\\n\\nWe've mathematically defined smoothness in Appendix A.1. We benchmark smoothness on the ALFI division dataset, for which the model was trained on the DIC channel. DIC is similar to phase, as both are label-free imaging modalities that show density-based morphological features in cells. In Table 1, you can see that time- & cell-aware sampling performs better on this smoothness metric with tau values of 21 and greater, compared to the classical and cell-aware sampling, which do not leverage temporal information.\"}", "{\"summary\": \"In this paper, the authors propose a framework for modeling cell dynamics in response to perturbations.
They suggested several downstream tasks for the learned representation, such as the analysis of viral infection kinetics in human cells, detecting transient changes in cell morphology, and mapping organelle dynamics due to viral infection. Furthermore, they reported that the proposed framework achieves an accuracy of over 95% for infection state classification, outperforming the supervised setting.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper is well-written, following a clear structure that enhances readability and comprehension.\", \"The learned representation was used in several downstream tasks.\"], \"weaknesses\": [\"The technical novelty of the paper is limited, as DynaCLR may be considered a straightforward application of the triplet loss with varied sampling strategies.\", \"Some figures (Fig. 2, Fig. 3, and Fig. 6) are not thoroughly explained; a more detailed description would enhance clarity.\"], \"questions\": [\"The choice of the triplet loss is not justified. Why is this loss more suitable for this task than alternatives like NT-Xent or InfoNCE?\", \"In Section 4.1.1, could you clarify the origin of the independent test data?\", \"In Section 4.2, the smooth transitions and tight clustering of division events are not immediately evident in Figure 4; additional support for these claims would be beneficial.\", \"In Section 4.3, it remains unclear how the referenced figures demonstrate that \\\"the encoder learns meaningful features that describe cell dynamics.\\\"\", \"Figure 6 would benefit from a more detailed explanation.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I thank the authors for the detailed response to the reviewers' comments.
Based on the quality of the revisions and the responses, I will not adjust my previous evaluation and will maintain my current rating.\"}", "{\"comment\": \"Thank you for the review and for acknowledging the strength of our work. We address your feedback below:\\n\\n**Weaknesses:**\\n```\\nThe technical novelty of the paper is limited, as DynaCLR may be considered a straightforward application of the triplet loss with varied sampling strategies.\\n```\\nWe want to state that the novelty of the work is the following:\\n* The first use of time-aware contrastive sampling to learn embeddings from 4D tensors.\\n* Application to diverse biological datasets in which cell states are labeled by a human or via an experimental fluorescent marker.\\n* We now use precise mathematical notation to clarify the contrastive sampling strategy, the metric of dynamic range in embedding space, and the metric of smoothness in embedding space.\\n```\\nSome figures (Fig. 2, Fig. 3, and Fig. 6) are not thoroughly explained; a more detailed description would enhance clarity.\\n```\\nThank you for the comment. We have reorganized the paper to enhance clarity, recreated the figures with more datasets and experiments, and added clear descriptions in the text.\\n\\n**Questions:**\\n```\\nThe choice of the triplet loss is not justified. Why is this loss more suitable for this task than alternatives like NT-Xent or InfoNCE?\\n```\\nIn this iteration of the paper, we have experimented with both the triplet loss and the NT-Xent loss (Appendix Figure 4) and show that we obtained similar results using either loss function. We thank the reviewer for the question, which improved the quality of the work.\\n```\\nIn Section 4.1.1, could you clarify the origin of the independent test data?\\n```\\nWe now use two new independent datasets for the infection state classification experiments.
Please refer to Section 3.2.2 for details on the two independent datasets for infection. We have also included results from the publicly available cell cycle DIC data, ALFI. The independent dataset uses a different cell type, U2OS, whereas the model was trained on images of HeLa and RPE1 cell lines.\\n```\\nIn Section 4.2, the smooth transitions and tight clustering of division events are not immediately evident in Figure 4; additional support for these claims would be beneficial.\\n```\\nThe data used in the earlier iteration did not contain enough division events, since division is a rare event, which caused a class imbalance issue. So, in this iteration, we have used ALFI, a publicly available dataset with human annotation of the stages of the cell cycle, for model training. Please refer to Figure 3, panels a and b, to see the results on clustering by cell division state and the model's generalizability to other cell types.\\n```\\nIn Section 4.3, it remains unclear how the referenced figures demonstrate that \\\"the encoder learns meaningful features that describe cell dynamics.\\\"\\n```\\nThe paper has been reorganized, and these results have been revised to be more insightful.\\n```\\nFigure 6 would benefit from a more detailed explanation.\\n```\\nWe have generated results from a new, improved model with a new dataset and added an explanation of this figure in the text. Please see Figure 4, panels g, h, and i.\"}
7heZQqlY5t
GAMformer: In-Context Learning for Generalized Additive Models
[ "Andreas C Mueller", "Julien Siems", "Harsha Nori", "David Salinas", "Arber Zela", "Rich Caruana", "Frank Hutter" ]
Generalized Additive Models (GAMs) are widely recognized for their ability to create fully interpretable machine learning models for tabular data. Traditionally, training GAMs involves iterative learning algorithms, such as splines, boosted trees, or neural networks, which refine the additive components through repeated error reduction. In this paper, we introduce \textit{GAMformer}, the first method to leverage in-context learning to estimate shape functions of a GAM in a single forward pass, representing a significant departure from the conventional iterative approaches to GAM fitting. Building on previous research applying in-context learning to tabular data, we exclusively use complex, synthetic data to train GAMformer, yet find it extrapolates well to real-world data. Our experiments show that GAMformer performs on par with other leading GAMs across various classification benchmarks while generating highly interpretable shape functions.
[ "interpretable machine learning", "in-context learning", "synthetic data", "generalized additive models", "gams", "glassbox machine learning" ]
Reject
https://openreview.net/pdf?id=7heZQqlY5t
https://openreview.net/forum?id=7heZQqlY5t
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zwJXlEgtJ8", "lmBB2p36Zc", "kydLQIHacF", "gQ8XliosMy", "e89lKbmcHn", "YmJtNm56ae", "XSSIfZh9tr", "VSHMhwCQlz", "TagHYcg3OA", "Sjo2D88jvV", "Rewt1EWqel", "RWPxVCGQtJ", "RQqduEhSn5", "LSxHaQNa7j", "K2sHpwCORA", "F4oV75YNHY", "EgA2Cms6tR", "D6GFk2CBzK", "CPL1I88Y8x", "8JFasgURnV", "5gJwlpoEnX" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "decision", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732583233217, 1730758424883, 1732792949857, 1732576801779, 1732570654132, 1732570617429, 1733179434460, 1734590491953, 1732579381263, 1729361114291, 1737523795109, 1732570928222, 1729442281572, 1732570917874, 1732570571216, 1732832097519, 1732570725560, 1730713750450, 1732832321645, 1733080529124, 1733282562443 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6827/Reviewer_DRNQ" ], [ "ICLR.cc/2025/Conference/Submission6827/Reviewer_XZqU" ], [ "ICLR.cc/2025/Conference/Submission6827/Reviewer_XZqU" ], [ "ICLR.cc/2025/Conference/Submission6827/Reviewer_DRNQ" ], [ "ICLR.cc/2025/Conference/Submission6827/Authors" ], [ "ICLR.cc/2025/Conference/Submission6827/Authors" ], [ "ICLR.cc/2025/Conference/Submission6827/Authors" ], [ "ICLR.cc/2025/Conference/Submission6827/Area_Chair_9bPF" ], [ "ICLR.cc/2025/Conference/Submission6827/Authors" ], [ "ICLR.cc/2025/Conference/Submission6827/Reviewer_DRNQ" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission6827/Authors" ], [ "ICLR.cc/2025/Conference/Submission6827/Reviewer_7tYk" ], [ "ICLR.cc/2025/Conference/Submission6827/Authors" ], [ "ICLR.cc/2025/Conference/Submission6827/Authors" ], [ "ICLR.cc/2025/Conference/Submission6827/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission6827/Authors" ], [ "ICLR.cc/2025/Conference/Submission6827/Reviewer_TRGG" ], [ "ICLR.cc/2025/Conference/Submission6827/Authors" ], [ "ICLR.cc/2025/Conference/Submission6827/Reviewer_7tYk" ], [ "ICLR.cc/2025/Conference/Submission6827/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Dear Authors,\\n\\nThank you for the quick response. Please find my answers below:\\n\\n> The training of a per-dataset model, i.e. the in-context learning is not more expensive.\\n\\nI think this is a misunderstanding. I was talking about the (pre-)training time. If you take the number of FLOPs an A100 can process in 25 days, then you could fit around 400 million GAM models. I understand that you learn (very fast) in context. But that was not my point. \\n\\n> Learning the transformer is a meta-learning step that, as mentioned before, could be conceptually compared to tuning the specific GAM implementation in mgcv::gam.\\n\\nIf I understand correctly what you refer to as \\\"tuning\\\": in `mgcv` this does not take more than a second for $n=1000$ and the typical number of features you used in your experiments. Hence my confusion about the quote cited in [C2].\\n\\n> With the representation we choose, directly optimizing a linear model does not lead to an accurate result, as the binned representation does not add any smoothness constraints.\\n\\nI think you might have misunderstood. What I meant is: If you would train a GAM with basis functions (such as a B-spline basis), then this effectively becomes a (ridge-regularized) linear model. I am not saying your GAMformer does that. \\n\\n> With any other GAM, the regularization is hard-coded in the basis function or the regularization of the basis coefficients\\n\\nWhat is the downside of hard-coded regularization? 
I might need to choose a basis with a suitable null space, but that is all I have to do.\\n\\n> while we learn the optimal regularization for generalization \\n\\nIs there any theoretical evidence that supports this claim? There is one for GAMs (however, only for the given dataset).\\n\\n > We did use the default parameters and achieved bad results. We will share the code for you to confirm our results. From these results, we draw the conclusion that the default parameters do not yield good results, and manual intervention is required.\\n\\nI am a bit confused. How can you achieve bad results, and then --- with manual intervention --- perform worse than a logistic regression model?\\n\\n> Were there specific other points that you wanted us to address that we didn't address above?\\n\\n[E4], [R1], whether the [W]-parts make sense, and all the comments from my response I wrote before (25 Nov 2024, 14:20 ET) that have not been addressed (in particular the \\\"tuning\\\" question).\\n\\n> It would be great to make sure you understand the architectural difference between this work and TabPFN and why this model is more interpretable than TabPFN. From the questions above, it seems these structural differences, which are the core contribution of our work, are not clear.\\n\\nI think one way forward to avoid confusion could be to improve Figure 1 and its caption.\\n\\nHowever, understanding your architecture is not the point that I am struggling with. 
With my review, \\n- I am providing you pointers and constructive feedback to improve your manuscript (such as your list of contributions) --- something that so far the authors do not seem to value much\\n- and asking you \\n + 1) to provide evidence that your in context learning can recover a GAM model, \\n + 2) what benefits there are if inference is as expensive as for GAMs, (pre-)training is much more expensive (on the level of 400 mio GAMs), prediction performance is likely the same, and I don't know whether I can trust your method.\"}", "{\"summary\": \"The paper presents GAMformer, a novel model for fitting Generalized Additive Models (GAMs) using in-context learning (ICL) within a transformer-based framework. Unlike traditional GAMs that rely on iterative methods such as splines or gradient boosting, GAMformer uses a single forward pass to estimate shape functions for each feature, eliminating the need for hyperparameter tuning and iterative optimization. This approach is trained exclusively on synthetic data but performs competitively with existing GAMs on real-world tasks. GAMformer\\u2019s non-parametric, binned approach to shape function estimation enables high interpretability of feature impacts. Experimental results show that GAMformer matches or surpasses other interpretable machine learning methods on both synthetic and real-world tabular datasets, including clinical applications on the MIMIC dataset for ICU mortality prediction. 
Additionally, the model\\u2019s adaptability to real-world data demonstrates its potential for scalable, interpretable applications without extensive tuning.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"GAMformer is a contribution to GAMs, leveraging ICL and transformer models to eliminate iterative optimization, thereby simplifying the modeling process and reducing the computational overhead associated with traditional GAMs.\\n\\nThe model maintains high interpretability\\u2014crucial for critical fields like healthcare\\u2014while matching the performance of established methods like Explainable Boosting Machines (EBMs).\\n\\nGAMformer\\u2019s training on synthetic data enables it to generalize to real-world data effectively, a challenging task for many models, especially in interpretability-driven applications.\\n\\nThe use of a non-parametric, binned representation for shape functions allows for flexibility, particularly for capturing discontinuities or sudden shifts in feature impacts.\\n\\nThe model was rigorously tested across various benchmark datasets, and a case study on ICU mortality in the MIMIC-II dataset demonstrated its clinical interpretability potential, which is well-aligned with the paper\\u2019s goals.\", \"weaknesses\": \"GAMformer currently only supports main effects and second-order feature interactions, limiting its applicability for datasets where higher-order interactions are significant.\\n\\nThe Transformer architecture in GAMformer scales quadratically with the number of data points, leading to potential performance bottlenecks for very large datasets. 
Exploring scalable attention mechanisms, as the authors suggest, would strengthen the model\\u2019s practical use.\\n\\nWhile the clinical case study is insightful, further empirical evaluations across diverse fields (e.g., finance, manufacturing) would provide a clearer picture of GAMformer\\u2019s interpretability and performance across different domains.\\n\\nThere is a lack of quantitative results tables comparing the model with recent baselines.\", \"questions\": \"Please see the weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I have read the author's response, but my concerns remain unresolved.\\n\\nAs a result, I will maintain my original score.\"}", "{\"comment\": \"Dear Authors,\\n\\nThank you for your detailed comments. Some answers helped in resolving my unclarities. Below is my response for the rest:\\n\\n> [C3] See T3. We follow a standard evaluation procedure to show that our methodology provides comparable insights to existing approaches.\\n\\nI am not sure how a standard application with a commonly used dataset can be a paper contribution. I agree that it does a good job as a sanity check, showing that your method works similarly to other methods. But this belongs to your second contribution point (\\\"experimental results demonstrate ...\\\").\\n\\n> We compare against mgcv, and given the result in Figure 6, it is clear that mgcv without tuning is not a competitive method and is statistically significantly worse than Logistic Regression\\n\\nIf a logistic regression is better than `mgcv`, I believe something went wrong as I tried to elaborate in my review. To get to the bottom of this: What do you mean with `mgcv` without tuning? It could also help if you provide the code. \\n\\nFurthermore, I still don't agree with the sentence \\\"... to form shape functions ... eliminating the need for ... 
iterative learning and hyperparameter tuning\". `mgcv` implements various approaches that do not require separate tuning, e.g., AIC-based selection of the smoothing parameter. See also E2/T4 response below.\n\n> The reviewer is correct that the data generating process for the meta learning data used to train the transformer model \n\nI think there might have been a misunderstanding. I was talking about your illustrative example data generation, not the pre-training. Can you comment on this?\n\n> [S1] There seems to be a confusion about training, meta-training and prediction phases. \n\nI was trying to say that I do not see the benefit in your method when accounting for its complexity. \n- The inference is as expensive as for the GAM, correct?\n- The training is much more expensive, correct? (on a factual level, not philosophically)\n- Will it be significantly different from a fitted GAM in prediction performance? I don't think so, since GAMs have certain optimality properties.\n- Is there any way I can trust your method's inference? It doesn't come with confidence intervals that have theoretical guarantees as for GAMs.\n- Furthermore, once you have done the basis transformation, a GAM is a linear model. You don't need to train a transformer to learn a linear model; this can be done in a single `nn.Linear` layer (to be fair, you could benefit from meta-learning, but there are theoretical limits to this in linear models as well).\n\n> As mentioned above, it\u2019s not immediately clear how to analyze TabPFN with SHAP. This is likely computationally infeasible as TabPFN does not produce a model;\n\nMaybe I am not too familiar with SHAP, but wouldn't it be possible to use any other XAI method that works on the level of predictions such as partial prediction plots?\n\n> also, it would only result in a post-hoc explanation, not a glass-box model.\n\nBut a million-parameter model is not a glass-box either, is it? 
Just because an LLM does have a linear head that we can interpret doesn't mean we understand its reasoning or uncertainty.\n\n> [S3] This statement is somewhat confusing.\n\nTo rephrase: If I had access to a post-hoc interpretability method like the one mentioned above, would I find that TabPFN is also learning a GAM if the data-generating process of the data on which we want to predict is also a GAM?\n\n[E1] Could you please clarify? \n\nI see shape functions for GAMformer and EBM. But I was talking about the shape functions of a simple GAM model (in particular `mgcv`, because some strange things are happening in `pyGAM` and `statsmodels`). \n\n> [E2 /T4] This seems to contradict [C2], and it\u2019s unclear on what basis this claim is made. Either mgcv::gam requires tuning to perform reasonably, or it does not. We did not perform tuning and it was outperformed to a statistically significant degree by logistic regression. If it requires tuning (\u201cusing it correctly\u201d as you say) to outperform logistic regression, then [C2] does not seem valid.\n\nIt does not contradict. I think we are just talking about different things when using the word \"tuning\". \n- `mgcv` uses a criterion to define the optimal smoothing parameter (such as the AIC). If you want to call this \"tuning\", so be it. But this happens out of the box, no additional data is required. You do **not** need to set any parameters. \n- If you \"tuned\" `mgcv` (meaning you fiddled around with the parameters), then you likely either did something wrong or did something more advanced but unnecessary in most practical cases.\n\n> [Q4] This is somewhat confusing\n\nUsing an fANOVA approach, for example, as done [here](https://par.nsf.gov/servlets/purl/10298432). \n\n------\n\nDue to the deadline extension, it would be interesting if the authors could also comment on my other points.\"}", "{\"title\": \"Authors' response\", \"comment\": \"Thank you for your comments. 
The results in the paper are indeed limited to the classification setting; however, we are in the process of training a separate model for regression. We are not concerned about the discretization of input variables; we use the same discretization scheme that is commonly used with gradient boosted models and EBMs, both of which provide state-of-the-art results for regression (see Grinsztajn et al., \u201cWhy do tree-based models still outperform deep learning on tabular data?\u201d). In fact, the excellent performance of discretized gradient boosting algorithms was the motivation for this architecture.\"}", "{\"title\": \"Authors' response\", \"comment\": \"Thank you for your comments. Regarding the limitation to first and second order effects, for glassbox models, limiting to second order and main effect features is a common approach, as higher order interactions are usually not easy to interpret. It has been found that by modeling up to second order interactions, GAMs can achieve state-of-the-art results on a wide variety of datasets (see Chang et al., \u201cHow Interpretable and Trustworthy are GAMs?\u201d). While it would be possible to extend GAMformer to higher order interactions, this is unlikely to be useful in a setting that requires interpretable models, and other interpretable GAM models such as EBMs and NAMs typically restrict models to main effects and pairwise interactions.\n\nWhat recent baselines do you think are missing from our comparison?\"}", "{\"comment\": \"Could you please clarify to what extent your concerns remain unaddressed? The main idea of the work, producing parametric, interpretable models with efficient inference via in-context learning, has not been previously investigated, so we would like to understand your concern about novelty better.\n\nWe would also like to better understand your concern about quality. 
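The quantile-based discretization scheme the responses above refer to (the same one used by EBMs and gradient-boosted trees such as XGBoost and LightGBM) can be sketched roughly as follows; this is an illustrative numpy sketch under assumed bin counts, not the authors' implementation:

```python
import numpy as np

def quantile_bin(x, n_bins=8):
    """Map a 1-D feature to integer bin indices using training-set quantiles."""
    # Bin edges at evenly spaced quantiles of the training data.
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    edges = np.unique(edges)  # collapse duplicate edges for low-cardinality features
    return np.searchsorted(edges, x, side="right"), edges

rng = np.random.default_rng(0)
x = rng.exponential(size=1000)        # a heavily skewed feature
bins, edges = quantile_bin(x, n_bins=8)
counts = np.bincount(bins)            # roughly equal-mass bins despite the skew
```

The point of quantile (rather than equal-width) edges is that every bin receives a comparable number of training points, so each entry of a binned shape function is estimated from similar amounts of data.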
The real-world datasets that we evaluate on have no ground-truth classifier, and it's unlikely that the Bayes estimators for these datasets all lie within the class of generalized additive models as we parametrized it. Therefore nearly all our results address the agnostic case.\"}", "{\"metareview\": \"The paper introduces GAMformer, a method that uses in-context learning to fit Generalized Additive Models in a single step, rather than traditional iterative methods. The models presented are trained on synthetic data but demonstrate good performance on real-world datasets. The authors claim competitive performance with leading GAM implementations while maintaining interpretability.\n\nThe method is limited to first and second-order feature interactions. Moreover, the neural-network-based approach loses many of the theoretical guarantees compared to traditional GAMs. Reviewers were concerned by the cost of training the model compared to ad-hoc GAM training. Reviewers also had concerns about the experimental validation and the novelty compared to recent works.\n\nBased on the reviews and discussion, this paper appears to be marginally below the acceptance threshold. While the approach is interesting, I believe this is outweighed by reviewers' concerns so I will recommend rejection.\", \"additional_comments_on_reviewer_discussion\": \"Reviews were initially slightly negative, highlighting novelty concerns, computational complexity, statistical significance of results, comparison with standard baselines, and reproducibility.\n\nThe authors responded to reviewers' concerns making good points on one-time training cost (vs every-time training cost for GAMs) and defended their experimental methodology while acknowledging some limitations. Their rebuttal helped clarify some points but left others unresolved. One reviewer was particularly concerned about potential mistakes in baseline experiments. 
After rebuttal some reviewer concerns remained unaddressed.\"}", "{\"comment\": \"> The training is much more expensive, correct? (on a factual level, not philosophically)\\n\\nThe training of a per-dataset model, i.e. the in-context learning is not more expensive. It is, as we noted, not scalable to large datasets given the current attention mechanism, but on smaller datasets the complexity is comparable to additive models if not faster, as it is a single forward pass in the transformer. Learning the transformer is a meta-learning step that, as mentioned before, could be conceptually compared to tuning the specific GAM implementation in mgcv::gam.\\n\\n> Furthermore, once you have done the basis transformation, a GAM is a linear model. You don't need to train a transformer to learn a linear model, this can be done in a single nn.Linear layer (to be fair, you could benefit from meta-learning, but there are theoretical limits to this in linear models as well).\\n\\nWith the representation we choose, directly optimizing a linear model does not lead to an accurate result, as the binned representation does not add any smoothness constraints. The point of learning the transformer for this model is that it learns to regularize based on the data distribution seen during training. 
That is the fundamental contribution of this work: learning how to infer an additive model that optimally generalizes given a distribution over training datasets in the form of the synthetic prior.\nIt would be interesting to show a direct comparison between a linear model learned on our encoding and the GAMformer model; I'm not sure we will be able to produce this in time for the rebuttal, though.\nWith any other GAM, the regularization is hard-coded in the basis function or the regularization of the basis coefficients, while we learn the optimal regularization for generalization (such as smoothness or dealing with correlated variables) from scratch from the training data.\n\n> Maybe I am not too familiar with SHAP, but wouldn't it be possible to use any other XAI method that works on the level of predictions such as partial prediction plots?\n\nAgain, as TabPFN does not produce a model, these would be extremely expensive. I highly encourage you to read the TabPFN paper in detail to understand what would be necessary to apply partial prediction plots to TabPFN.\n\n> But a million-parameter model is not a glass-box either, is it? Just because an LLM does have a linear head that we can interpret doesn't mean we understand its reasoning or uncertainty.\n\nThe models produced by GAMformer are not million-parameter models. The function that predicts for a given dataset is a compact GAM model. This is unlike a linear head on an LLM, which uses the latent representation computed with a transformer model. We are producing essentially a linear layer (after binning) that is applied to the **original input**; that is what makes it interpretable.\n\n> But I was talking about the shape functions of a simple GAM model\n\nI understand this as including a spline-based GAM model? Unfortunately we likely won't have time to produce these graphs, even with the extended deadline, but it would indeed be an interesting addition. 
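The regularization point above can be illustrated with a small numpy sketch (bin count, penalty strength, and test function are illustrative assumptions, not the paper's setup): a plain least-squares fit on a one-hot binned encoding carries no smoothness constraint, while adding a hard-coded second-difference penalty, of the kind spline-based GAMs build in, smooths the shape function.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_bins = 600, 30
x = rng.uniform(-3, 3, n)
y = np.sin(x) + rng.normal(scale=0.5, size=n)   # noisy additive signal

# One-hot encoding of equal-width bins: the "binned representation".
edges = np.linspace(-3, 3, n_bins + 1)[1:-1]
B = np.eye(n_bins)[np.searchsorted(edges, x)]

# (a) Plain least squares on the encoding: nothing enforces smoothness,
#     so the estimated shape function is just the noisy per-bin mean.
beta_plain = np.linalg.lstsq(B, y, rcond=None)[0]

# (b) The same fit with a hard-coded second-difference roughness penalty.
D = np.diff(np.eye(n_bins), n=2, axis=0)        # second-difference operator
lam = 5.0
beta_smooth = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)

def roughness(beta):
    """Sum of squared second differences of a binned shape function."""
    return float(np.sum(np.diff(beta, n=2) ** 2))
```

Comparing `roughness(beta_plain)` with `roughness(beta_smooth)` shows the penalized fit is markedly smoother; the claim in the thread is that GAMformer meta-learns this kind of regularization from the synthetic prior instead of hard-coding it.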
We were unable to obtain competitive results from mgcv::gam, which made the comparison of less interest to us.\n\n> mgcv uses a criterion to define the optimal smoothing parameter (such as the AIC). If you want to call this \"tuning\", so be it. But this happens out of the box, no additional data is required. You do not need to set any parameters.\n\nWe did use the default parameters and achieved bad results. We will share the code for you to confirm our results. From these results, we draw the conclusion that the default parameters do not yield good results, and manual intervention is required.\n\n> Due to the deadline extension, it would be interesting if the authors could also comment on my other points.\n\nWere there specific other points that you wanted us to address that we didn't address above?\n\nIt would be great to make sure you understand the architectural difference between this work and TabPFN and why this model is more interpretable than TabPFN. From the questions above, it seems these structural differences, which are the core contribution of our work, are not clear.\"}", "{\"summary\": \"The authors use prior fitted networks to train a large transformer-type architecture that can then learn the shape functions of additive models in a single forward pass. 
The resulting model is competitive with other approaches with similar interpretability and capacity.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": [\"**Originality**: The idea of learning shape functions in-context is novel and interesting.\", \"**Numerical Experiments**: The comparisons cover a variety of models from different classes, thus giving a bigger picture of GAMformer's capabilities.\"], \"weaknesses\": [\"## Major\", \"### Contributions\", \"**[C1]** The claim that \"experimental results demonstrate GAMformer's capacity to match the accuracy of leading GAMs\" might be accurate regarding performance, but the interpretability has not been sufficiently scrutinized. See comments on Experiments below.\", \"**[C2]** In light of GAMs requiring no tuning (at least in the `mgcv` package), the claim \"... to form shape functions ... eliminating the need for ... iterative learning and hyperparameter tuning\" does not seem particularly significant.\", \"**[C3]** The contribution claiming the model was applied to the MIMIC-II dataset lacks significance. This dataset has been analyzed previously. The current study does not add any new insights. The dataset itself is also not particularly challenging, yet the modeling approach seems to have missed a key property of the dataset (see **E3** below).\", \"### Technical soundness/correctness\", \"**[T1]** The introduction to GAMs is missing a distributional assumption (a GAM consists of both structural and distributional assumptions; see Wood, 2017).\", \"**[T2]** The simulated functions are not GAMs but deterministic functions. As correctly noted by the authors, a GAM is defined by a link function, yet they use a simple indicator function without induced noise or distributional assumptions for the simulation, which does not correspond to the data-generating process of a GAM. 
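To make the [T2] point concrete, simulating from an actual GAM data-generating process requires both the structural part (additive smooth terms through a link) and the distributional part (an explicit Bernoulli draw, rather than deterministic thresholding). A hedged numpy sketch, with arbitrary illustrative shape functions:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_gam_classification(n=1000, p=3):
    """Draw (X, y) from a true GAM: logit(E[y|x]) = sum_j f_j(x_j), y ~ Bernoulli."""
    X = rng.uniform(-2, 2, size=(n, p))
    # Illustrative smooth additive terms f_j (any smooth functions would do).
    eta = np.sin(2 * X[:, 0]) + 0.5 * X[:, 1] ** 2 - X[:, 2]
    prob = 1.0 / (1.0 + np.exp(-eta))   # inverse logit link
    y = rng.binomial(1, prob)           # the distributional assumption: Bernoulli noise
    return X, y

X, y = sample_gam_classification()
```

With data generated this way, one can check whether a fitted model recovers the `f_j` up to the identifiability of additive decompositions, which is the recovery experiment this review asks for.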
The code uses `sigmoid`, but there is no distribution involved (same for the regression task).\", \"**[T3]** \\\"Allocating bins based on the quantiles of the feature in the training dataset\\\" \\u2192 This approach is likely inferior to equidistant binning, as quantile-based binning alters the data distribution of the feature (see Li and Wood, 2017).\", \"**[T4]** The comparisons with `mgcv::gam` appear incomplete (see below).\", \"### Significance\", \"**[S1]** The computational costs of:\", \"fitting a GAM are $O(N_{train} \\\\cdot (K \\\\cdot p)^2)$ (see Wood, 2020), where $p$ is the number of features and $K$ the number of basis functions (in `mgcv`, often set to 10). For the data used by the authors, this would amount to 50-800 parameters;\", \"predicting with a fitted GAM is $O(N_{test} \\\\cdot K \\\\cdot p)$.\", \"In contrast, ICL requires millions of parameters, if I understand Sec. 3.2 correctly, and even with fast inference, it is slow compared to GAMs, where typically $N > p$ and hence the quadratic scaling of $N$ in the transformer is still the bottleneck. Moreover, the authors report that the model required 25 days on a high-performance GPU, whereas all the analyzed datasets could be fit within seconds using GAMs. GAMs can also be applied to datasets of size $10^8$ using `mgcv::bam` (see Wood et al., 2017).\", \"**[S2]** The method does not seem to outperform other models in prediction accuracy and appears to be inferior to TabPFN. TabPFN itself could also be analyzed using SHAP after computing the predictions, raising the question if a specific architecture is even necessary.\", \"**[S3]** I could not identify any other significant insights of theoretical nature or similar that could be derived from a GAMformer. In particular, I would assume that the SCMs in TabPFN likely already cover GAM-type models (related to **S2**). 
This raises the question of what additional benefit is gained by making them explicit, as done here.\", \"### Experiments\", \"**[E1]** The experiments do not show the shape functions of the GAM method, which would be particularly useful for illustrative examples.\", \"**[E2]** Simulations should be designed to correspond to an actual GAM to see whether the GAMformer can actually recover those (see **T2**).\", \"**[E2/T4]** The `mgcv::gam` should not be inferior to logistic regression if used correctly (see Figure 6).\", \"**[E3]** Isn't there censoring in the MIMIC datasets? A time-to-event model might be more appropriate in this case then.\", \"**[E4]** In the Appendix experiments, the authors switch to `pyGAM`, which is known to be inferior to `mgcv::gam`, and do not report the latter's performance.\", \"### Reproducibility\", \"**[R1]** The code does not provide competitor models.\", \"### Writing\", \"**[W1]** There is some redundancy between Sections 1 and 2, which disrupts the flow of reading.\", \"**[W2]** The notation $j_{x_i}$ is somewhat confusing, as $j$ is an index in the bins and $x_i$ represents the $i$th feature in $x$.\", \"## Minor / Technical soundness\", \"**[M1]** The $f$ functions are typically referred to as *smooth terms* or *smooth functions* in the GAM literature, not *shape functions* (a term seemingly invented by the NAM community). They are also not *partial dependence plots* (as these are plots, not functions; in GAM literature, they are referred to as *partial effects*).\", \"**[M2]** The $g$ function typically does not map to $\\\\mathbb{R}$ but to a subspace ($\\\\mathcal{Y}$, or more specifically, e.g., (0,1) for the logistic function).\", \"**[M3]** What is $q_\\\\theta$ in equation (2)?\", \"**[M4]** \\\"Spline-based GAMs use the backfitting algorithm\\\" $\\\\rightarrow$ Backfitting was proposed by Hastie and Tibshirani. 
More recent approaches, like those from Wood, use PIRLS or alternatives like INLA (see Wood, 2017).\", \"**[M5]** The citation in footnote 3 (and for the `mgcv` package in general) seems incorrect.\", \"**[M6]** No \\\"shape functions\\\" for pairwise smooth interactions are shown.\", \"## References\", \"Wood, 2017: https://www.taylorfrancis.com/books/mono/10.1201/9781315370279/generalized-additive-models-simon-wood\", \"Li and Wood, 2017: https://link.springer.com/article/10.1007/s11222-019-09864-2\", \"Wood et al., 2017: https://www.tandfonline.com/doi/full/10.1080/01621459.2016.1195744\", \"Wood, 2020: https://www.maths.ed.ac.uk/~swood34/test-gam.pdf\", \"## Suggestions for Improving the Paper\"], \"here_are_some_suggestions_for_improving_the_paper\": \"1. **Writing/Novelty/Significance**: 1) Clearly articulate how GAMformer advances beyond current GAM implementations and/or what additional insights they provide for PFNs or in-context learning. If you consider not changing your listed contributions, provide a more rigorous theoretical comparison with GAMs (computational complexity, etc.). I would, however, suggest changing your argumentation and thinking about what other aspects a PFN-type model can provide that a GAM cannot.\\n\\n2. **Technical Correctness and Clarity**: Revise the GAM background to more formally introduce GAMs, and merge the redundant parts of Sections 1 and 2.\\n\\n3. **Numerical Experiments**: Consider 1) simulating datasets that follow a GAM data-generating process, 2) comparing against ``mgcv::gam`` in performance, 3) showing shape functions also for GAMs.\\n\\n4. **Application**: If the authors' approach allows the inclusion of censoring, consider modifying your application. Alternatively, consider a different and more challenging dataset.\\n\\n5. 
**Reproducibility**: Include the code for competitor models.\", \"questions\": [\"**Q1**: I would be very happy if the authors could address the weaknesses I have mentioned above\", \"**Q2**: Are there any insights of GAMformers that I might have missed?\", \"**Q3**: Have the authors thought about analyzing the smoothness of GAMformers and whether this could provide more interpretable functions compared to the jagged functions of NAMs / EBMs?\", \"**Q4**: Have the authors thought about extending the class of GAMs? I would assume that this model could also learn a combination of GAMs, trees, NODEs, etc., and still remain interpretable.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"response continued\", \"comment\": [\"[E1] Could you please clarify? Figure 2, Figure 4, Figure 7, Figure 10, Figure 11 and Figure 12 compare the results of GAMformer with a baseline SOTA GAM model, EBMs.\", \"[E2 /T4] This seems to contradict [C2], and it\\u2019s unclear on what basis this claim is made. Either mgcv::gam requires tuning to perform reasonably, or it does not. We did not perform tuning and it was outperformed to a statistically significant degree by logistic regression. If it requires tuning (\\u201cusing it correctly\\u201d as you say) to outperform logistic regression, then [C2] does not seem valid.\", \"[E3] The MIMIC datasets are censored with respect to some outcomes. This is common for many medical datasets and outcomes. 
Although some models trained on the MIMIC datasets are, as the reviewer suggests, time-to-event models, most are not and training \\u201cstandard\\u201d classification models on the MIMIC data as we do is very common.\", \"[Q2] Yes, please see above.\", \"[Q3] This would indeed be interesting, however, we are not familiar with established ways to measure function smoothness for GAM models.\", \"[Q4] This is somewhat confusing; it\\u2019s unclear to us how a more complex combination of models could still be interpretable. While one of the interesting aspects of this work is to show that it is possible to learn specific parametric forms with a meta-learning approach, and this approach could translate to other model families, it\\u2019s unclear how these models could still be interpretable. We specifically restricted the model class to GAMs for this very purpose.\"]}", "{\"summary\": \"The paper addresses the problem of supervised learning for tabular data and proposes a solution based on generalized additive models. A key feature is the use of an attention-based neural network (Transformer) to process the training data and provide a prior over the parameters of the non-linear predictive functions. The learning process involves splitting the training data into a training set and a holdout set. A predictive likelihood over the holdout set is used to learn the prior based on the training set. Experiments on synthetic data and OpenML datasets are conducted to compare the proposed solution with explainable boosted machines, demonstrating its ability to achieve comparable predictive performance\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The paper is clear and well-written **Clarity**\\n2. The paper addresses an important and relevant problem, specifically how to leverage deep learning to learn a prior for predictive tasks on tabular data. **Relevance**\\n3. 
The code is available, and a Jupyter Notebook is provided to demonstrate how the proposed model and explainable boosted machines generate the predictive functions **Code Availability**. However, no checks have been performed to verify the reproducibility of the experiments.\", \"weaknesses\": \"1. The novelty of the paper is limited and incremental. **Novelty**. The main ideas have already appeared in two previous works [1,2], and the primary difference seems to be the use of a different classifier/regressor. In other words, instead of considering Bayesian neural networks or structural causal models like in [1,2], the authors focus on generalised additive models. In essence, the work can be seen as an application of existing ideas within the context of generalised additive models.\\n2. There are several vague and overstated claims that are not properly supported. For instance, the abstract mentions that the proposed solution generates highly interpretable predictive functions. **Soundness** However, this is also true for the competitors, and it is unclear what the real advantage of the proposed solution is over existing generalised additive models and other interpretable models (such as XGBoost). In the experiments (e.g., lines 304-305), it is stated that the proposed solution outperforms explainable boosted machines (EBMs), but these claims seem exaggerated. Firstly, in the low-data regime (32 samples) with a larger number of features (64), the proposed solution clearly underperforms compared to EBMs by 14 points, suggesting a possible blind spot and indicating that sufficient data is required for the proposed solution to perform on par. Secondly, it is unclear whether the differences in the results are statistically significant, as no standard deviation is provided. Similarly, for Figures 2 and 3, it is claimed that the proposed solution clearly learns smoother predictive functions. 
However, this is subjective and not consistently true (only the 1st and 3rd plots in Figure 2 support the authors' claim).\\n3. The experimental analysis lacks a consistent comparison across datasets and tasks with other interpretable models. Additionally, the analysis focuses on the case where the ground truth classifier lies within the hypothesis space. What about the agnostic case? **Quality**\\n4. The experiments are conducted on small datasets, reflecting the poor scalability of the approach. While the idea of synthesizing data may be reasonable for small datasets, it may not be tractable or feasible for higher-dimensional data, given the potential for combinatorial explosion. Scalability and feasibility are currently overlooked, which is a significant limitation of the proposed solution. As a result, it is unclear why one should prefer this approach over existing interpretable models that are more scalable. **Quality/Significance**\\n\\n**References** \\\\\\n[1] M\\u00fcller, Hollmann, Arango, Grabocka, Hutter. \\u201cTransformers can do Bayesian Inference\\u201d. ICLR 2022 \\\\\\n[2] Hollmann, M\\u00fcller, Eggensperger, Hutter. \\u201cTabPFN: A Transformer that Solves Small Tabular Classification Problems in A Second\\u201d. ICLR 2023\", \"questions\": \"Please, refer to the main weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Authors' response\", \"comment\": \"Thank you for your detailed comments. From [S1], [S2] and the suggested improvements, it seems that there is some misunderstanding of our method. Our method is not slower to perform inference than traditional GAMs, and produces GAMs with a comparable number of parameters. 
It also provides an extremely high degree of introspection, while PFN type models are completely opaque and extremely slow to predict.\", \"addressing_your_individual_points\": [\"[C1] The parametric form of the model makes it intrinsically interpretable. Whether the shape functions allow for meaningful interpretation is rather subjective and hard to quantify. We show a wide variety of shape functions produced by GAMformer in the paper for subjective analysis, so it would be great if you could clarify point [E1]. The shape functions produced by GAMformer are similar to and share many of the properties of shape functions produced by EBMs and NAMs.\", \"[C2] We compare against mgcv, and given the result in Figure 6, it is clear that mgcv without tuning is not a competitive method and is statistically significantly worse than Logistic Regression\", \"[C3] See T3. We follow a standard evaluation procedure to show that our methodology provides comparable insights to existing approaches.\", \"[T1] We use the definition of Additive models of Hastie, Tibshirani and Friedman, who we consider authoritative in the area.\", \"[T2] The reviewer is correct that the data generating process for the meta learning data used to train the transformer model does not correspond to the data-generating process of GAMs. The goal of the synthetic meta learning data is to produce a very large number of realistic tabular learning problems without restricting them to be GAMs. The restriction to GAMs occurs in the architectural constraint applied to the neural net architectures that the transformer can learn from the data. 
We believe this is similar in spirit to most experimental evaluations of GAMs where real-world data is not restricted to having been generated by a GAM, but a GAM model is then learned from this data.\", \"[T3] Quantile binning is employed by EBMs and leading gradient boosted regression tree implementations such as XGBoost, LightGBM and CatBoost, which are current gold-standard algorithms in machine learning. The binning obviously changes the distribution of the features, but given the immense practical success of this method, it is unclear why this should be considered a disadvantage. In practice, all of these methods (including ours) choose a binning that is fine enough not to cause significant loss of resolution but coarse enough to make learning computationally more efficient. Tree-based algorithms such as XGBoost, LightGBM, CatBoost and EBMs are not sensitive to the choice of interval or quantile-based binning as long as the binning resolution is sufficient.\", \"[S1] There seems to be a confusion about training, meta-training and prediction phases. State-of-the-art GAM methods usually employ histogram-based lookup tables for prediction, which is the method that GAMformer uses. The inference speed of GAMformer is not slower than that of other GAMs, and this conclusion likely stems from a misunderstanding of the method. Similarly, the number of parameters in the predicted GAM models is bins * classes * features for multiclass classification and bins * features for binary classification, which is similar to the range of parameters in a fitted spline based model of 50-800 as you suggest (though the minimum in our implementation would be 64 for a single feature and two classes).\", \"The millions of parameters are not fit to any individual datasets, but learned during meta-learning, which corresponds to the development of the algorithm in classical ML. Indeed all the datasets studied in the table can be fit within seconds using GAMformer.
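As an aside, the quantile binning referred to in [T3] can be sketched in a few lines; the bin count of 64 echoes the minimum mentioned in [S1], while the synthetic skewed feature is invented for this illustration and is not from the discussion.

```python
import numpy as np

def quantile_bin(x, n_bins=64):
    """Map a continuous feature to integer bin indices via empirical quantiles.

    Bin edges sit at equally spaced quantiles of the observed values, so each
    bin receives roughly the same number of points regardless of how skewed
    the feature's marginal distribution is.
    """
    # Interior edges only: np.digitize then yields indices 0 .. n_bins - 1.
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    return np.digitize(x, edges)

rng = np.random.default_rng(0)
x = rng.exponential(size=10_000)            # heavily skewed feature
bins = quantile_bin(x)
counts = np.bincount(bins, minlength=64)    # roughly 10000/64 ~ 156 points per bin
```

A shape function over such a feature then needs only one value per bin, which is where the `bins * features` parameter count mentioned in [S1] comes from.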
The 25 days of GPU training should be compared to the time it took to implement mgcv, not the inference time.\", \"[S2] As mentioned above, it\\u2019s not immediately clear how to analyze TabPFN with SHAP. This is likely computationally infeasible as TabPFN does not produce a model; also, it would only result in a post-hoc explanation, not a glass-box model.\", \"[S3] This statement is somewhat confusing. The SCMs are the data generating process used for training TabPFN, and we indeed assume they cover GAM-type models as we re-use this prior. However, TabPFN does not recover the SCM, and so the predictions made by TabPFN are completely opaque. The benefit of making the additive model explicit is that the user has access to the model structure, which is not possible with TabPFN.\"]}", "{\"title\": \"Shared response to all reviewers\", \"comment\": \"We want to thank all the reviewers for their insightful comments. We want to address some broader comments before addressing individual reviews.\\n\\n### Novelty\\nWhile this work builds on the work of Mueller and Hollmann, it presents several novel ideas that are absent from that work. We are producing a parametric, interpretable model with efficient inference via in-context learning. None of these are true for TabPFN, which does not explicitly represent the prediction function. We are not, as reviewer 7tYk suggests, using a different classifier/regressor. The work in Hollman (TabPFN) produces no model, and learns to perform predictions via in-context learning with a transformer, which means that for each individual prediction, the transformer model has to be invoked, leading to slow predictions compared with more traditional models. This also means that existing post-hoc methods like SHAP, which reviewer DRNQ suggests [S1] as an alternative, do not readily apply to TabPFN, as no model is produced. 
A brute-force version of SHAP could be applied to TabPFN by fixing the training set and varying prediction points; however, this essentially means running the transformer on all probing points required by SHAP and is therefore extremely computationally intensive compared to our approach. Furthermore, SHAP provides only post-hoc explanations that can be hard to interpret when aggregated to the feature level. In this work we use transformers to produce a GAM model that is interpretable, is very efficient at making predictions, and which can be edited if necessary to make corrections to the model because of bias in the training data.\\nWe also innovate beyond the architecture of TabPFN by supporting an arbitrary number of features, and providing a model that is equivariant to the ordering of features.\\n\\n### Scalability\\nScalability of transformer-based architectures is a widely studied problem; solving it goes beyond the scope of this work. We are in the process of extending our model to more scalable architectures; however, any future development improving attention scalability (which is an incredibly active research field given the applications to LLMs) is likely to improve the scalability of our approach.\"}", "{\"comment\": \"I'm sorry to hear you don't think your concerns are addressed. How can we address them?\"}", "{\"title\": \"Authors' response\", \"comment\": \"Thank you for your comments and suggestions.\\n\\n### Novelty\\nAs mentioned in the shared response, we think there is substantial novelty in producing a **parametric**, **interpretable** model with **efficient inference** via in-context learning, which TabPFN does not.\\n\\n### Soundness\\nCould you please elaborate on the extent to which XGBoost is an interpretable model? Gradient boosting models are generally considered to be black-box models that at most allow post-hoc interpretation.
It is not the point of the paper to claim that we are outperforming EBMs across the board, and we are happy to adjust the phrasing. Rather, we want to claim that it is possible to create competitive additive models using in-context learning; this work is meant as a proof-of-concept of this idea, and we do not expect it to immediately replace existing solutions.\"}", "{\"summary\": \"The paper proposes an in-context learning approach for learning generalized additive models for tabular data building on prior work (PFN and TabPFN) for tabular classification.\\nThe training procedure executes on synthetic data by sampling a random causal graph and generating data from an initial random sample. The data is split into training and test datasets to simulate inference. \\nA transformer model applies attention across the data points and features and handles tabular data of varying sizes. A single forward pass of the transformer estimates the shape functions for the given in-context training data which are then applied to the test example. The shape functions themselves are represented as discrete functions which apply to discretized and binned features. The method is demonstrated experimentally on synthetic and real data including a mortality risk case study where the shape functions are used to interpret model predictions.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"The method appears to be a novel approach for learning generalized additive models.\\n\\nThe paper is well-written, ideas and goals are clearly stated, background work is acknowledged and limitations are addressed.\\n\\nExperiments are done on synthetic and real examples with an extensive public health case study interpreting the learned shape functions and their implications. 
\\n\\nThe paper discusses the limitations of the model which are 1) lack of accounting of higher-order interactions 2) lack of improvement over datasets larger than seen during training and 3) quadratic complexity of the transformer.\\n\\nThe authors also propose an extension to model higher-order effects by concatenating data and high-order effects.\", \"weaknesses\": \"The approach appears to be limited to discrete target values. Shape functions are learned as discretized functions over discretized features which could be limiting.\", \"questions\": \"Do you only consider discrete target variables in the experiments? Given that the features are binned and discretized, could the method be applied for regression with continuous variables?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"First, we would like to thank reviewer DRNQ for going the extra mile to understand our work and make suggestions about how to improve it. This kind of thorough reviewing is not so common these days, and we really appreciate it. Thank you.\\nWe also appreciate you clearly articulating your concerns.\\n\\nWe will try to provide empirical evaluation for recovering a GAM and details on the evaluation procedure for mgcv after the (US) holiday weekend.\\n\\nAlthough we did not discuss how to generate error bars with GAMformer, the method we would use is the same bootstrap method used in EBMs and NAMs. Specifically, we would form multiple bootstrap samples of the data, use GAMformer to generate a GAM for each bootstrap sample, and then return the mean and variance (or confidence interval) at each point on the learned shape function. The speed of GAMformer's forward pass makes this even more efficient than it is with methods such as EBMs or NAMs which need to iteratively refit a model for each bootstrap sample.
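The bootstrap procedure described above can be sketched roughly as follows; `fit_gam` and its `shape_function` method are hypothetical placeholders for whatever GAM fitter is used (a GAMformer forward pass, an EBM, etc.), not an API from the paper.

```python
import numpy as np

def bootstrap_shape_function(X, y, fit_gam, feature, grid, n_boot=50, seed=0):
    """Pointwise mean and std of one shape function across bootstrap refits.

    fit_gam(X, y) is assumed to return a model exposing
    shape_function(feature, grid), which evaluates f_feature on grid.
    """
    rng = np.random.default_rng(seed)
    n = len(y)
    curves = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)   # resample rows with replacement
        model = fit_gam(X[idx], y[idx])
        curves.append(model.shape_function(feature, grid))
    curves = np.stack(curves)
    return curves.mean(axis=0), curves.std(axis=0)
```

The per-point standard deviations (or quantiles of `curves`) give the error bars to draw around the learned shape function.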
We will add a short paragraph describing this method for generating confidence intervals to the final draft, as well as add confidence intervals to the shape functions in the figures.\\n\\nAs you suggest, GAMs formed by GAMformer have similar computational cost at prediction time as GAMs learned with other algorithms, so no win or loss there. And while the cost of the forward pass in GAMformer that generates a GAM is likely less than the cost of the iterative optimization required by algorithms such as EBMs or NAMs, this is traded off against a very large meta cost to train the GAMformer transformer in the 1st place. This tradeoff, however, is not as bad as it might seem. First, the GAMformer transformer model need only be trained once, and then it can be publicly served or distributed to efficiently generate many GAMs for many users and datasets. And we can already see above how this kind of efficiency could have utility for things like bootstrap analysis and exploratory modeling.\"}", "{\"title\": \"Thank you\", \"comment\": \"Thank you for the clarifications, but my concerns remain unaddressed. 
Hence, I keep my score.\"}", "{\"comment\": \"Unfortunately we were not able to obtain conclusive results for recovering GAM models from a GAM model with binomial distribution in time for the rebuttal.\\n\\nRegarding the training using `mgcv`, we used the following code for training and prediction:\\n```\\nlibrary(mgcv)   # gam\\nlibrary(nnet)   # multinom\\nlibrary(pROC)   # roc, auc, multiclass.roc\\n\\nformula_str <- paste(target_variable, \\\"~\\\", paste(predictor_variables, collapse = \\\" + \\\"))\\nformula <- as.formula(formula_str)\\nif (num_classes == 2) {\\n # Binary classification: use gam with binomial family\\n model <- gam(formula, data = train_data, family = binomial)\\n test_probs <- predict(model, newdata = test_data, type = \\\"response\\\")\\n positive_class <- levels(data[[target_variable]])[1]\\n binary_labels <- as.numeric(test_data[[target_variable]] == positive_class)\\n roc_curve <- roc(binary_labels, test_probs)\\n auc_values <- c(auc_values, auc(roc_curve))\\n \\n} else {\\n model <- multinom(formula, data = train_data)\\n test_probs <- predict(model, newdata = test_data, type = \\\"probs\\\")\\n multiclass_roc <- multiclass.roc(test_data[[target_variable]], test_probs)\\n auc_values <- c(auc_values, as.numeric(auc(multiclass_roc)))\\n}\\n```\"}"
] }
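For reference, the binary-branch AUC that `pROC::auc` computes in the R snippet above has a standard rank-based (Mann-Whitney U) equivalent; the sketch below is a generic Python illustration, not code from the discussion.

```python
import numpy as np

def roc_auc(labels, scores):
    """Binary ROC AUC via the rank-sum (Mann-Whitney U) formulation.

    Ties in `scores` are handled with average ranks, matching the
    usual definition of the AUC as P(score_pos > score_neg) + 0.5 * P(tie).
    """
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    for v in np.unique(scores):            # average ranks over tied scores
        tie = scores == v
        ranks[tie] = ranks[tie].mean()
    n_pos, n_neg = labels.sum(), (~labels).sum()
    u = ranks[labels].sum() - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)

print(roc_auc([1, 1, 0, 0], [0.9, 0.4, 0.6, 0.2]))  # → 0.75
```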
7ha61H73pg
Understanding Layer Significance in LLM Alignment
[ "Guangyuan SHI", "ZEXIN LU", "Xiaoyu DONG", "zhangwenlong", "Xuanyu Zhang", "Yujie Feng", "Xiao-Ming Wu" ]
Aligning large language models (LLMs) through fine-tuning is essential for tailoring them to specific applications. Therefore, understanding what LLMs learn during the alignment process is crucial. Recent studies suggest that alignment primarily adjusts a model's presentation style rather than its foundational knowledge, indicating that only certain components of the model are significantly impacted. To delve deeper into LLM alignment, we propose to identify which layers within LLMs are most critical to the alignment process, thereby uncovering how alignment influences model behavior at a granular level. We propose a novel approach to identify the important layers for LLM alignment (ILA). It involves learning a binary mask for each incremental weight matrix in the LoRA algorithm, indicating the significance of each layer. ILA consistently identifies important layers across various alignment datasets, with nearly 90% overlap even with substantial dataset differences, highlighting fundamental patterns in LLM alignment. Experimental results indicate that freezing non-essential layers improves overall model performance, while selectively tuning the most critical layers significantly enhances fine-tuning efficiency with minimal performance loss.
[ "LLMs", "Alignment", "Important Layers" ]
https://openreview.net/pdf?id=7ha61H73pg
https://openreview.net/forum?id=7ha61H73pg
ICLR.cc/2025/Conference
2025
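The abstract above describes learning a binary mask for each incremental LoRA weight matrix so that "unimportant" layers can be frozen. A toy numpy sketch of how such a mask gates per-layer updates (the shapes, the shared base weight, and the tanh nonlinearity are all invented for illustration; this is not the paper's ILA algorithm):

```python
import numpy as np

def masked_lora_forward(x, w0, lora_pairs, gamma):
    """Toy forward pass: frozen base weight plus binary-masked LoRA updates.

    w0: (d, d) frozen base weight shared across the toy layers;
    lora_pairs: per-layer (B, A) low-rank factors, B: (d, r), A: (r, d);
    gamma: one {0, 1} entry per layer -- 0 freezes that layer's update.
    """
    h = x
    for (b, a), g in zip(lora_pairs, gamma):
        h = np.tanh(h @ (w0 + g * (b @ a)))  # g gates the incremental weight
    return h

rng = np.random.default_rng(0)
d, r = 4, 2
w0 = rng.normal(scale=0.3, size=(d, d))
pairs = [(rng.normal(size=(d, r)), rng.normal(size=(r, d))) for _ in range(3)]
x = rng.normal(size=(1, d))

frozen = masked_lora_forward(x, w0, pairs, gamma=[0, 0, 0])  # base model only
tuned = masked_lora_forward(x, w0, pairs, gamma=[1, 1, 1])   # all updates active
```

Comparing outputs under different masks is the kind of per-layer ablation that a learned importance ranking makes cheap.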
{ "note_id": [ "w4LP0pOw1L", "w2Ny4BwVLj", "uEoxlEMkfM", "tJKC6nBMH6", "sI4CFUf2v9", "rI1AunRSwd", "iNFtl3Ixvs", "hxzUKFGiPq", "bqXwVhxQLv", "X6mhckNypK", "VIYFPsp3CN", "RPR2R8IQzU", "PM2QyMusyd", "PI2kwM1GbK", "Lkbgp54iHz", "Gxa4UjxQdO", "Cmg0eg2Or9", "9kBKri7J5S", "6AXDnpjrU4", "5HrIRENE7J", "4phGFktPMB", "2tOegoTWv7", "2JsrW4q8qT", "23AZ1vQdXC" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1730654192271, 1732467151800, 1732876197671, 1730300888328, 1730572618950, 1732871686859, 1732719346480, 1730952874689, 1732510135346, 1732719128554, 1734338198470, 1732868587597, 1733226563363, 1732868654801, 1732719385748, 1732509310320, 1732876235902, 1732871613827, 1733065998069, 1732866637480, 1732868694088, 1730610642450, 1732871727882, 1732876155336 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5252/Reviewer_TMGU" ], [ "ICLR.cc/2025/Conference/Submission5252/Reviewer_epzj" ], [ "ICLR.cc/2025/Conference/Submission5252/Authors" ], [ "ICLR.cc/2025/Conference/Submission5252/Reviewer_epzj" ], [ "ICLR.cc/2025/Conference/Submission5252/Reviewer_qJ17" ], [ "ICLR.cc/2025/Conference/Submission5252/Authors" ], [ "ICLR.cc/2025/Conference/Submission5252/Authors" ], [ "ICLR.cc/2025/Conference/Submission5252/Reviewer_UHz2" ], [ "ICLR.cc/2025/Conference/Submission5252/Authors" ], [ "ICLR.cc/2025/Conference/Submission5252/Authors" ], [ "ICLR.cc/2025/Conference/Submission5252/Authors" ], [ "ICLR.cc/2025/Conference/Submission5252/Authors" ], [ "ICLR.cc/2025/Conference/Submission5252/Authors" ], [ 
"ICLR.cc/2025/Conference/Submission5252/Authors" ], [ "ICLR.cc/2025/Conference/Submission5252/Authors" ], [ "ICLR.cc/2025/Conference/Submission5252/Authors" ], [ "ICLR.cc/2025/Conference/Submission5252/Authors" ], [ "ICLR.cc/2025/Conference/Submission5252/Authors" ], [ "ICLR.cc/2025/Conference/Submission5252/Reviewer_qJ17" ], [ "ICLR.cc/2025/Conference/Submission5252/Reviewer_epzj" ], [ "ICLR.cc/2025/Conference/Submission5252/Authors" ], [ "ICLR.cc/2025/Conference/Submission5252/Reviewer_PkBY" ], [ "ICLR.cc/2025/Conference/Submission5252/Authors" ], [ "ICLR.cc/2025/Conference/Submission5252/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The paper proposes a method for masking (pruning) layers of a pretrained LLM before fine-tuning (aligning) them to target tasks. The method appears to work by iteratively switching between optimizing the loss until it becomes stable and searching for a set of layers to mask that still sufficiently minimizes the loss, with the latter framed as a constrained optimization problem and adapted to LoRA for efficiency. Experiments with three open-source LLMs and four datasets suggest that layer importance ranking is consistent across datasets and that freezing (masking) unimportant layers may increase performance.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"The work describes a potentially useful method\"], \"weaknesses\": \"1. Lack of clarity: the method is not well explained, and important technical details are missing\\n2. No comparison to similar methods\\n3. Potentially unsupported inferences from experimental results\\n4. Lack of mathematical rigor\\n\\nI elaborate on these points below.\\n\\n# Lack of clarity\\n\\nI struggled to understand how the method works. Terms such as \\\"important,\\\" \\\"unimportant,\\\" \\\"significant,\\\" and \\\"insignificant\\\" layers are not defined. 
Algorithm 1 is not explained nor connected to the formulas in Section 2 (e.g., what is K?). Additionally, I don\\u2019t understand how Eq. (7) is a reparametrization of Eq. (3), as there is no constraint on \\u2223\\u2223s\\u2223\\u2223 (i.e., Eq. (3) is a constrained optimization problem, while Eq. (7) is not). Consequently, it\\u2019s unclear how the method functions (where is the incentive to minimize the number of masked layers?). I also didn\\u2019t understand how Theorem 2.1 relates to the method and algorithm. What is L_\\u221e\\u200b in Eq. (8), and what is R in Eq. (9)?\\n\\nHow was fine-tuning performed? Was this instruction tuning, and on which portions of the dataset was it carried out? Are the results shown for the test portion? For example, for MMLU, only the test set is provided, so how was the split done?\\n\\nOverall, I failed to grasp how the method actually works. Do you fix the number of layers in advance (\\\"number of insignificant layers K\\\") and then rank and select the top-ranked layers? If so, this raises the question: why not initially select a smaller number of layers in the optimization algorithm and dispense with ranking altogether? I presume there are correlations between layers, and the method seems to be essentially selecting a subset of layers when identifying the \\\"important\\\" ones.\\n\\n# No comparison to similar methods\\n\\nThis work appears to be directly related to layer pruning (e.g., [1], [2], [3], and [4], to mention a few). The proposed method should be compared to existing methods, both conceptually and empirically, in terms of performance.\\n\\n- [1] Lie et al. 2024. Accelerating Inference in Large Language Models with a Unified Layer Skipping Strategy. https://arxiv.org/abs/2404.06954\\n- [2] Gromov et al 2024. The Unreasonable Ineffectiveness of the Deeper Layers. https://arxiv.org/abs/2403.17887\\n- [3] Chen et al 2024. Compressing Large Language Models by Streamlining the Unimportant Layer. 
https://arxiv.org/abs/2403.19135\\n- [4] van der Ouderaa, T. F., Nagel, M., Van Baalen, M., Asano, Y. M., & Blankevoort, T. (2023). The llm surgeon. _arXiv preprint arXiv:2312.17244_.\\n\\n# Potentially unsupported inferences\\n\\nA Jaccard similarity of importance layers (shown in Figure 1) leads the authors to conclude that \\\"important layers vary between different network architectures\\\" while \\\"there is a significant overlap (up to 90%) in the important layers identified by ILA across different alignment datasets.\\\" Visually, however, there also appear to be similarities across architectures for a fixed dataset, but this aspect hasn't been quantified. A claim of inter-architecture/intra-dataset similarity and intra-architecture dissimilarity would ideally be supported by statistical hypothesis tests (though demonstrating this statistically would require more models and datasets). In the absence of such evidence, I urge caution.\\n\\nSimilarly, results in Tables 3 and 4 are not accompanied by statistical tests of difference. If the authors wish to claim that the proposed method improves performance, I suggest running experiments with different seeds and conducting statistical tests for the significance of score differences between configurations with and without ILA, as the numerical differences are usually small.\\n\\n# Lack of mathematical rigor\\n\\nThe method description in section 2 could benefit from some mathematical rigor. \\n\\n- thetas and the binary mask: better defined as a sequence than a set (also for component-wise multiplication to be defined)\\n- eq (2) refers to the loss function as a two-argument function (which makes sense), but earlier this is not how the loss function is defined\\n- eq 8, 9: theta should be a vector (boldface symbol)\\n- eq 8, 9: one vertical bar missing\\n- assumption 2.2: epsilon-stable has been defined for a model wrt its loss, not for parameters directly. 
Is this now extending the definition of epsilon-stable to parameter vectors? What is the relation between assumption 2.2 and definition 1?\\n- Theorem 2.1: Theorem assumption 2.1 talks about the Lipschitz continuity of the loss function (not across iterations) -- it's unclear to me how this relates to stability across iterations. Assumption 2.2 is about stability across iterations, but I don't see how \\\"a sufficiently small epsilon\\\" makes theta_T stable by eq (10). If anything, a larger epsilon would make it easier to satisfy the inequality.\\n- proof: lines 16->17: where's the loss function gone?\\n- the algorithm could perhaps be more clearly formalized with a while loop\\n- line 484: |\\u03b3t| = 225. Is this denoting the norm of the vector? Do you mean to say that there are that many layers in total? But since \\u03b3t is a binary vector, we'll have ||\\u03b3t||<=225\", \"questions\": [\"What is the overhead of running this method (running time, computational complexity) in comparison to full fine-tuning?\", \"What is the stability of the algorithm for optimizing layer importance scores with respect to the initial scores?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Authors,\\n\\nAlthough the discussion period is drawing to a close, I am still looking forward to further discussions with you.\"}", "{\"title\": \"Response to Reviewer PkBY (Part 2/3)\", \"comment\": \"**W2: I believe that more baselines could be added for comparison, such as HFT [1], LISA [2], and GMT [3]. These methods also tune only a subset of model parameters, and I think including these baselines would make the paper more convincing.**\\n\\nThank you for your valuable suggestion. We agree that including more baselines could strengthen the paper.
However, we would like to clarify that the primary contribution of our work is to derive a ranking of layer importances rather than proposing a new PEFT (Parameter-Efficient Fine-Tuning) algorithm. **Our focus is on understanding the relative importance of different layers during alignment fine-tuning, and we believe this insight can help inform and enhance the development of future PEFT methods.**\\n\\nWhile methods like HFT, LISA, and GMT do indeed target subsets of model parameters, **they are primarily focused on specific algorithmic approaches rather than on understanding layer significance per se. Our goal is to provide foundational insights that can help guide the design of more effective PEFT algorithms in the future, rather than directly developing a new method.**\\n\\n\\n**W3: The focus of this paper is on alignment, and it has also achieved performance improvements. However, I am curious whether the methods presented in this paper exhibit consistent performance in other areas, such as mathematical reasoning and code generation. I believe the authors could further discuss whether the ILA method has general applicability.**\\n\\nThank you for your insightful comment. We agree that exploring the general applicability of the ILA method beyond alignment tasks is an important direction for future work.\\n\\nWhile the current paper focuses specifically on **alignment fine-tuning**, we do believe that the method could have broader applicability, including in tasks like mathematical reasoning and code generation. This is because **alignment itself is inherently a multi-task problem, and the datasets used for alignment typically involve a wide range of tasks, including reasoning and code generation**. 
As a result, the **improvements in alignment** may reflect performance gains in these areas, even though we did not explicitly evaluate them in isolation in this study.\\n\\nOur **primary goal** in this paper was to provide a **foundational understanding of layer importance ranking during alignment fine-tuning**, which not only deepens our **understanding of alignment** itself but also has **the potential to advance PEFT algorithms**. By identifying which layers are most important for alignment, our findings could inform and guide future research on parameter-efficient fine-tuning techniques, such as LoRA, BitFit, and others, helping to refine and improve their designs. We believe this foundational work is a crucial step toward enhancing PEFT approaches.\\n\\nThe generalizability of ILA across tasks like reasoning and generation is indeed an exciting direction, and we plan to explore this in future work. We will clarify this point in the revised manuscript, noting that while our current experiments focus on alignment, the insights gained may have broader applicability and can potentially inspire further advancements in PEFT methodologies.\\n\\n**W4: Based on the results, it seems that they still correspond to Llama 2 7B, implying that the authors did not conduct experiments with Llama 3.1 8B, yet included this model in the baselines.**\\n\\nThank you for highlighting the concern regarding the use of Llama 2 7B and Llama 3.1 8B in our experiments and baselines. We appreciate the opportunity to provide clarification and justification.\\n\\n1. **Model Similarities:** \\n * While Llama 3.1 8B is a newer version, its architectural structure is fundamentally similar to Llama 2 7B. The primary differences lie in the tokenizer, training data volume, and data quality. 
These distinctions influence the model's performance but do not significantly alter its architectural behavior, especially concerning layer importance during alignment, which is the focus of this study.\\n * Our experiments on Llama 2 7B already demonstrate consistent and robust findings regarding layer importance during alignment. Given the structural similarities, these findings are likely to generalize to Llama 3.1 8B, which we included in the baselines to provide broader context.\\n2. **Why Llama 2 7B is Sufficient for This Study:**\\n * The goal of our work is to analyze layer importance across different architectures and datasets during alignment, rather than to evaluate absolute performance differences between model versions. Since Llama 2 7B and Llama 3.1 8B share nearly identical architectures, conducting experiments on both would yield highly redundant results, with minimal additional insights. The observed trends in layer importance are consistent across other experiments and random seeds, further supporting the generalizability of our conclusions.\"}", "{\"summary\": \"This paper proposes an ILA (identify the important layers for LLM alignment) method to identify the most relevant layers for alignment tasks, then improves performance by freezing irrelevant layers. The authors focus on Alignment tasks, including LIMA, Alpaca, and no robots.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"On one hand, the authors provide detailed motivation in the introduction explaining why freezing different layers during training is necessary, which can help reduce catastrophic forgetting to some extent. On the other hand, the paper offers comprehensive empirical evidence showing their proposed method achieves generally improved performance across LoRA, AdaLoRA, QLoRA and full fine-tuning.\", \"weaknesses\": \"Indeed, this paper's novelty is limited.
The core motivation and main method of freezing certain layers to avoid overfitting was proposed three years ago in paper [1], which even provided finer-grained control over the degree of parameter freezing. In my view, the authors merely validated this approach on Alignment tasks (just one type of fine-tuning task). While I acknowledge the technical implementations differ, given the similar research motivations and the limited application scope of this method, I believe there's room for improvement.\\n\\nThe authors mainly study the impact of controlling layer freezing during fine-tuning on language models. However, since most experiments and methods are LoRA-based, I believe the discussion should focus more on full parameter fine-tuning instead.\\n\\n---\\n\\n[1] Raise a Child in Large Language Model: Towards Effective and Generalizable Fine-tuning\", \"questions\": \"See in weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces a method for identifying the most significant layers in language models by solving a minimization problem across the layers. The authors apply their technique to the LLaMA-2-7B and Mistral-7B models, experimenting with both LoRA and full parameter tuning approaches. By identifying the key layers, they then focus fine-tuning efforts solely on these layers, optimizing model efficiency and performance.\", \"soundness\": \"2\", \"presentation\": \"4\", \"contribution\": \"2\", \"strengths\": \"1) The authors evaluate their method on several modern benchmarks, including MT-Bench\\n2) Interesting results from the perspective of layer significance. \\n3) The observed transferability across datasets is a strong indicator of the method\\u2019s robustness. I particularly appreciated the inclusion of out-of-distribution testing within the ablation study, along with the use of Jaccard similarities for comparison. 
\\n4) The paper benefits from a well-organized structure and visually appealing presentation.\", \"weaknesses\": \"Major Weaknesses:\\n1) Lack of technical details on optimal layer finding. \\n\\n1.1) This process can require considerable training time before stabilization occurs, especially for larger models. Moreover, it\\u2019s difficult to guarantee that stabilization will ever fully occur, given the non-convex nature of the optimization problem.\\n\\n1.2) Additionally, while the paper reports efficiency improvements, the measurements don\\u2019t account for the compute required for pre-training and layer selection. As a result, the overall computational cost could potentially exceed that of regular LoRA fine-tuning. It would be nice to account for that in the future.\\n\\n1.3) Please clarify how stability is checked. If Monte Carlo estimation is used, specify the sufficient number of samples and the reasoning behind this choice.\\n\\n\\n2) The effect is truly visible, but it may be just regularization, which would undermine all the strengths of the paper. For example, it's stated that this approach outperforms any type of regularization, yet no regularization baselines are provided. For instance, in the original LIMA paper, they apply progressive dropout to enhance robustness. You mention using dropout, but perhaps increasing it or raising weight decay could also be beneficial? \\n\\n2.1) Another detail that caught my attention is the modest performance gain. This is acceptable for a parameter-saving method, but with the current experiments, it\\u2019s hard to determine if the gain is due to the method itself or simply an effect of regularization. In cases of such small improvements, it would be beneficial to include runs with multiple seeds and average the results.
According to several studies, LoRA can be unstable, with results varying based on the seed and checkpoint used.\\n\\n3) The primary weakness here is the lack of novelty, as the main technique relies on identifying the most significant layers through gradient-based methods.\\n\\n3.1) This setup aligns with challenges commonly seen in compression studies, where quantization addresses sensitive layers and columns (https://arxiv.org/abs/2408.15300). However, no inspiration, metrics, or methods from these studies were referenced, cited, or discussed.\\n\\n3.2) Pruning, rather than quantization, seems most suitable here as it directly addresses this setting (see https://arxiv.org/abs/2310.06694, https://arxiv.org/abs/2204.00408). For instance, methods like Sheared LLaMA and CoFi learn binary masks to select not only layers but also attention heads and even individual neurons for fine-tuning. This makes the approach used here far from novel.\\n\\n3.3) Moreover, what is the motivation behind tuning specific layers? Why not tune the precise matrices or even the weights within those matrices, as outlined in studies above? This approach offers a more general formulation, with layer tuning being a subset of this broader framework.\\n\\nThe paper would be of greater practical interest if it either demonstrated an easy and highly effective application with clear comparisons that outperform previous methods or provided deep insights into the internal structure. 
Currently, it feels like it\\u2019s trying to balance between these options without fully achieving either, especially given that the core idea lacks novelty.\", \"minor_weaknesses\": \"1) I have doubts about the preliminary experiments, as FF layers simply add more parameters for training, making the settings unequal.\\n\\n2) The terms \\\"IFILA\\\" and \\\"ILA\\\" are used interchangeably in the paper, which can lead to confusion\", \"questions\": \"1) This is the first time I've come across someone referring to instruction tuning as \\\"alignment.\\\" Perhaps I missed this usage, but I'm more familiar with \\\"alignment\\\" in the context of RLHF or aligning models with human preferences.\\n\\n2) Drawing parallels with interpretability studies would be beneficial, as they provide extensive insights into the importance of layers. For instance, studies like this one (https://arxiv.org/pdf/2309.04827) and others have already provided insights on layer significance, which could strengthen the paper\\u2019s foundation and contextualize its approach.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Dear Reviewer TMGU (Part 2/3)\", \"comment\": \"**W2: No comparison to similar methods.**\\n\\n> This work appears to be directly related to layer pruning (e.g., [1], [2], [3], and [4], to mention a few). The proposed method should be compared to existing methods, both conceptually and empirically, in terms of performance.\\n[1] Lie et al. 2024. Accelerating Inference in Large Language Models with a Unified Layer Skipping Strategy. https://arxiv.org/abs/2404.06954\\n[2] Gromov et al 2024. The Unreasonable Ineffectiveness of the Deeper Layers. https://arxiv.org/abs/2403.17887\\n[3] Chen et al 2024. Compressing Large Language Models by Streamlining the Unimportant Layer. https://arxiv.org/abs/2403.19135\\n[4] van der Ouderaa, T. F., Nagel, M., Van Baalen, M., Asano, Y. 
M., & Blankevoort, T. (2023). The llm surgeon. arXiv preprint arXiv:2312.17244.\\n\\nThank you for raising this point about comparisons with related methods such as layer pruning and compression. While we acknowledge the conceptual similarity of identifying \\\"important\\\" layers, **the fundamental objectives and problems addressed by our method differ significantly from those of pruning and compression methods**. Below, we explain why direct comparisons are not applicable:\\n\\n* Layer Pruning/Compression Methods: The works cited ([1], [2], [3], [4]) aim to optimize model efficiency by skipping, removing, or compressing layers, typically targeting faster inference or smaller memory footprints. These methods are designed for model compression during deployment.\\n* Our Method: Our approach focuses on understanding the significance of layers during the alignment process (e.g., instruction tuning). The goal is to rank layers by their importance to alignment performance, enabling flexible adjustments to fine-tuning strategies based on computational constraints or desired performance levels.\\n\\nSince the objectives are fundamentally different, direct performance comparisons are not meaningful. For example, pruning methods evaluate metrics like inference latency or parameter reductions, which do not align with our goal of understanding and optimizing alignment fine-tuning.\\n\\n**W3: Potentially unsupported inferences from experimental results.**\\n>A Jaccard similarity of importance layers (shown in Figure 1) leads the authors to conclude that \\\"important layers vary between different network architectures\\\" while \\\"there is a significant overlap (up to 90%) in the important layers identified by ILA across different alignment datasets.\\\" Visually, however, there also appear to be similarities across architectures for a fixed dataset, but this aspect hasn't been quantified. 
A claim of inter-architecture/intra-dataset similarity and intra-architecture dissimilarity would ideally be supported by statistical hypothesis tests (though demonstrating this statistically would require more models and datasets). In the absence of such evidence, I urge caution.\\n\\nThank you for your valuable comment regarding the Jaccard similarity analysis and the need to quantify inter-architecture/intra-dataset and intra-architecture similarities. We appreciate your suggestion to use statistical methods to further validate these findings.\\n\\nTo address your concern more rigorously, we performed additional analysis and obtained the following results:\\n\\n1. **Quantification of Similarities:**\\n * For intra-architecture comparisons on different datasets, the Jaccard similarity is approximately 0.9, confirming that the important layers identified by ILA remain highly consistent across alignment datasets for the same architecture.\\n * For inter-architecture comparisons on the same dataset, the Jaccard similarity is significantly lower, approximately 0.67, demonstrating clear differences in the important layers identified across architectures, even when aligned on the same dataset. **These results align with our original claim that important layers vary more across architectures than across datasets for a fixed architecture.**\\n2. **Consistency Across Random Seeds:** To further ensure the reliability of our observations, we repeated the experiments using three different random seeds. The results remained consistent, with negligible variance in the computed Jaccard similarities. This strengthens our conclusion that the observed trends are robust and not an artifact of random initialization or stochastic optimization processes.\"}", "{\"title\": \"Response to Reviewer UHz2 (Part 2/3)\", \"comment\": \"**Q1: What does \\\"IFILA\\\" represent in Table 3?**\\n\\nThank you for pointing this out! 
\\\"IFILA\\\" in Table 3 is a typo and should actually be \\\"ILA,\\\" referring to our proposed method. We will correct this error in the revised manuscript to avoid any confusion. \\n\\n**Q2: Are the descriptions at L303-304 wrong for Table 3? It seems that there is no $\\\\gamma$ in Table 3.**\\n\\nWe appreciate the reviewer highlighting this concern. The description at Lines 303\\u2013304 refers to:\\n\\n> The consistent identification of important layers despite the optimization of $\\\\gamma$ with varying random seeds.\\n\\nThis statement is indeed accurate and aligns with the experimental results. The $\\\\gamma$ values are used internally in our ILA framework to rank layer importance, which is reflected in the results presented in Table 3. While Table 3 does not explicitly display $\\\\gamma$, it implicitly validates the stability of $\\\\gamma$ by demonstrating consistent Jaccard similarity across random seeds.\\n\\n**Q3: According to L253, three distinct datasets are used to evaluate the Language Understanding Ability aspect. Would you please clarify why there are only two datasets for this aspect in the paper?**\\n\\nWe appreciate the reviewer catching this inconsistency. The mention of three distinct datasets at Line 253 is a typo. In the paper, we evaluated the Language Understanding Ability aspect using two datasets: MMLU and Hellaswag, as correctly described and presented in the experimental results.\\n\\n**Q4: According to L323, approximately 25% of the unimportant layers are removed. This statement should be related to Table 3 and Table 4. So we can understand that 75% of important layers are retained for the results in Table 3 and Table 4, which show the main results of the proposed method. However, Table 7 does not list the memory usage of the proposed method with 75% important layers. 
Does this show that the proposed method needs more memory to outperform LoRA?**\\n\\nWe appreciate the reviewer raising this point and would like to clarify the relationship between memory usage and layer selection in our method.\\n\\n1. **Memory Usage of ILA vs. LoRA:** **It is inherently impossible for our proposed method to consume more memory than standard LoRA.** By design, ILA reduces the memory footprint because it fine-tunes only a subset of layers deemed important (e.g., 75% in this case), whereas LoRA modifies all target layers. In essence, our method applies a selective, reduced version of LoRA, and thus, it cannot exceed the resource demands of full LoRA fine-tuning.\\n2. **Connection to Tables 3, 4, and 7:** Tables 3 and 4 report performance results using the 75% most important layers retained and show that our method outperforms baseline approaches in this configuration. **Table 7 reports memory usage for configurations where only 30% of important layers are fine-tuned, emphasizing the efficiency of our method under aggressive layer reduction.** While memory usage for the 75% configuration is not explicitly listed in Table 7, it is guaranteed to remain lower than standard LoRA since fewer layers are being fine-tuned.\\n3. **Completeness of Results:** For completeness and to provide additional evidence, the GPU memory usage measurements for the 75% important layer configuration are presented as follows:\\n\\n| | LoRA(100%) | LoRA(75%) |\\n|:----------------------:|:----------:|:---------:|\\n| GPU Memory Usage (MiB) | 32988 | 30760 |\"}
The unimportant layers identified by the proposed ILA method will be frozen during the last part of fine-tuning.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. The proposed method can offer improvements for fine-tuning LLMs while saving memory usage.\\n2. Extensive experiments are provided in this paper.\", \"weaknesses\": \"1. **Limited improvements**. According to the results presented, ILA does not bring much improvement. The performance of the models using ILA and the ones without ILA is close. For example, in Table 4, there is only a 0.12% increase from \\\"Full Finetune\\\" to \\\"Full Finetune w/ILA\\\" with the LLAMA 2-7B model.\\n2. **Potential overlap with existing PEFT methods.** The authors may need to clarify why we need an additional PEFT method and what value freezing unimportant layers can bring compared with the existing PEFT methods.\", \"questions\": \"1. What does \\\"IFILA\\\" represent in Table 3?\\n2. Are the descriptions at L303-304 wrong for Table 3? It seems that there is no $\\\\gamma$ in Table 3.\\n3. According to L253, **three** distinct datasets are used to evaluate the Language Understanding Ability aspect. Would you please clarify why there are only two datasets for this aspect in the paper?\\n4. According to L323, approximately 25% of the unimportant layers are removed. This statement should be related to Table 3 and Table 4. So we can understand that 75% of important layers are retained for the results in Table 3 and Table 4, which show the main results of the proposed method. However, Table 7 does not list the memory usage of the proposed method with 75% important layers. Does this show that the proposed method needs more memory to outperform LoRA?\\n5. According to Table 3 and Table 11, the proposed method often failed to enhance performance on the Hellaswag dataset. Are there potential reasons for this discrepancy?\\n6. Why is AdaLoRA w/ILA not included in the comparison?
Was there a specific reason for the omission?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you\", \"comment\": \"Thank you for your valuable feedback and for expressing your interest in continuing our discussions. We have addressed your concerns below and hope our clarifications would enhance your evaluation of our work. We appreciate your engagement and are more than willing to explore any further topics or questions you may have.\"}", "{\"title\": \"Response to Reviewer UHz2 (Part 1/3)\", \"comment\": \"We sincerely thank the reviewer for their thorough evaluation and valuable feedback on our work.\\n\\n**W1: Limited improvements.**\\n\\nWe appreciate the reviewer\u2019s observation regarding the performance differences between models using ILA and those without it. While the numerical improvements may appear modest in some cases, we would like to emphasize the following key points that highlight the value and impact of ILA:\\n\\n1. **Consistency Across Datasets and Metrics:** As shown in the experimental results, **ILA consistently improves performance across various datasets and evaluation metrics** (e.g., MMLU, Hellaswag, MT-Bench). Even **modest gains in certain benchmarks are significant** given the competitive baselines and the inherent difficulty of the tasks, especially in conversational ability. This consistency demonstrates the robustness of our approach.\\n2. **Efficiency Gains:** ILA offers efficiency benefits by identifying and freezing less important layers, reducing computational cost and memory requirements. For example, as shown in Table 6 and Table 7, fine-tuning only 30% of important layers with ILA achieves nearly identical performance to fine-tuning all layers, leading to significant resource savings.\\n3. **Improved Stability and Generalizability:** ILA enhances model stability and generalizability.
Cross-dataset experiments (Table 2) show that ILA's importance rankings are stable, allowing effective reuse of layer selection.\\n4. **Alignment with Research Trends:** Recent studies such as **URIAL [1]** suggest that alignment tasks predominantly involve stylistic shifts rather than drastic performance changes in language understanding. In this context, even small improvements signify meaningful progress, as our method aligns with and enhances the nuanced adjustments required for alignment.\\n\\nIn summary, while the performance gains may appear incremental in isolation, they are achieved alongside significant improvements in efficiency, stability, and generalizability, which collectively demonstrate the utility of ILA for practical fine-tuning and alignment tasks.\\n\\n\\n[1] Lin, B. Y., Ravichander, A., Lu, X., Dziri, N., Sclar, M., Chandu, K., ... & Choi, Y. (2023, December). **The Unlocking Spell on Base LLMs: Rethinking Alignment via In-Context Learning.** In The Twelfth International Conference on Learning Representations.\\n\\n**W2: Potential overlap with existing PEFT methods.**\\n\\nWe appreciate the reviewer\u2019s point regarding the overlap with existing Parameter-Efficient Fine-Tuning (PEFT) methods and the need to clarify the unique contributions of ILA. Below, we outline the distinctive value that ILA brings compared to existing PEFT methods:\\n\\n1. **Focus on Layer Importance:** Unlike conventional PEFT methods such as LoRA, AdaLoRA, and QLoRA, which focus primarily on optimizing parameter efficiency by modifying matrix ranks or quantization, ILA **takes a complementary approach by quantifying the importance of individual layers**. This unique perspective enables us to selectively freeze unimportant layers, thereby improving computational efficiency while maintaining or even enhancing performance.\\n2.
**Complementary, Not Redundant:** **ILA is not designed to replace existing PEFT methods but rather to complement them.** As demonstrated in our experiments, ILA can be integrated with LoRA or QLoRA to achieve better efficiency and performance compared to using these methods alone.\\n3. **Practical Benefits in Fine-Tuning:**\\n * **Reduced Resource Requirements:** As shown in Table 7, freezing unimportant layers identified by ILA significantly reduces GPU memory usage (e.g., a 22.4% reduction with LoRA and 30.3% with QLoRA) without sacrificing performance. This advantage is crucial for deploying large models in resource-constrained environments.\\n * **Cross-Dataset Generalizability:** ILA offers a reusable importance ranking for a given model architecture across multiple datasets. This unique feature eliminates the need for dataset-specific adjustments, further distinguishing ILA from existing PEFT methods that generally optimize for a single task or dataset.\\n4. **Theoretical Insight into Alignment:** ILA provides a deeper understanding of alignment's influence on model behavior, aligning with the broader research goal of demystifying the fine-tuning process.\\n\\n**Clarified Value Proposition:** **ILA is not merely an additional PEFT method but a distinct approach that focuses on layer-level significance in alignment.** It enhances existing methods by leveraging layer importance, thus offering practical efficiency gains, cross-dataset generalizability, and theoretical insights into the alignment process.\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"title\": \"Response to Reviewer qJ17 (Part 1/3)\", \"comment\": \"We sincerely thank the reviewer for the thorough evaluation and valuable feedback on our work.\\n\\n**W1:Lack of technical details on optimal layers finding.**\\n\\n>This process can require considerable training time before stabilization occurs, 
especially for larger models. Moreover, it\\u2019s difficult to guarantee that stabilization will ever fully occur, given the non-convex nature of the optimization problem.\\n\\n1. **Training Time Before Stabilization:** While larger models may require more iterations to stabilize, our experiments indicate that full stabilization (i.e., \\u03f5-stable) is not strictly necessary for accurate layer importance identification. As shown in our results (e.g., Table 4), even early-stage stabilization (25\\u201350% of training milestones) provides sufficient information to rank layers effectively. **This reduces the computational burden significantly.**\\n2. **Non-Convex Optimization Challenge:** It is true that the optimization problem is non-convex, which makes global stabilization difficult to guarantee. However, the local stabilization observed in our experiments, measured by the consistent convergence of the layer importance rankings across different runs and datasets (evidenced by high Jaccard similarities), demonstrates the practical reliability of our approach despite the theoretical challenges.\\n\\n>Additionally, while the paper reports efficiency improvements, the measurements don\\u2019t account for the compute required for pre-training and layer selection. As a result, the overall computational cost could potentially exceed that of regular LoRA fine-tuning.\\n\\nThank you for your feedback. We would like to emphasize that the computational cost of our approach is manageable and justified, given the following key points:\\n\\n1. **Reusability Across Datasets:** The identified important layers are highly consistent across datasets, as demonstrated by the high Jaccard similarity in our experiments. This means the layer importance ranking computed on one dataset (e.g., No Robots) can be directly reused for other datasets (e.g., LIMA, Alpaca-GPT4) without requiring additional computation.\\n2. 
**Efficient Layer Selection:** As detailed in the Ablation Study (Observation 3), the layer selection process is computationally efficient. For instance, in the case of Llama 2-7B, the selection required only around 11 minutes after reaching partial stabilization, which is typically achieved within 25\u201350% of training milestones. This cost is minimal compared to full fine-tuning or repeated PEFT processes.\\n3. **Core Focus:** **The primary goal of our work is to study and understand layer importance ranking during alignment, not to propose a new PEFT algorithm.** The insights gained from our approach are intended to guide more efficient and effective alignment strategies, making this computational overhead a valuable investment.\\n\\n>Please clarify how stability is checked. If Monte Carlo estimation is used, specify the sufficient number of samples and the reasoning behind this choice.\\n\\nThank you for your question. In addition to monitoring changes in the loss function, we directly check the stability of the layer importance rankings obtained at different iterations. This provides a practical way to evaluate whether the algorithm has stabilized.\\n\\n**W2: The effect is truly visible, but it may be just regularization, which undermines all the strengths of the paper.** \\n\\n>For example, it's stated that this approach outperforms any type of regularization, yet no regularization baselines are provided. For instance, in the original LIMA paper, they apply progressive dropout to enhance robustness. You mention using dropout, but perhaps increasing it or raising weight decay could also be beneficial?\\n\\nThank you for bringing up this point. To clarify, in our experiments, we used fixed dropout and weight decay values to regularize the training process.
These settings were applied uniformly across all methods to ensure a fair comparison and isolate the impact of our proposed approach.\\n\\n>Another detail that caught my attention is the modest performance gain. This is acceptable for a parameter-saving method, but with the current experiments, it\\u2019s hard to determine if the gain is due to the method itself or simply an effect of regularization.\\n\\nFor all baselines, we **conducted hyperparameter searches** to determine the optimal learning rates and other relevant parameters. This ensures that each baseline is performing at its best, providing a fair and robust comparison.\\n\\n**Our method was evaluated using the same hyperparameter settings** determined for the baselines. We did not make any additional modifications to the hyperparameters when applying our layer selection algorithm. This ensures that the observed performance gains are attributable to our method, rather than any advantage stemming from changes in hyperparameter tuning.\"}", "{\"title\": \"Response to Reviewer qJ17\", \"comment\": \"Thank you for your thoughtful feedback and for recognizing the contributions of our work. We also appreciate your decision to raise your score in light of our efforts. However, we hope to provide additional clarifications that will further illustrate the empirical contributions and methodological novelty of our approach.\\n\\n**Empirical Contribution:**\\n\\nFirstly, we believe our work makes a significant empirical contribution. Our algorithm consistently and reliably outperforms existing PEFT algorithms, such as LoRA and AdaLoRA. Specifically, we have observed that more recent algorithms like LISA (https://arxiv.org/pdf/2403.17919) do not yield such huge performance gains, especially when hyperparameters are carefully tuned for LoRA. This indicates that a finely tuned LoRA remains highly effective. 
Moreover, in MT-Bench scores, the LLaMA 2-13B model only achieves a modest improvement of **0.4-0.6 points** over the LLaMA 2-7B model. Therefore, a steady **0.1-0.2 point** improvement in our paper is not negligible and should be considered meaningful. **From an efficiency standpoint**, our algorithm also shows performance improvements in the famous quantized version of LoRA (QLoRA), demonstrating its effectiveness across various PEFT approaches.\\n\\nThis underlines that **our work is not simply a re-application of existing methods but makes an empirical contribution in the context of PEFT, with clear performance benefits over state-of-the-art approaches.**\\n\\n**Methodology Novelty:**\\n\\nRegarding the concern about the novelty of our method, while we acknowledge that identifying important components has been explored in prior work, our approach differs significantly in both design and application. Our method offers a unique definition of layer importance, which is learned directly through a gradient descent-based mask learning process. This approach directly aligns with our definition of layer importance and contrasts with other methods, such as compression techniques that typically focus on finer selection of neurons or layers.\\n\\nRegarding the fine-tuning process, we believe that neuron-level importance does not necessarily lead to a more substantial performance improvement. Fine-tuning typically shows a higher tolerance for errors in layer selection, **which is why we use LoRA\\u2019s low-rank matrix product to represent parameter changes**. 
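As a minimal illustration of this design (the class name, scalar gate form, and initialization below are assumptions made for the sketch, not the exact implementation), a learnable per-layer gate can scale the LoRA low-rank update, and its learned magnitude can then serve as the layer-importance score:

```python
import numpy as np

class GatedLoRALayer:
    """Frozen base weight plus a LoRA low-rank update B @ A, scaled by a
    learnable per-layer gate gamma. Illustrative sketch only: names,
    the scalar gate, and initialization are assumptions."""

    def __init__(self, in_dim, out_dim, rank=4, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((out_dim, in_dim))   # frozen pretrained weight
        self.A = rng.standard_normal((rank, in_dim)) * 0.01
        self.B = np.zeros((out_dim, rank))                # LoRA init: update starts at zero
        self.gamma = 1.0                                  # per-layer importance gate

    def forward(self, x):
        delta = self.B @ self.A                           # low-rank parameter change
        return x @ (self.W + self.gamma * delta).T

def rank_layers_by_gate(layers):
    """Rank layer indices by |gamma|, most important first."""
    return sorted(range(len(layers)), key=lambda i: -abs(layers[i].gamma))
```

Freezing the layers whose learned gates have the smallest magnitude then recovers the selective fine-tuning setup evaluated in the paper.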
This approach **greatly enhances the efficiency of our layer-selection algorithm.** In contrast to methods that require learning detailed neuron-level masks\\u2014which incur significant computational cost\\u2014our mask learning strategy is far more efficient and better suited for fine-tuning tasks.\\n\\nWhile compression methods tend to focus on more granular selections, our approach remains computationally feasible for fine-tuning, especially when applied to large-scale models. Therefore, **we propose using layer-level masks to multiply LoRA's low-rank matrix, which not only enhances efficiency but also ensures the practicality of applying our method to large models.**\\n\\nAdditionally, **we observed that layer importance rankings across different alignment datasets are remarkably consistent, which is an interesting and potentially impactful finding**.\"}", "{\"title\": \"Response to Reviewer qJ17 (Part 2/3)\", \"comment\": \"**W3: The primary weakness here is the lack of novelty, as the main technique relies on identifying the most significant layers through gradient-based methods.**\\n\\n>This setup aligns with challenges commonly seen in compression studies, where quantization addresses sensitive layers and columns (https://arxiv.org/abs/2408.15300). However, no inspiration, metrics, or methods from these studies were referenced, cited, or discussed.\\n\\n>Pruning, rather than quantization, seems most suitable here as it directly addresses this setting (see https://arxiv.org/abs/2310.06694, https://arxiv.org/abs/2204.00408). For instance, methods like Sheared LLaMA and CoFi learn binary masks to select not only layers but also attention heads and even individual neurons for fine-tuning. This makes the approach used here far from novel.\\n\\nThank you for your insightful comment. 
While we acknowledge that methods like pruning and quantization, including those mentioned (e.g., Sheared LLaMA, CoFi), aim to optimize models by removing or masking components such as neurons or attention heads, our work targets a different problem.\\n\\nPruning methods, as you correctly pointed out, **focus on model compression by eliminating unnecessary components to reduce the size and computational load during inference.** These techniques typically address the problem of **creating smaller, more efficient models** for deployment.\\n\\nIn contrast, our approach is focused on alignment rather than compression. **We aim to identify and fine-tune the most important layers that have the greatest impact on model performance for alignment**. This layer selection is driven by a gradient-based method that quantifies each layer's contribution to task-specific alignment, rather than seeking to prune or reduce the model size. Therefore, the primary goal of our method is to optimize performance for a specific task rather than reduce the model\\u2019s size or computational requirements during inference.\\n\\nThus, the two objectives \\u2014 **compression** (via pruning or quantization) and **task alignment** (via layer selection) \\u2014 **are fundamentally different**. While pruning and quantization aim to remove redundant parameters for efficiency, our approach focuses on improving task performance by fine-tuning the layers most aligned with the target task. This makes our method complementary to compression techniques, but not directly comparable or derivative of them.\\n\\nWe hope this clarifies the distinction between our work and the methods mentioned.\\n\\n>Moreover, what is the motivation behind tuning specific layers? Why not tune the precise matrices or even the weights within those matrices, as outlined in studies above? 
This approach offers a more general formulation, with layer tuning being a subset of this broader framework.

The choice to focus on layer-level tuning rather than neuron-level importance stems from several practical considerations. First, **most existing PEFT (parameter-efficient fine-tuning) methods, such as LoRA, operate by adjusting parameters at the layer level** (e.g., low-rank adaptations) because this maintains efficiency and scalability. **Neuron-level tuning would require fine-tuning a larger number of parameters** if our ILA were applied at that granularity, which is computationally more expensive and challenging to integrate into existing frameworks.

Moreover, while neuron-level tuning might provide a more granular approach, it is not directly compatible with the current PEFT paradigm, which is designed to modify only a few parameters at a time while minimizing computational cost. Layer-level tuning provides a practical balance, as it can effectively capture the most impactful parts of the model while adhering to the computational constraints of existing methods.

**Q1: This is the first time I've come across someone referring to instruction tuning as "alignment." Perhaps I missed this usage, but I'm more familiar with "alignment" in the context of RLHF or aligning models with human preferences.**

Thank you for highlighting this point. To further clarify:

1. **Dataset Design for Alignment:** All the datasets used in our experiments (LIMA, Alpaca-GPT4, and No Robots) are specifically designed for alignment tasks. These datasets aim to improve model behavior by fine-tuning it to better align with desired outputs, whether through instruction-following data (e.g., Alpaca-GPT4) or curated human demonstrations (e.g., No Robots, LIMA).
2. **Clarifying Terminology:** While we recognize that "alignment" is often associated with RLHF, in our work we adopt a broader definition that includes supervised methods like instruction tuning. This reflects the purpose of the datasets used, all of which are created to refine models' outputs to align with human instructions or task objectives.

---

**Response to Reviewer UHz2 (Part 3/3)**

**Q5: According to Table 3 and Table 11, the proposed method often failed to enhance performance on the Hellaswag dataset. Are there potential reasons for this discrepancy?**

We appreciate the reviewer highlighting this observation and raising the question regarding the performance of our method on the Hellaswag dataset. While our proposed method consistently improves performance across most datasets and evaluation metrics, it indeed shows smaller or less consistent gains on Hellaswag. Below, we outline potential reasons for this discrepancy:

1. **Nature of the Hellaswag Dataset:**
   * Hellaswag focuses heavily on commonsense reasoning, requiring models to select the most plausible continuation of a given context. This task primarily relies on pre-trained knowledge, which is less affected by fine-tuning for alignment purposes.
   * As mentioned in L265 of the manuscript, we specifically note that significant performance improvements in language understanding tasks such as MMLU and Hellaswag are not expected after alignment. Instead, the focus is on ensuring that the model **retains its pre-trained knowledge** while improving conversational and stylistic alignment. The results in Tables 3 and 4 show that our method successfully achieves this goal across various datasets.
2. **Core Focus of This Work:** It is important to emphasize that our primary objective is not to propose a new PEFT method to achieve maximum performance on specific datasets. **Instead, our goal is to provide a deeper understanding of layer significance in the alignment process.** The identification of important layers and their stability across datasets provides valuable insights into how alignment influences different components of large language models. The slight underperformance on Hellaswag does not detract from this core contribution.
3. **Task-Specific Layer Importance:** Our method identifies and fine-tunes layers important for alignment across datasets. While these layers are critical for conversational and stylistic improvements (as evidenced in Vicuna, MT-Bench, etc.), **they may not overlap perfectly with the layers most critical for commonsense reasoning tasks like Hellaswag**. This discrepancy highlights a potential avenue for future exploration, such as incorporating task-specific criteria into layer importance ranking.
4. **Empirical Variability:** Fine-tuning results can exhibit dataset-specific variability, particularly in tasks with inherently challenging contexts like Hellaswag. This variability may result from the dataset's domain-specific reasoning patterns, which might require distinct tuning strategies.

**Q6: Why is AdaLoRA w/ ILA not included in the comparison? Was there a specific reason for the omission?**

Thank you for pointing out this important consideration. The reason for not including AdaLoRA combined with ILA (our proposed method) lies in **the conceptual overlap between the two approaches and the distinct focus of our work.**

AdaLoRA inherently adjusts the rank of incremental matrices during fine-tuning. When the rank is reduced to zero for a particular layer, it effectively means no adapter is added, and thus the parameters of that layer remain unchanged. In this sense, AdaLoRA implicitly identifies less critical layers by dynamically reducing their contribution.
However, the goal of AdaLoRA is primarily to minimize resource usage by adapting the parameter budget dynamically, rather than explicitly analyzing or ranking the importance of layers.

Our proposed method, ILA, complements this perspective by explicitly focusing on quantifying and ranking layer importance during the alignment process. Unlike AdaLoRA, ILA is designed to study and optimize the alignment process by isolating critical layers, which **provides deeper insights into the model's behavior and allows for targeted improvements in performance and efficiency.** Thus, our work focuses more on understanding and leveraging layer importance for alignment rather than proposing another parameter-efficient fine-tuning (PEFT) algorithm.

Given the conceptual overlap, we prioritized evaluating ILA with standard PEFT methods (e.g., LoRA, QLoRA) to better showcase its unique contribution. While combining ILA with AdaLoRA might yield further resource savings, such a combination would also require disentangling the overlapping contributions of these two approaches, which could confound the interpretation of results.

We appreciate your suggestion and will consider conducting experiments to evaluate this integration in follow-up studies. Thank you for highlighting this perspective!

---

**Response to Reviewer epzj**

**W1: This paper has limited novelty.**

> Indeed, this paper's novelty is limited. The core motivation and main method of freezing certain layers to avoid overfitting was proposed three years ago in paper [1], which even provided finer-grained control over the degree of parameter freezing. While I acknowledge the technical implementations differ, given the similar research motivations and the limited application scope of this method, I believe there's room for improvement.

Thank you for your thoughtful feedback.
We acknowledge that layer-freezing techniques have been explored in prior work, including the paper you cited [1]. However, we would like to highlight several key distinctions that set our work apart in terms of both technical contributions and application scope:

1. **Specificity to LLM Alignment Tasks:** While layer-freezing techniques have been explored in prior work, our study addresses the unique challenges and nuances of **LLM alignment tasks**, which fundamentally differ from traditional fine-tuning tasks. Alignment tasks are designed to tailor a model's behavior across a wide range of scenarios, rather than optimizing for a single task or domain. For instance, alignment datasets typically encompass diverse skills such as reasoning, mathematics, coding, and conversational fluency. This diversity makes the **identification of important layers a more complex and multi-faceted challenge compared to task-specific fine-tuning approaches**.
2. **Key Observation:** A key insight from our work is that despite the diversity of tasks within alignment datasets and differences across datasets, **we observe remarkable consistency in the layers deemed important for alignment**. This indicates that alignment tasks share underlying commonalities, which manifest as stable patterns in layer significance. Such cross-task and cross-dataset consistency has not been previously demonstrated in the literature and is a significant finding of our work.

In conclusion, unlike previous studies that focus on freezing layers for individual tasks, our approach reveals how alignment fine-tuning universally influences LLMs across multiple task types. This provides a more holistic understanding of LLM behavior during alignment, which is critical for efficient resource allocation and improved fine-tuning strategies.

**W2: The paper focuses on layer freezing but should emphasize full parameter fine-tuning over LoRA-based methods.**

> The authors mainly study the impact of controlling layer freezing during fine-tuning on language models. However, since most experiments and methods are LoRA-based, I believe the discussion should focus more on full parameter fine-tuning instead.

Thank you for raising this concern. We appreciate the opportunity to clarify why LoRA is central to our experiments and ablation studies, and why it aligns with our research goals.

1. **LoRA Aligns with Our Goal of Identifying Parameter Significance:** LoRA intrinsically utilizes low-rank matrices to estimate parameter updates, which is conceptually aligned with our goal of understanding layer-wise parameter significance. In our framework, layers deemed unimportant are not updated, which corresponds to not adding LoRA adapters to those layers. This direct compatibility between LoRA's design and our objective makes it a natural choice for studying layer significance in fine-tuning.
2. **LoRA Achieves Performance Comparable to Full Parameter Fine-Tuning:** Empirical evidence, including our experiments, shows that LoRA achieves performance on par with full parameter fine-tuning across a wide range of tasks. For example, in Table 3, Table 4, and Table 11, the results demonstrate that LoRA performs similarly to full fine-tuning in both language understanding (e.g., MMLU) and conversational tasks (e.g., MT-Bench, Vicuna). This ensures that our findings are not limited by the choice of LoRA but are representative of effective fine-tuning strategies in general.
3. **LoRA is Resource-Efficient:** LoRA significantly reduces computational and memory costs by updating only a small number of low-rank matrices instead of the entire parameter set. This resource efficiency is particularly important for large models, such as LLaMA-2-7B and Mistral-7B, where full parameter fine-tuning becomes infeasible in many practical settings. By using LoRA, we can perform detailed ablation studies with a manageable computational footprint while maintaining performance comparable to full fine-tuning.

[1] Raise a Child in Large Language Model: Towards Effective and Generalizable Fine-tuning

---

**Response to Reviewer PkBY (Part 3/3)**

**W5: The experimental design in the paper is not sufficiently reasonable. I think there should be separate comparative experiments for AdaLoRA and AdaLoRA w/ ILA to further demonstrate the general applicability of the ILA method.**

Thank you for pointing out this important consideration. The reason for not including AdaLoRA combined with ILA (our proposed method) lies in **the conceptual overlap between the two approaches and the distinct focus of our work.**

AdaLoRA inherently adjusts the rank of incremental matrices during fine-tuning. When the rank is reduced to zero for a particular layer, it effectively means no adapter is added, and thus the parameters of that layer remain unchanged. In this sense, AdaLoRA implicitly identifies less critical layers by dynamically reducing their contribution. However, the goal of AdaLoRA is primarily to minimize resource usage by adapting the parameter budget dynamically, rather than explicitly analyzing or ranking the importance of layers.

Our proposed method, ILA, complements this perspective by explicitly focusing on quantifying and ranking layer importance during the alignment process.
Unlike AdaLoRA, ILA is designed to study and optimize the alignment process by isolating critical layers, which **provides deeper insights into the model's behavior and allows for targeted improvements in performance and efficiency.** Thus, our work focuses more on understanding and leveraging layer importance for alignment rather than proposing another parameter-efficient fine-tuning (PEFT) algorithm.

Given the conceptual overlap, we prioritized evaluating ILA with standard PEFT methods (e.g., LoRA, QLoRA) to better showcase its unique contribution. While combining ILA with AdaLoRA might yield further resource savings, such a combination would also require disentangling the overlapping contributions of these two approaches, which could confound the interpretation of results.

We appreciate your suggestion and will consider conducting experiments to evaluate this integration in follow-up studies. Thank you for highlighting this perspective!

**Q1: Is the overall training process of the ILA method divided into two steps? The first step involves executing the ILA algorithm from Algorithm 1, and the second step involves freezing the unimportant layers selected by ILA before re-fine-tuning.**

Thank you for your thoughtful feedback regarding the two-step process in ILA. **While your understanding of the training process is accurate**, it is important to clarify the core contribution of our work and how it relates to parameter-efficient fine-tuning (PEFT) methods.

1. **Core Contribution of ILA:**
   * **ILA is not a new PEFT algorithm but a method for identifying layer importance rankings during the fine-tuning process.** This ranking is the primary outcome of the ILA algorithm and can serve as a general-purpose tool for optimizing various downstream tasks or applications.
   * **The subsequent steps (e.g., freezing unimportant layers, re-fine-tuning) represent applications of the layer importance rankings.** These applications demonstrate how the identified rankings can be utilized to improve fine-tuning efficiency, model performance, or resource utilization.
2. **Distinction from PEFT Algorithms:**
   * Unlike PEFT methods like LoRA or AdaLoRA, which modify how fine-tuning is performed (e.g., low-rank adaptation), ILA operates as an analysis tool that complements these techniques. For example, ILA identifies which layers contribute most to alignment; PEFT methods like LoRA can then leverage this information to selectively apply fine-tuning to those layers, improving efficiency.
   * This distinction allows ILA to work alongside various fine-tuning approaches, as shown in our experiments with LoRA and AdaLoRA.

**Q2: What does IFILA mean?**

Thank you for pointing this out! "IFILA" in Table 3 is a typo and should actually be "ILA," referring to our proposed method. We will correct this error in the revised manuscript to avoid any confusion.

**Q3: In Table 6, why do the experiments on LIMA include ILA (75%) and ILA (30%), but there are no corresponding entries for NoRobots?**

Thank you for pointing out the inconsistency in Table 6 regarding the inclusion of ILA (75%) and ILA (30%) results for LIMA but the lack of corresponding entries for NoRobots. This omission was an oversight in the preparation of the manuscript.
**The entry labeled "NoRobots w/ ILA" in the original table corresponds to the result for NoRobots w/ ILA (75%).** The missing data for NoRobots w/ ILA (30%) is presented as follows:

**Updated Table 6**

| Datasets | Methods | MMLU | Hellaswag | Vicuna | MT-Bench |
|:--------:|:-----------------:|:-----:|:---------:|:------:|:--------:|
| NoRobots | LoRA w/ ILA (75%) | 54.45 | 61.13 | 6.77 | 5.05 |
| NoRobots | LoRA w/ ILA (30%) | 54.11 | 61.32 | 6.74 | 4.91 |

---

**Dear Reviewer TMGU (Part 1/3)**

We sincerely thank the reviewer for the thorough evaluation and valuable feedback on our work.

**W1: Lack of clarity: the method is not well explained, and important technical details are missing.**

> Definition of "important," "unimportant," "significant," and "insignificant" layers

We acknowledge that these terms were not explicitly defined in the manuscript. In our work, the terms "important" and "significant" refer to layers whose changes during fine-tuning significantly affect the model's alignment performance. Conversely, "unimportant" or "insignificant" layers are those whose changes have minimal impact on performance, as validated by experiments.

To improve clarity, we will revise the text to explicitly state that "layer importance" is determined experimentally by ranking layers based on their importance scores ($\{s_{t}^i\}_{i=1}^N$) and subsequently validating the significance of the rankings via ablation experiments. Layers with higher scores are deemed "important" because selectively tuning only these layers preserves or enhances fine-tuning efficiency with minimal performance degradation.

> Connection between Algorithm 1 and the formulas in Section 2.

Algorithm 1 directly implements the method described in Section 2.
We will improve the text to explicitly link the algorithm steps to the equations:

* **$K$ (in Algorithm 1)**: Refers to the number of layers deemed "insignificant" during the ranking and selection process, determined by sorting the layers based on their importance scores ($\{s_{t}^i\}_{i=1}^N$).
* **Optimization Process**: Eq. (3) defines a constrained optimization problem with a mask $\pmb{\gamma}_t$, whereas Eq. (7) simplifies this by reparameterizing $\gamma_t^i=\sigma(s_t^i)$ and optimizing the importance scores $s_t^i$ without explicitly enforcing constraints during optimization. After sorting $s_t^i$, **the constraint in Eq. (3) ($\|\pmb{\gamma}_t\| < H$) is indirectly applied by selecting only a subset of layers (top-ranked by $s_t^i$) to retain**. It satisfies the spirit of the constraint in Eq. (3) by effectively ranking the layers according to their significance.

We will revise the manuscript to explicitly connect these elements and better explain how Algorithm 1 operationalizes Eq. (7).

> I failed to grasp how the method actually works. Do you fix the number of layers in advance ("number of insignificant layers K") and then rank and select the top-ranked layers? If so, this raises the question: why not initially select a smaller number of layers in the optimization algorithm and dispense with ranking altogether? I presume there are correlations between layers, and the method seems to be essentially selecting a subset of layers when identifying the "important" ones.

**Our method provides a ranking of layer importance for all layers rather than fixing the number of layers ($K$) during optimization.** This ranking aligns with our goal to understand the significance of different layers during alignment and offers the following advantages:

* **Flexibility in Fine-Tuning**: A ranking allows users to adjust $K$ based on their specific needs. For example, if maximizing performance is the priority, users can set $K$ to include more layers for fine-tuning; if computational efficiency is critical, users can reduce $K$ to tune only the most important layers.
* **Avoiding Re-Training Costs**: Fixing $K$ during optimization would require re-training the model whenever $K$ changes, as the optimization process would need to re-select layers. By separating the ranking step, **our method allows $K$ to be adjusted post hoc without requiring additional training, which is more practical and computationally efficient.**
* **Handling Layer Correlations**: The ranking accounts for inter-layer correlations, capturing the relative importance of layers even when their contributions are interdependent. This would be difficult to achieve by pre-fixing $K$ during optimization.

> Explanation of terms (e.g. $\mathcal{L}_\infty$ in Eq (8), and $R$ in Eq (10))

$\mathcal{L}_\infty$ is a typo; in fact, it should be $\mathcal{L}$. $R$ is a constant that represents the bound on parameter changes during the stable phase of training.

---

**Answer to Rebuttal**

Thank you for your detailed response and for addressing the concerns regarding technical details and regularization. I appreciate the additional information provided.

Regarding the concern about novelty, while I understand the fundamental differences in the downstream application and goals, the upstream or proxy task (identifying salient weights, columns, layers, etc.) remains the same. What I am trying to convey is that a method applied across different tasks is still fundamentally the same method. Despite differences in application, it does not make the underlying approach novel. For example, applying a previously published method like contrastive loss, originally used for Sentence Transformers, to a new context such as graph embeddings does not render the contrastive loss function itself novel. The same principle applies here.

If the focus of your paper had been on empirically adapting and demonstrating the efficacy of this method within the PEFT paradigm, it could be considered an empirical contribution. However, as you note, PEFT is not the primary focus of your work, and the method itself has been explored in prior literature.

Overall, I appreciate the interesting application of this approach within PEFT, and I have raised my score in recognition of the paper's contributions in this regard. However, I consider the paper borderline due to the limited methodological novelty. I respectfully encourage the authors to reconsider their positioning of the work. While applying the technique in a PEFT context is valuable, the technique itself is well-studied and not entirely novel.

---

**Thanks for your response**

Thank you very much for your thoughtful response!

I maintain my current evaluation score for the following reasons: While I acknowledge that your focus is on alignment tasks, particularly instruction tuning, I haven't observed additional efforts specifically made for "alignment" in your methodology. To substantiate your claim that "identification of important layers is a more complex and multi-faceted challenge compared to task-specific fine-tuning approaches," you should conduct in-depth analyses across various common fine-tuning tasks.
For example, if you were to conduct comparative experiments across different tasks such as reasoning, mathematics, style control, summarization, NER, etc., and if the empirical evidence from these tasks demonstrated that previous methods (including ChildTuning and other approaches mentioned by reviewers, like HFT and LISA) were not effective specifically for language model alignment, while your approach showed particularly significant improvements in alignment tasks, then such a claim would be well-supported.

---

**Response to Reviewer qJ17 (Part 3/3)**

**Q2: Drawing parallels with interpretability studies would be beneficial, as they provide extensive insights into the importance of layers. For instance, studies like this one (https://arxiv.org/pdf/2309.04827) and others have already provided insights on layer significance, which could strengthen the paper's foundation and contextualize its approach.**

Thank you for this valuable suggestion. We appreciate the reference to interpretability studies, which indeed provide important insights into layer significance. Below, we outline how we will address this point:

We appreciate the reviewer's suggestion to draw parallels with interpretability studies, particularly those examining layer significance. Indeed, such studies provide valuable insights into the understanding of model behavior and the identification of important components within neural networks.

In our work, we specifically focus on the identification of important layers for the alignment task, and while the research you mentioned (e.g., https://arxiv.org/pdf/2309.04827) offers valuable perspectives on layer significance, our approach differs in its objective and methodology. **Our primary goal is to identify layers critical for the alignment process during fine-tuning, rather than general interpretability or feature importance in standard pre-trained models.**

That said, we recognize the potential for cross-pollination between these areas. We will revise the manuscript to incorporate relevant references to interpretability studies that discuss the importance of layers, particularly those that highlight methods for identifying critical components in neural networks. By drawing these connections, we hope to position our work within the broader context of model interpretability and strengthen the paper's foundation.

Additionally, we will emphasize how our approach contributes specifically to the fine-tuning and alignment tasks, which is a unique aspect compared to traditional interpretability studies. This will allow readers to better understand the specific role of layer importance in alignment and how it can inform efficient model adaptation.

---

**Summary:** This paper proposes the ILA method, which first trains the model using LoRA until it reaches a stable phase (i.e., when the parameter changes during training are below a certain threshold). Then, a binary mask is added for individual training to assess the importance of each layer. Finally, unimportant layers are frozen, and fine-tuning is performed to achieve better performance.

**Soundness:** 3
**Presentation:** 2
**Contribution:** 1

**Strengths:** This paper is well-written and easy to understand. The proposed ILA method is also intuitive and has achieved excellent results.

**Weaknesses:**

1. I am somewhat concerned about whether the contributions of this paper are sufficient, as [1][2][3][4] indicate that adjusting certain parameters/layers during the fine-tuning process can indeed lead to effective improvements. Additionally, [5] shows that there is significant redundancy in the parameters during the SFT process. I believe that existing work already highlights the necessity of adjusting certain parameters during the post-training phase. The authors should emphasize the contributions of this paper more clearly, explain the similarities and differences with existing methods, and include comparisons with those methods.
2. I believe that more baselines could be added for comparison, such as HFT [1], LISA [2], and GMT [3]. These methods also tune only a subset of model parameters, and I think including these baselines would make the paper more convincing.
3. The focus of this paper is on alignment, and it has also achieved performance improvements. However, I am curious whether the methods presented in this paper exhibit consistent performance in other areas, such as mathematical reasoning and code generation. I believe the authors could further discuss whether the ILA method has general applicability.
4. I am not sure whether the experimental results in Table 6 correspond to Llama 2 7B or Llama 3.1 8B. Based on the results, it seems that they still correspond to Llama 2 7B, implying that the authors did not conduct experiments with Llama 3.1 8B, yet included this model in the baselines.
5. The experimental design in the paper is not sufficiently reasonable. For example, the inclusion of the AdaLoRA method in Table 3 feels rather abrupt. I believe that the method proposed in this paper should be a pluggable approach that can be applied to AdaLoRA, but the authors have awkwardly inserted AdaLoRA into the experiments and compared it with LoRA and LoRA w/ ILA. I think there should be separate comparative experiments for AdaLoRA and AdaLoRA w/ ILA to further demonstrate the general applicability of the ILA method.
6. Table 8 should include comparisons with fine-grained selection methods proposed in [1].

Finally, I would be very happy to engage in further discussion with the authors. My main concern is the contributions of this paper, as existing work has already indicated that freezing certain parameters can lead to improvements. I need to see more experimental comparisons and a deeper discussion of the innovations presented in this paper. If these issues are addressed, I would be happy to raise my score.

[1] HFT: Half Fine-Tuning for Large Language Models

[2] LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning

[3] Gradient-Mask Tuning Elevates the Upper Limits of LLM Performance

[4] Investigating Layer Importance in Large Language Models

**Questions:**

1. Is the overall training process of the ILA method divided into two steps? The first step involves executing the ILA algorithm from Algorithm 1, and the second step involves freezing the unimportant layers selected by ILA before re-fine-tuning.
2. What does IFILA mean? This abbreviation does not appear in the main text, but it is mentioned in the experimental tables without any explanation.
3. In Table 6, why do the experiments on LIMA include ILA (75%) and ILA (30%), but there are no corresponding entries for NoRobots?

**Flag for Ethics Review:** No ethics review needed.
**Rating:** 3
**Confidence:** 4
**Code of Conduct:** Yes

---

**Dear Reviewer TMGU (Part 3/3)**

**W4: Lack of mathematical rigor.**

Thank you for your detailed feedback on improving the mathematical rigor. Below, we address each issue concisely:

1. Equation (2): The loss function $\mathcal{L}$ will be consistently defined as a one-argument function $\mathcal{L}(\pmb{\theta})$.
2. Equations (8) and (9): We will update $\theta$ to boldface ($\pmb{\theta}$) to indicate vectors.
3. Assumption 2.2 and Theorem 2.1: We would like to clarify that this assumption is solely used to bound the updates of the model parameters, which facilitates the derivation of **Theorem 2.1**. It does not directly extend the definition of ε-stability to parameters. **The detailed proof of Theorem 2.1, which demonstrates how this assumption supports the theorem, is provided in the appendix for completeness.**
4. Proof, lines 16–17 (loss function omission): Thank you for pointing out the issue in the proof section regarding the omission of the loss function between lines 16–17. This was indeed a typographical error. We will correct this in the revised manuscript to ensure the continuity and clarity of the derivation.
5. Algorithm: A while loop will be added to better formalize the iterative process and the transition between stages.
6. Line 484: This refers to the total number of layers, not the norm of the vector. We will clarify this to avoid ambiguity, stating $\|\pmb{\gamma}_t\|_0 \le 225$.

**Q1: What is the overhead of running this method (running time, computational complexity) in comparison to full fine-tuning?**

We acknowledge the concern about potential overhead. We tracked the **average training time per iteration** and GPU memory usage; based on the experimental data below, we can conclude the following regarding the impact of this method on training time and GPU memory usage:

Table 1: Training time of 1 iteration and GPU memory usage for Full Finetune and Full Finetune w/ ILA. The experiments were conducted on an **NVIDIA A100 GPU** with a **batch size of 2** and a maximum token length of 1024.

| | Training time (ms) | GPU Memory Usage (MiB) |
|:--------------------------:|:------------------:|:----------------------:|
| Full Finetune | 527 | 81078 |
| Full Finetune w/ ILA (30%) | 403 | 33458 |
| Full Finetune w/ ILA (75%) | 432 | 53924 |

**Q2: What is the stability of the algorithm for optimizing layer importance scores with respect to the initial scores?**

Thank you for your question. Our algorithm is indeed stable with respect to the initial layer importance scores. Specifically, during the optimization process, we observe that the importance scores converge reliably regardless of the initialization, as long as the initial scores are reasonably chosen.

Table 2: The Jaccard similarities of important layers identified during fine-tuning of LLaMA 2-7B on the LIMA dataset with varying initial scores.

| Initial Scores | 4.0 | 2.0 | 1.0 |
|:--------------:|:----:|:----:|:---:|
| 4.0 | - | - | - |
| 2.0 | 0.83 | - | - |
| 1.0 | 0.78 | 0.88 | - |

---

**Response to Reviewer PkBY (Part 1/3)**

**W1: Previous works have highlighted the necessity of adjusting certain parameters during the post-training phase.**

> I am somewhat concerned about whether the contributions of this paper are sufficient, as [1][2][3][4] indicate that adjusting certain parameters/layers during the fine-tuning process can indeed lead to effective improvements. Additionally, [5] shows that there is significant redundancy in the parameters during the SFT process. I believe that existing work already highlights the necessity of adjusting certain parameters during the post-training phase.
> The authors should emphasize the contributions of this paper more clearly, explain the similarities and differences with existing methods, and include comparisons with those methods.

Thank you for raising this important point. We appreciate the opportunity to clarify our paper's contributions and positioning relative to existing work. While prior research, such as [1][2][3][4], has explored parameter or layer selection during fine-tuning, our work aims to address a **fundamentally different question**: **understanding what LLMs learn during alignment through a systematic study of layer importance in the context of instruction tuning**.

1. **Core Objective: Understanding Alignment through Instruction Tuning**
   * Unlike previous work that primarily focuses on proposing new Parameter-Efficient Fine-Tuning (PEFT) algorithms, our core goal is to better understand the alignment process in instruction tuning. Specifically, we aim to identify which layers are most critical for alignment and provide an importance ranking of these layers, as opposed to simply freezing or modifying certain parameters during fine-tuning.
   * By introducing a rigorous definition of layer importance and developing a novel framework (ILA) to learn and rank layer significance, we go beyond practical efficiency and directly address fundamental questions about the inner workings of alignment. Our findings, such as the consistent layer importance ranking across datasets, provide new insights into the behavior of aligned LLMs and their reliance on specific layers for stylistic and task-specific adjustments.
2. **Key Differences with Existing Methods**
   * [1] HFT: This work focuses on halving the number of layers involved in fine-tuning to improve efficiency but does not explore the inherent importance of layers during alignment. Our work identifies which layers contribute most significantly to alignment, enabling both theoretical insights and practical benefits.
   * [2] LISA: While LISA explores memory-efficient strategies by sampling parameters, it does not provide a formal definition of importance or layer ranking. In contrast, ILA proposes a gradient-based optimization to quantify and rank the significance of layers, allowing us to analyze their roles systematically.
   * [3] Gradient-Mask Tuning: This method aims to improve tuning efficiency by masking gradients but does not explicitly address the alignment process or provide interpretability regarding layer behavior. Our work complements such efforts by focusing on understanding alignment at a granular, layer-specific level.
   * [4] Investigating Layer Importance: While closely related, this work emphasizes analysis without proposing a concrete method to utilize the findings for improved understanding or performance. Our work bridges this gap by providing an actionable framework (ILA) and demonstrating its utility in both practical fine-tuning and theoretical exploration.
3. **Clarifying Novelty and Contributions**
   * To highlight our unique perspective, we will revise the Introduction to emphasize that the core contribution of this work lies in understanding the alignment process through instruction tuning, not merely proposing another PEFT algorithm.
   * Specifically, we focus on defining and identifying layer importance and showing how this insight enables us to achieve consistent rankings across datasets and architectures. This consistency provides a deeper understanding of the alignment process and its reliance on specific layers for task adaptation.
4. **Empirical Comparisons:** While our work is not primarily focused on proposing a new PEFT algorithm, we agree that additional empirical comparisons with methods like HFT or LISA could help contextualize our contributions further. These experiments will highlight the differences in objectives and reinforce the unique value of ILA in providing interpretability and alignment-focused insights.
7hRuaiRlgZ
Dynamic Alignment of Representations for Enhanced Chain-of-Thought Reasoning in Large Language Models
[ "Chenxi Huang", "Liang Xie", "Chen Shen", "Shaotian Yan", "Sinan Fan", "Zhihong Gu", "Binbin Lin", "Deng Cai", "Jieping Ye" ]
[ "Representations encode rich semantic information, implying that editing them could serve as an effective tool (e.g., DAS, ReFT) for parameter-efficient finetuning (PEFT). However, existing approaches typically focus on general categories of representations or on selecting an appropriate number of continuous representations for each dataset, which limits their adaptability and performance. In contrast, our method dynamically selects representations requiring intervention at the instance level, referred to as misaligned representations, which are characterized by a lack of semantic information or appropriate attention. Identifying these misaligned representations is challenging, as they serve different roles in varying contexts. It is evident that crucial representations, i.e., those that primarily receive information flow from themselves or significantly influence other representations, are likely to encompass misaligned representations. Consequently, we simplify the task by pivoting our focus to crucial representations and aim to accurately locate them. We adaptively update crucial representations amidst uncertainty, freezing the base model while learning an update direction for each layer. Involving both identification and updating of representations, we present a PEFT method, termed Dynamic Alignment of Representations (DAR). We validate the effectiveness of our method on eight diverse datasets across two scenarios, arithmetic and commonsense, and three base models: LLaMA-2-7B, LLaMA-2-13B, and LLaMA-3-8B. Notably, our method yields improvements of 17.47% and 3.11% over LLaMA-2-7B and ReFT on the GSM8K dataset, respectively. Additionally, it requires 51 times fewer parameters than LoRA, demonstrating significant parameter efficiency. Furthermore, our method can be easily extended to few-shot learning." ]
[ "Large Language Models; LLM reasoning; LLM COT; PEFT" ]
https://openreview.net/pdf?id=7hRuaiRlgZ
https://openreview.net/forum?id=7hRuaiRlgZ
ICLR.cc/2025/Conference
2025
{ "note_id": [ "vEi8TMpmkN", "hsVU8pcO16", "YIj5uzKQ2X", "QFl7Ou5RBj", "Fsq407E1lB" ], "note_type": [ "comment", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1731900018795, 1730096085144, 1730696192127, 1730736194799, 1730259558256 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission6740/Authors" ], [ "ICLR.cc/2025/Conference/Submission6740/Reviewer_bZx5" ], [ "ICLR.cc/2025/Conference/Submission6740/Reviewer_PufB" ], [ "ICLR.cc/2025/Conference/Submission6740/Reviewer_iyLj" ], [ "ICLR.cc/2025/Conference/Submission6740/Reviewer_bXCN" ] ], "structured_content_str": [ "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": [\"This study proposes an efficient representational fine-tuning method which dynamically selects and intervene representations at the instance level.\", \"The experiment results validate the effectiveness of the proposed method on arithmetic and commonsense on three base models: LLaMA-2-7B, LLaMA-2-13B, and LLaMA-3-8B.\", \"This method is proven to be extended to few-shot learning.\"], \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"This study proposed an efficient dynamic representational fine-tuning method. The proposed method is novel.\", \"The authors conducted a wide range of experiments to validate the proposed method with eight diverse datasets across two scenarios, arithmetic and commonsense, and three base models.\"], \"weaknesses\": [\"The proposed method employs the same modular addition method as the previous study, ReFT, with the only difference being the dynamic finding of important representations via filtering. 
This may limit the impact of the novelty.\", \"Regarding the experimental results (Table 1-3), the proposed method has dramatically fewer trainable parameters than LoRA, but the difference is smaller than ReFT, which is the basis of the proposed method, by about 1/2. In my understanding, the difference is attributed to the hyper-parameter setting, such as the number of intervening tokens or dimension size of low-rank projection matrix. In addition, the proposed method requires running the models once to find the misaligned representation and then again to edit the representation, which doubles the total computational effort required. So, the total computational cost of the proposed method may be nearly equal to that of ReFT.\", \"The study did not discuss which filtering is best among SAF, MAF, and MSF.\", \"Lack of explanations in Section 3 (Experiments). For example, the explanation of the word \\\"continue\\\" in Table 1-3 is not present in the main text. There is a brief explanation in the caption of Table 1: \\\"The \\u2713 means that the misaligned representations is identified from the misaligned representations in the previous layer.\\\" but this is not sufficient for readers to understand. Overall, much information is not included in the main text but with a richer description in the table captions.\", \"questions\": [\"The authors propose the following three filtering approaches: Self-Referential Attention Filtering (SAF), Self-Referential Saliency Filtering (SSF), and Multi-Referential Attention Filtering (MAF). As a natural conception, why not try MSF (Multi-Referential Saliency Filtering)?\", \"I don't think that the representation found via the proposed filtering is necessarily a misaligned representation because the filtering process just finds out the impactful tokens through the lens of attention. 
It may be a terminological issue of how to define the word \\"misalignment\\", but what do the authors think about this?\", \"Equation (2): Is it a standard attention score of the Transformer (but there are no symbols like key and query), or is it a newly proposed attention score for this study? (Moreover, h^l_l seems to be weird.)\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes to improve representation fine-tuning (ReFT), a parameter-efficient method for selecting and linearly projecting certain hidden states in an LM at inference time which can improve performance on certain tasks. The paper proposes a method for better finding *which* representations in a model to change: instead of treating this as a hyperparameter selected using a validation set, the authors propose to select representations based on those that have a significant influence on other representations and/or on the future representations at the same token position (where influence is measured using attention scores and saliency metrics). The authors demonstrate performance gains over both ReFT and randomly selecting representation positions on a variety of tasks for Llama models of various sizes and versions.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": [\"Originality: to the best of my knowledge, there hasn't been work on changing how the ReFT method selects representations to edit. The proposal to select important representations on an instance-level basis as those with a large effect on later operations of the network makes sense.\", \"Significance: parameter-efficient finetuning is a valid and important direction for research. 
The authors' gains over ReFT seem consistently substantial on 3 different models and 8 datasets (though see below concerns about results).\"], \"weaknesses\": [\"Contribution: the abstract & introduction fail to convey the problem that the paper is trying to solve. This is partially because a) the term \\\"representations\\\" seems to be overloaded to mean both textual input tokens and hidden states, b) necessary background on the ReFT method from prior work is not provided, and c) important/key terms that are proposed like \\\"misaligned representations\\\" and \\\"crucial representations\\\" aren't ever formally defined. I followed the problem laid out in Figure 1, but did not understand Figure 2 at all, in part because the notation in the Figure isn't explained.\", \"Contribution: the stated claim of parameter efficiency (51x fewer params than LORA) is not unique to the proposed method, since the prior work this work builds off of/compares to, ReFT, achieves 15-65x fewer parameters.\", \"Related Work: the related works section is lacking a lot of historical work on information flow and attention pattern analysis in NLP (just to name a couple, see https://aclanthology.org/2020.acl-main.385/, https://aclanthology.org/P19-1580/, etc.) as well as on intervention on LLM representations (see, e.g., the ReFT paper's related work for reference)\", \"Method: some key details of the proposed method aren't explained and/or are extremely difficult to follow. To name a few- how are the most important representations chosen, given the proposed metrics in Section 2.2? Is there a threshold? A hyperparameter for how many to select? It appears there is a method for this based on Table 5, but they're not described anywhere in the main text. Is the update rule proposed in Section 2.3 identical to that in prior work? 
A lot of the results seem to hinge on something called \\\"Continue\\\" which is never described.\", \"Results: it is not described how hyperparameter search is conducted for the ReFT baseline or the proposed methods, which raises doubt over whether the stated gains of the proposed method over ReFT may be due to more tuning. Also, why are the performances you report for ReFT so much lower than those reported in the original paper? This makes it difficult for me to have full confidence in the presented results.\", \"Writing: overall, I found the paper quite difficult to understand, missing key details, and in need of substantial writing improvements. I elaborated on some more writing issues in \\\"Questions\\\" below.\"], \"questions\": [\"Comments/suggestions:\", \"citations that are not explicitly the subject of a sentence should use `\\\\citep{}` instead of `\\\\citet{}`\", \"many typos & grammar errors throughout (found 11 on first 3 pages): would recommend a thorough proofread. This is particularly noticeable in Fig. 1\", \"experiments are conducted on only one model family (Llama)\", \"notation issues, e.g., $M(\\\\mathbf{h})$ doesn't really make sense to indicate a subset of $\\\\mathbf{h}$. Values $f$ and $l$ of ReFT are never defined. Many other undefined or not clearly defined variables in Section 2.2.\", \"why must misaligned representations be a subset of crucial representations?\", \"citations should be provided for saliency scores. Attention scores as a signal of influence should be considered after the dot product is taken; see https://aclanthology.org/2020.emnlp-main.574/ for more discussion\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces Dynamic Alignment of Representations (DAR), a novel parameter-efficient fine-tuning method designed to enhance chain-of-thought reasoning in large language models. 
Unlike existing approaches that focus on general categories or continuous blocks of representations, DAR dynamically identifies and updates misaligned representations at the instance level. The method operates through two key mechanisms: identifying crucial representations via self-referential and multi-referential filtering and updating these representations through adaptive learning while keeping the base model frozen. The effectiveness of DAR was demonstrated across eight diverse datasets involving arithmetic and commonsense reasoning tasks, using LLaMA-2-7B, LLaMA-2-13B, and LLaMA-3-8B as base models. The method's efficiency and adaptability, including its potential for few-shot learning applications, make it a significant contribution to the field of language model fine-tuning.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. This paper introduces a novel dynamic approach to identifying misaligned representations. Meanwhile, it also develops a unique dual-filtering mechanism (self-referential and multi-referential) and proposes an adaptive updating strategy for representations.\\n2. The method is very efficient and only uses 51 times fewer parameters compared to LoRA.\\n3. The authors also conducted a lot of experiments to show the effectiveness of their method across 8 diverse datasets, covering both arithmetic and commonsense reasoning tasks. Also, the experiments verify the methods on three different base models, including Llama2-7B, Llama2-13B, and Llama3-8B.\", \"weaknesses\": \"1. Some parts of the paper are hard to follow. For example, Section 2 refers to Figure 1 multiple times when discussing how to identify crucial tokens based on their representations. However, Figure 1 doesn't contain any representations at all. Also, there is no detailed explanation of Figure 1, even in its caption and the Introduction, making the example quite confusing. 
It seems that \\\"per\\\" is the only crucial token identified, and only changing \\\"per\\\" to \\\"a\\\" can lead to the correct answer.\\n2. The paper interchangeably used \\\"misaligned representations\\\" and \\\"crucial representations.\\\" The authors propose two methods to identify crucial representation in Sections 2.1 and 2.2. Here, they detect the attention score and select the tokens with high incoming and outgoing attention scores. However, in all other parts of the paper (e.g., abstract, introduction, and other subsections in the method), the authors automatically treat those crucial representations as misaligned tokens without further explanation. This pose a significant doubt about the motivation of the paper.\\n3. The potential sensitivity to hyperparameter tuning isn't extensively analyzed. The paper doesn't thoroughly discuss the impact of the threshold in self-referential filtering.\", \"questions\": \"No\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces Dynamic Alignment of Representations (DAR), a novel parameter-efficient fine-tuning method for improving chain-of-thought reasoning in large language models. The method identifies and updates \\\"crucial representations\\\" that are likely to contain problematic misaligned representations, using two key strategies: self-referential filtering (representations receiving information mainly from themselves) and multi-referential filtering (representations influencing multiple others). Evaluated across eight datasets using LLaMA models, DAR achieves significant improvements over baseline methods while using far fewer parameters - achieving 17.47% improvement over base LLaMA-2-7B on GSM8K while using 51x fewer parameters than LoRA. 
The method proves effective for both arithmetic and commonsense reasoning tasks and extends well to few-shot learning scenarios.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. Instead of using fixed intervention points like previous work, this paper introduce a new concept of \\\"misaligned representations\\\" and develop a method to identify them through crucial representations. Their dual-path method combining attention and saliency scores feels like a natural but novel extension of existing ideas.\\n\\n2. The author validated their method across 8 different datasets using 3 different model sizes. The results are quite remarkable - achieving a 17.47% improvement on GSM8K while using 51x fewer parameters than LoRA shows both the effectiveness and efficiency of their method. The ablation studies are comprehensive and help build confidence in the approach.\\n\\n3. The paper is also well written. The methodology was explained step by step, with clear diagrams and intuitive examples supporting the mathematical formulations.\", \"weaknesses\": \"1. The relationship between self-referential and multi-referential filtering is not deeply explored. It would be valuable to understand when one approach might be preferable over the other.\\n\\n2. The base model comparison focuses only on the LLaMA family. Testing on other architectures (e.g., Mistral) would better demonstrate the method's generalizability.\\n\\n3. While parameter efficiency is demonstrated, there's no discussion of training efficiency or convergence compared to other methods.\", \"questions\": \"1\\uff0c Have you observed any patterns in which type of filtering (self-referential vs multi-referential) works better for different types of tasks?\\n\\n2\\uff0c The threshold \\u03b1 seems to significantly impact performance. 
Do you have any guidelines for selecting this parameter for new tasks?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}" ] }
7hM5597bCv
DIAR: Diffusion-model-guided Implicit Q-learning with Adaptive Revaluation
[ "Jaehyun Park", "Yunho Kim", "Sejin Kim", "Byung-Jun Lee", "Sundong Kim" ]
We propose a novel offline reinforcement learning (offline RL) approach, introducing the Diffusion-model-guided Implicit Q-learning with Adaptive Revaluation (DIAR) framework. We address two key challenges in offline RL: out-of-distribution samples and long-horizon problems. We leverage diffusion models to learn state-action sequence distributions and incorporate value functions for more balanced and adaptive decision-making. DIAR introduces an Adaptive Revaluation mechanism that dynamically adjusts decision lengths by comparing current and future state values, enabling flexible long-term decision-making. Furthermore, we address Q-value overestimation by combining Q-network learning with a value function guided by a diffusion model. The diffusion model generates diverse latent trajectories, enhancing policy robustness and generalization. As demonstrated in tasks like Maze2D, AntMaze, and Kitchen, DIAR consistently outperforms state-of-the-art algorithms in long-horizon, sparse-reward environments.
[ "Diffusion model", "offline RL", "Q-learning" ]
Reject
https://openreview.net/pdf?id=7hM5597bCv
https://openreview.net/forum?id=7hM5597bCv
ICLR.cc/2025/Conference
2025
{ "note_id": [ "wTfvT0F8bm", "u5hAe7aIs8", "ro1P9k7VOk", "qThoCp7TB0", "qADSf6gAmL", "mIzexdcG5b", "hnefF3zX6a", "fTea47CCrQ", "UjGPb1jfiI", "RjI9WIioWS", "MHKgrRQNfe", "JrvCDmGUPB", "IlPxTJxLfJ", "GOl4ul78jE", "EXrCGbvsMC" ], "note_type": [ "official_comment", "official_comment", "decision", "official_comment", "official_review", "official_comment", "official_review", "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1732616955823, 1732771065396, 1737524012860, 1732617542661, 1729341959537, 1732617687759, 1730461714992, 1734306495005, 1732617931355, 1732961247561, 1730687481141, 1732683994395, 1730386858552, 1732617817138, 1733091207685 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9902/Authors" ], [ "ICLR.cc/2025/Conference/Submission9902/Reviewer_VqTk" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission9902/Authors" ], [ "ICLR.cc/2025/Conference/Submission9902/Reviewer_y9BV" ], [ "ICLR.cc/2025/Conference/Submission9902/Authors" ], [ "ICLR.cc/2025/Conference/Submission9902/Reviewer_VqTk" ], [ "ICLR.cc/2025/Conference/Submission9902/Area_Chair_JVEV" ], [ "ICLR.cc/2025/Conference/Submission9902/Authors" ], [ "ICLR.cc/2025/Conference/Submission9902/Reviewer_f1C1" ], [ "ICLR.cc/2025/Conference/Submission9902/Reviewer_XDtn" ], [ "ICLR.cc/2025/Conference/Submission9902/Reviewer_y9BV" ], [ "ICLR.cc/2025/Conference/Submission9902/Reviewer_f1C1" ], [ "ICLR.cc/2025/Conference/Submission9902/Authors" ], [ "ICLR.cc/2025/Conference/Submission9902/Reviewer_XDtn" ] ], "structured_content_str": [ "{\"title\": \"General Response\", \"comment\": [\"Dear Reviewers and Meta Reviewer,\", \"Thank you for taking the time to review our submission, \\\"DIAR: Diffusion-model-guided Implicit Q-learning with Adaptive Revaluation.\\\" We deeply appreciate your thoughtful feedback and constructive suggestions, which have 
greatly contributed to improving the quality of our work. Below, we provide a detailed response to each of your comments.\", \"We have provided a more detailed explanation of our concept, DIAR, and clarified the distinctions between our approach and existing methods.\", \"This study focuses on addressing the challenges of long-horizon tasks with sparse rewards, and our approach was specifically designed to tackle these issues. Through experiments, we demonstrated that our method performs well in environments such as Maze2D, AntMaze, and Kitchen.\", \"We tested Adaptive Revaluation (AR) under scenarios that required certain assumptions, and we are considering extending it to a broader range of applications in the future.\", \"We revised complex sentences in the paper, improving their clarity to make them easier for readers to understand.\", \"Should you have any further questions or suggestions, please put your comments on OpenReview. We will address all the raised concerns according to the reviewing policy.\"]}", "{\"comment\": \"Thank you for your detailed response. I appreciate the improvement for IQL by incorporating the diffusion-generated latent vector, and I believe this is a reasonable approach. However, I still think there are several issues with the work:\\n\\n1. The presentation still needs improvement. I suggest that the authors highlight the best result for each task in the table and calculate the average performance of each method across. Additionally, I recommend a more detailed summary and analysis of the experimental results. For example, by comparing the results of IQL and DIAR, we can observe the contribution of the latent vector in learning more accurate Q-values.\\n\\n2. My main concern lies in the limitations of adaptive revaluation. Adaptive revaluation can only be applied to tasks where the value function monotonically increases with the timestep, such as the AntMaze and Maze2d tasks listed by the authors. 
However, the remaining locomotion tasks in D4RL do not meet this condition.\\n\\nTherefore, I maintain my score.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to Reviewer XDtn\", \"comment\": \"**Weakness 1,5.** We have reviewed the sentences that were difficult to understand in the paper and revised them to make the content easier to understand. The revised content is marked in blue text.\\n\\n**Weakness 2,3 & Q1.** Our approach is different from simply introducing value function learning into LDCQ. We aimed to make Q-function learning more precise by incorporating a value function, which helps ensure balanced training of the Q-function. By providing constraints from the value function during training, the Q-function can predict Q-values with greater accuracy. Additionally, the diffusion model, as a generative model, is capable of producing diverse and meaningful latent vectors. We wanted to leverage the diffusion model's ability to generate a variety of samples for both Q-function and value function learning. In the Bellman equation for Q-function learning in LDCQ, $Q(s_t,z) \\u2190 (r_{t:t+H}+\\gamma^H Q(s_{t+H}, argmax(Q(s_{t+H},z_i))))$, the reward term $r_{t:t+H}$ cannot be determined if $z$ is not sampled from the offline dataset. However, by introducing the value function, we can indirectly evaluate the value of latent vectors sampled from the diffusion model. As a result, DIAR uses not only the offline dataset but also latent vectors sampled from the diffusion model in the training process for Q-functions and value functions. With this approach, we demonstrated performance improvements on datasets such as Maze2D, AntMaze, and Kitchen. \\n\\n**Weakness 4.** We agree with the reviewer\\u2019s observation that many optimal algorithms have been proposed for D4RL tasks. However, as shown in our comparative analysis (e.g., Maze2D), there is still significant room for performance improvement. 
Nevertheless, we believe it is important to further validate the algorithm's performance across various environments. In the future, we plan to evaluate our model on a wider range of datasets to better assess its effectiveness.\\n\\n**Q2.** In this study, we focused on long-horizon sparse reward problems, such as those in Maze2D, AntMaze, and Kitchen. Accordingly, the methods proposed in DIAR were specifically designed to perform well in these environments, and they demonstrated significant performance improvements. However, we aim to expand our approach in the future to incorporate more general ideas, enabling strong performance in other environments, such as half-cheetah, walker2d and hopper.\"}", "{\"summary\": \"Offline RL utilizing Q-functions can be further enhanced if it effectively handles OOD data generated by diffusion. The paper suggests improving IQL to train a value function using OOD but constrained actions, by instead using a skill prior learned by diffusion. Furthermore, the paper suggests adaptive re-evaluation, which re-plans the trajectory if the future value function becomes worse than the current value function. DIAR outperforms prior approaches in long-horizon, sparse-reward environments.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"S1. DIAR consistantly outperforms prior works\", \"Table 1 shows that DIAR outperforms prior offline RL & diffusion-based offline RL works in 7 out of 9 tasks.\", \"S2. Simplicity of proposed Adaptive Re-evaluation\", \"Adaptive Re-evaluation simply compares the future value function and current value function while decision making, preventing the policy from heading to worse states.\", \"With the assumption of goal-conditioned RL, the method is valid.\"], \"weaknesses\": [\"W1. Lack of novelty compared to LDCQ\", \"It seems that DIAR is an incremental improvement of LDCQ, which changed the base offline algorithm from BCQ to IQL. 
Specifically, DIAR uses the same procedure as LDCQ for getting the latent priors, using $\\beta$-VAE for latent representation and getting the latent priors via diffusion.\", \"Please correct me if my understanding is incorrect in Q1.\", \"W2. Restricted to goal-conditioned tasks\", \"Due to the sparse-reward assumption for the AR (adaptive re-evaluation), the application is limited to goal-conditioned tasks.\", \"Since DIAR is limited to goal-conditioned tasks, comparison with offline goal-conditioned RL algorithms will be insightful. A quick way to improve this point would be adding the results of HIQL [1], which already has experimental results for AntMaze and Kitchen environments. If the authors have enough time and resources, comparison with other offline GCRL methods such as GoFar [2], SMORe [3] will be informative.\", \"Additionally, it will be exciting if there is a way to generalize the AR process to dense-reward tasks, making the whole algorithm generally applicable.\", \"W3. Effectiveness of re-evaluation\", \"According to Table 2, the performance improves in 5 tasks, and decreases in 4 tasks.\", \"Analyzing the failure cases of re-evaluation will be informative to further understand this behavior.\", \"[1] Seohong Park, et al. HIQL: Offline Goal-Conditioned RL with Latent States as Actions, NeurIPS 2023\", \"[2] Yecheng Jason Ma, et al. Offline Goal-Conditioned Reinforcement Learning via f-Advantage Regression, NeurIPS 2022\", \"[3] Harshit Sikchi, et al. SMORE: Score Models for Offline Goal-Conditioned Reinforcement Learning, ICLR 2024\"], \"questions\": [\"Q1. Lack of novelty compared to LDCQ\", \"DIAR seems to be an IQL version of LDCQ + Adaptive re-evaluation. Is this understanding correct?\", \"Would you mind highlighting the differences between LDCQ and DIAR?\", \"Q2. Comparison with goal-conditioned RL method\", \"Can you compare DIAR with offline goal-conditioned RL methods (e.g. 
HIQL, GoFar, SMORe)?\", \"Comparison with goal-conditioned RL will be informative for those willing to apply DIAR for goal-conditioned tasks.\", \"Q3. Why does adaptive re-evaluation sometimes degrade performance?\", \"The performance decreases in 4 out of 9 tasks when AR is applied.\", \"Examples for the failure cases of adaptive re-evaluation (e.g. Maze2D environments) and analysis for those will be informative to further understand the results.\", \"Q4. Effectiveness of AR\", \"Can you apply AR for other methods (e.g. LDCQ, IQL) and see the improvements? One could apply AR to the Q-function instead, if there is no value function in the method. Please do not hesitate to note any challenges you encounter in applying AR to other methods.\", \"If you have any, can you share the idea of generalizing the AR process to dense-reward tasks? It will be exciting if there is a way to generalize the AR process, making the whole algorithm generally applicable.\", \"Q5. Loose bound of AR\", \"While deriving the formula of AR, it seems that the tight bound for $V(s_{t+H})$ is $\\gamma^{-H} V(s_t)$.\", \"Can you share your thoughts on using a tighter bound $V(s_{t+H}) \\geq \\gamma^{-H} V(s_t)$ instead of $V(s_{t+H}) \\geq V(s_t)$?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer VqTk\", \"comment\": \"**Weakness 1.** We reviewed and revised sentences from the paper that were difficult to understand. The revised content is highlighted in blue for clarity.\\n\\n**Weakness 2.** As the reviewer mentioned, LDM and IQL are existing methods in their original forms. In our model, DIAR, LDM serves two purposes: (1) generating candidate latent vectors for decision-making and (2) aiding in Q-function training. However, our model is not simply a combination of IQL and LDM. 
We aimed to make Q-function learning more precise by introducing a value function, which ensures balanced training of the Q-function. By training the Q-function with constraints derived from the value function, we can achieve more accurate Q-value predictions. Additionally, the diffusion model, as a generative model, can produce a diverse range of meaningful latent vectors. We sought to leverage this capability for both Q-function and value-function training. In the Bellman equation used for Q-function learning in LDCQ, $Q(s_t,z) \\u2190 r_{t:t+H}+\\\\gamma^H Q(s_{t+H}, argmax(Q(s_{t+H},z_i)))$, the reward term $r_{t:t+H}$ is unknown if $z$ is not sampled from the offline dataset. However, by incorporating the value function, we can indirectly evaluate the value of latent vectors sampled from the diffusion model. Thus, DIAR utilizes both the offline dataset and latent vectors generated by the diffusion model during Q-function and value-function training. With this approach, we demonstrated performance improvements on datasets such as Maze2D, AntMaze, and Kitchen.\\n\\n**Weakness 3.** The need to incorporate a latent diffusion model (LDM) is also highlighted in the LDCQ paper for several reasons:\\n- Flexible decoder design: Since the latent diffusion model operates in the latent space, even discrete action spaces can be effectively represented. This allows for more flexibility in designing the decoder.\\n- Temporal abstraction: With the help of LDM, a powerful generative model, it is possible to create a temporally abstract and information-dense latent space.\\n- Faster training and inference: Generating latent vectors is much more efficient than directly generating action-state sequences, as done in traditional methods, leading to faster training and inference processes.\\n\\n**Weakness 4.** In Maze2D and AntMaze, rewards are given only when the agent reaches the goal, and all other states yield a reward of 0. 
Under the assumption of an expert policy, the value increases steadily as the agent gets closer to the goal. However, if additional rewards are provided for states outside the goal region, this condition may no longer hold, potentially leading to performance degradation when using AR. In the case of Kitchen, there are several intermediate sub-goals, and rewards are given when these sub-goals are achieved. In such scenarios, it can be challenging to make optimal decisions using AR based solely on the value function. We would greatly appreciate any ideas, suggestions, or feedback on AR, as they could significantly help us improve our algorithm.\"}", "{\"summary\": \"This paper presents a diffusion-guided offline reinforcement learning method, DIAR, designed to address challenges posed by out-of-distribution samples and long-horizon planning. Specifically, DIAR first trains a VAE to extract trajectory representations, which are then used as generation targets for training a corresponding diffusion model. DIAR subsequently leverages the representations generated by the diffusion model to support the learning of the value function and policy network. Finally, the authors validate the effectiveness of DIAR through experiments on sparse reward tasks.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. Offline RL problem in sparse reward tasks is significant, and previous methods have struggled to effectively address it.\\n2. Compared to the baselines, DIAR demonstrates superior overall performance on sparse reward tasks.\", \"weaknesses\": \"1. The writing in this paper could be improved, as the logic is not always coherent, and some sentences are difficult to understand. For example, \\u201cHowever, offline RL relies on the given dataset, so the learned policy may be inefficient or misdirected if the data is poor quality or biased.\\u201d in lines 44-46.\\n2. 
The novelty of the method is limited, as both learning a latent diffusion model [1] and implicit Q-learning [2] are existing approaches.\\n3. The use of the diffusion model lacks clear motivation. The authors need to further explain why it is necessary to input the latent representation $z$ during the IQL stage.\\n4. The adaptive revaluation approach is not entirely reasonable, as expert policies in many tasks do not satisfy $V(s_{t+1}) > V(s_t)$. For example, this is often the case in finite-horizon tasks with positive reward functions. Consequently, the theoretical analysis in the paper relies on overly strong assumptions that do not generalize well to other tasks.\\n\\n[1] Reasoning with Latent Diffusion in Offline Reinforcement Learning. ICLR, 2024.\\n\\n[2] Offline Reinforcement Learning with Implicit Q-Learning. ICLR, 2022\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"Authors present a model-based offline RL method using diffusion models that outputs sequence-level distributions to handle long horizons and deal with compounding error issues that occur with 1-step models. Their method also addresses out-of-distribution samples with a learned value function.\", \"strengths\": \"DIAR has strong performance on D4RL and the adaptive reevaluation component is novel.\", \"weaknesses\": \"The paper clarity needs improvement and contribution appears weak -- it combines components from existing works and evaluation is performed on a narrow set of environments.\\n\\nOverall, the paper still seems quite immature -- the writing needs improvement and most of the contributions come from existing works. The adaptive revaluation, which is the key novelty of the paper, demonstrates limited applicability, and its empirical effectiveness remains unclear. 
For these reasons, I vote to reject.\", \"additional_comments_on_reviewer_discussion\": \"Authors were unable to satisfactorily address the main concerns of reviewers in the rebuttal phase, leading reviewers to maintain their scores.\"}", "{\"title\": \"Response to Reviewer y9BV\", \"comment\": \"**Q1.** DIAR is different from simply adding value function learning to LDCQ. Our goal was to make Q-function learning more precise, so we introduced a value function to ensure balanced training of the Q-function. By incorporating constraints from the value function during training, we can predict Q-values with greater accuracy. Additionally, the diffusion model, as a generative model, can produce a wide variety of meaningful latent vectors. We aimed to leverage this capability to enhance both Q-function and value-function learning. In the Bellman equation for Q-function learning in LDCQ, $Q(s_t,z) \\u2190 r_{t:t+H}+\\\\gamma^H Q(s_{t+H}, argmax(Q(s_{t+H},z_i)))$, the reward term $r_{t:t+H}$ is undefined if $z$ is not sampled from the offline dataset. However, by introducing the value function, we can indirectly evaluate the value of latent vectors generated by the diffusion model. As a result, DIAR uses not only the offline dataset but also latent vectors sampled from the diffusion model in the training process for both Q-functions and value functions. This approach led to significant performance improvements on datasets such as Maze2D, AntMaze, and Kitchen.\\n\\n**Q2.** Our goal was to address offline reinforcement learning problems using a diffusion model. To this end, we compared DIAR with algorithms like Diffuser and DD, as well as skill-based algorithms. While the tasks in our experiments were goal-based, we did not explicitly provide the goal as input or solve tasks where the objective is to directly reach a specified destination. 
However, since DIAR can also function as a goal-conditioned RL algorithm, it would be possible to include the goal as part of the input and conduct additional experiments for comparison under goal-conditioned settings. Evaluating the ability to find an optimal path from the current state to a specified goal is as important as the experiments we have conducted so far and could offer further insights into DIAR\\u2019s capabilities.\\n\\n**Q3.** The value function is used to evaluate whether the current decision is appropriate, and if it is not, a new decision is generated. However, it is challenging to train the value function perfectly. If the value function fails to accurately assess the value of the current state, the new decision generated may not be more optimal. Additionally, since we do not have access to unlimited data, this limitation can pose challenges for the DIAR model. Currently, AR is designed based on a few assumptions, but we plan to refine it into a more general algorithm in the future, making it applicable to a wider range of tasks and domains.\\n\\n**Q4.** The purpose of introducing AR was to address the possibility of finding a more optimal choice when selecting a long action sequence all at once. Therefore, if there is a method to evaluate the current decision, AR can be applied even without relying on the value function. Additionally, AR can be used in environments with dense rewards. However, to enable more general applicability, the algorithm needs to be further expanded and refined.\\n\\n**Q5.** As the reviewer mentioned, a tighter bound for the value is indeed $\\\\gamma^{-H}V(s_t)$. In Section 4.3, the bound is calculated using $\\\\gamma^{-H}V(s_t)$. However, when applying AR, we did not explicitly use $\\\\gamma$. Instead, we evaluated the latent vectors' values based on the value function $V(s_t)$. 
To account for the possibility of a noisy value function, we incorporated a slight margin, allowing AR to operate effectively when there is a significant difference in value. When the value differences are small, the algorithm tends to follow the original decision more closely, maintaining a balance between exploration and adhering to the initial choice.\"}", "{\"comment\": \"Thank you for the authors' response. I appreciate the clarification that DIAR is different from LDCQ. However, the introduction of the value function and the use of the diffusion model to generate latent variables are techniques that have been attempted in previous works. The article combines these techniques, and it seems that the novelty is still insufficient to support the claim. Therefore, I maintain my score.\"}", "{\"summary\": \"The paper proposes an offline RL algorithm utilizing latent diffusion skill models for temporal abstraction, and Q learning with these skills. During policy rollouts, the learnt value function and temporally abstract world model are used to evaluate whether the currently used skill is optimal. If not, a new skill latent is selected. The method is demonstrated on D4RL tasks.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"1. The method has good performance on D4RL compared to baselines.\", \"weaknesses\": \"1. The paper is messily written in my opinion, and it is difficult to parse what the major contribution of the paper is. This seems like primarily an engineering paper, but this is not clearly communicated.\\n2. The method is a combination of existing offline RL algorithms (primarily LDCQ and IQL), but there is no proper reason given for this particular configuration of components. The only novel addition seems to be the use of the value function for deciding when to stop executing a skill, but this is a simple iterative improvement.\\n3. 
Novelty is not strictly necessary, but the additions made here are not well justified at all, with no coherent story surrounding them.\\n4. This is not a direct criticism of the paper, but D4RL has been quite over-optimized in the offline RL community now; small engineering improvements to boost the score in this benchmark do not give any signal to the true value of the method.\\n5. A more general writing criticism: a lot of the paper repeats itself and feels like padding more than informative content. For example, section 4.3 \\u201cTheoretical Analysis of DIAR\\u201d is very elementary and adds no value.\", \"questions\": \"1. What is the primary contribution of the paper? Do the authors pitch the paper as a novel offline RL algorithm?\\n2. Since the authors only evaluate on D4RL, why is there no evaluation of the locomotion tasks (half-cheetah, walker2d, hopper)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"**Q1.**\\n\\nThank you for clarifying the distinction between DIAR and LDCQ. I now have a clearer understanding of the motivation behind incorporating the value function and the differences between the two approaches.\\n\\nHowever, I still perceive DIAR as an \\\"IQL version of LDCQ\\\" combined with \\\"adaptive revaluation.\\\" While the authors argue that DIAR is distinct from IQL + LDCQ, the critic updates closely mirror those of IQL, with the primary difference being the skill-based approach. Additionally, the diffusion latent extraction appears to be derived from LDCQ. 
Specifically, lines 251\\u2013264 seem to follow the IQL approach with $z\\\\_t$ replacing $a\\\\_t$ (I would strongly recommend citing IQL in relation to the expectile loss here).\\n\\nWhile combining existing methods is a valid and valuable contribution, it typically requires strong and broad empirical support to compensate for limited novelty.\\n\\n&nbsp;\\n\\n**Q2.** \\n\\nI appreciate the clarification that DIAR is not a goal-conditioned RL algorithm. However, its applicability appears largely restricted to goal-conditioned tasks. Without demonstrating its performance in more diverse environments, comparisons with general-purpose algorithms may be unfair, as DIAR seems particularly tailored to goal-conditioned tasks.\\n\\n&nbsp;\\n\\n**Q3, Q4, Q5.** \\n\\nThank you for addressing the practical considerations surrounding adaptive revaluation (AR). \\nI agree that refining adaptive revaluation into a more generalized framework is a promising direction for future work.\\nHowever, applicability of AR still remains limited, and effectiveness of AR is not strongly demonstrated in the current version.\\n\\n&nbsp;\\n\\nDespite the clarifications, I still have two remaining concerns:\\n* **Novelty**: The critic updates largely follow IQL, and the diffusion latent extraction process is derived from LDCQ.\\n* **Effectiveness of Adaptive Revaluation**: The adaptive revaluation, which is the key novelty of the paper, demonstrates limited applicability, and its empirical effectiveness remains unclear.\\n\\nDue to these unresolved concerns, I will maintain my current score.\"}", "{\"summary\": \"The paper presents a novel offline reinforcement learning (RL) framework that leverages diffusion models to address challenges such as out-of-distribution samples and long-horizon decision-making. By introducing an Adaptive Revaluation mechanism, the DIAR framework dynamically adjusts decision lengths based on current and future state values, enhancing long-term decision accuracy. 
The Q-value overestimation is mitigated through the generation of diverse latent trajectories. Empirical results on tasks like Maze2D, AntMaze, and Kitchen demonstrate that DIAR consistently outperforms state-of-the-art algorithms, underscoring its potential for real-world applications in robotics and autonomous systems.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The introduction of the Adaptive Revaluation mechanism is novel. It dynamically modifies decision lengths based on comparative state values, enhancing decision-making flexibility.\\n2. This approach is systematically evaluated, with performance improvements validated by extensive empirical results, as shown by the statistically significant outperformance of DIAR over comparable models.\\n3. The structure of the paper is well-organized and logical, breaking down the methodology, implementation, and adaptive mechanisms in detail.\", \"weaknesses\": \"1. The experiments are primarily conducted on three well-known offline RL environments (Maze2D, AntMaze, and Kitchen) of total 9 datasets. While these provide a foundation, they are limited in diversity, which could restrict understanding of DIAR\\u2019s generalizability. Including more datasets makes the environmental results more persuasive.\\n2. Adaptive revaluation is proposed as a mechanism to improve decision flexibility, yet its theoretical grounding is somewhat limited. For instance, the model could benefit from a more rigorous analysis of how adaptive revaluation specifically affects trajectory optimization, particularly in long-horizon scenarios where trajectory value predictions might become noisy or overly optimistic.\\n3. The mixed training of Q and V, as well as the use of weighted training techniques, has not been evaluated through an ablation study, making it unclear what their contributions are.\\n4. 
There is no sensitivity analysis on the hyper-parameters, such as $\\\\tau$ and $\\\\beta$.\", \"questions\": \"1. The article states that the mixed training of Q and V reduces overestimation issues by utilizing latent states generated by the diffusion model. However, I question the effectiveness of this approach in mitigating overestimation, as the latent states from the diffusion model are also not part of the original data. Additionally, there is no pessimistic approach incorporated into the value function during this process. How can this be justified?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer f1C1\", \"comment\": \"**Weakness 1.** In this study, we focused on long-horizon sparse reward problems, such as those in Maze2D, AntMaze, and Kitchen. Accordingly, the methods proposed in DIAR were designed to perform well in these environments, achieving significant performance improvements. However, we aim to expand our approach in the future to work effectively in other environments, such as half-cheetah, walker2d and hopper, by developing more generalizable ideas. We are always open to new ideas or discussions about extending the DIAR algorithm to a broader range of tasks.\\n\\n**Weakness 2.** It is challenging to train a value function without any errors. A noisy value function struggles to accurately assess the value of a state, which can hinder the ability to make optimal decisions. In fact, experiments analyzing the impact of AR show that it improves performance in some cases but causes slight decreases in others. Additionally, since we do not have access to unlimited data, this limitation can make it difficult for the DIAR model to perform well in certain scenarios. 
Currently, AR is designed based on several assumptions, but we plan to refine it into a more general algorithm in the future, enabling its application across a wider range of domains.\\n\\n**Weakness 3 & Q1.** DIAR is not simply an extension of LDCQ with a value function added. Our goal was to make Q-function learning more precise, and to achieve this, we introduced a value function to ensure balanced training of the Q-function. By incorporating constraints derived from the value function, we can predict Q-values with greater accuracy. Additionally, the diffusion model, as a generative model, can produce a diverse range of meaningful latent vectors. We sought to leverage this capability for both Q-function and value-function learning. In the Bellman equation used for Q-function learning in LDCQ, $Q(s_t,z) \\u2190 r_{t:t+H}+\\\\gamma^H Q(s_{t+H}, argmax(Q(s_{t+H},z_i)))$, the reward term $r_{t:t+H}$ is undefined if $z$ is not sampled from the offline dataset. However, by introducing the value function, we can indirectly evaluate the value of latent vectors sampled from the diffusion model. As a result, DIAR incorporates not only the offline dataset but also latent vectors generated by the diffusion model in the training process for Q-functions and value functions. Using this approach, we demonstrated significant performance improvements on datasets such as Maze2D, AntMaze, and Kitchen.\\n\\n**Weakness 4.** According to the LDCQ paper, a fixed value for $\\\\beta$ is used, highlighting its advantage of being less sensitive to various hyperparameters. In our current experiments, we also use a fixed value of 0.9 for $\\\\tau$. 
However, as the reviewer suggested, analyzing the sensitivity of parameters like $\\\\tau$ could be a valuable direction for further investigation.\"}", "{\"comment\": \"I thank the authors for their response; however, I still think the addition of the Value function makes this a combination of IQL and LDCQ, which is too small a change in my opinion. I will keep my score.\"}" ] }
7gGl6HB5Zd
Manifold Induced Biases for Zero-shot and Few-shot Detection of Generated Images
[ "Jonathan Brokman", "Amit Giloni", "Omer Hofman", "Roman Vainshtein", "Hisashi Kojima", "Guy Gilboa" ]
Distinguishing between real and AI-generated images, commonly referred to as 'image detection', presents a timely and significant challenge. Despite extensive research in the (semi-)supervised regime, zero-shot and few-shot solutions have only recently emerged as promising alternatives. Their main advantage is in alleviating the ongoing data maintenance, which quickly becomes outdated due to advances in generative technologies. We identify two main gaps: (1) a lack of theoretical grounding for the methods, and (2) significant room for performance improvements in zero-shot and few-shot regimes. Our approach is founded on understanding and quantifying the biases inherent in generated content, where we use these quantities as criteria for characterizing generated images. Specifically, we explore the biases of the implicit probability manifold, captured by a pre-trained diffusion model. Through score-function analysis, we approximate the curvature, gradient, and bias towards points on the probability manifold, establishing criteria for detection in the zero-shot regime. We further extend our contribution to the few-shot setting by employing a mixture-of-experts methodology. Empirical results across 20 generative models demonstrate that our method outperforms current approaches in both zero-shot and few-shot settings. This work advances the theoretical understanding and practical usage of generated content biases through the lens of manifold analysis.
[ "zero-shot", "few-shot", "generated image detection", "total-variation", "curvature", "score function", "diffusion models" ]
Accept (Poster)
https://openreview.net/pdf?id=7gGl6HB5Zd
https://openreview.net/forum?id=7gGl6HB5Zd
ICLR.cc/2025/Conference
2025
{ "note_id": [ "t932xMN7VU", "sC0RocaajX", "rq041Q2yn3", "rKEUPAYnRf", "qifndiESrt", "nyYZ5gRQXS", "m7QMtH2q5a", "eNLEHARlGn", "dO61g8VgK0", "XiZsrvJ4MZ", "TaaNTZEX1m", "SSuUzjgf1N", "OZACFTrzII", "LPqM23Gf7u", "FgBWjqNOoh", "DTvKaB0u0s", "5KA1dcqLHm", "3k7YjNnumA", "0H8d2ABaSf" ], "note_type": [ "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1734858631050, 1729688987995, 1732012826056, 1733065504584, 1733152963448, 1732009834565, 1730374726194, 1732011876728, 1730580941002, 1732947021307, 1732022626603, 1732783021516, 1732516714004, 1733229210933, 1737523963079, 1730643453801, 1733044013094, 1732979028245, 1732014336966 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission9132/Area_Chair_x2qK" ], [ "ICLR.cc/2025/Conference/Submission9132/Reviewer_qnVq" ], [ "ICLR.cc/2025/Conference/Submission9132/Authors" ], [ "ICLR.cc/2025/Conference/Submission9132/Authors" ], [ "ICLR.cc/2025/Conference/Submission9132/Reviewer_GHKQ" ], [ "ICLR.cc/2025/Conference/Submission9132/Authors" ], [ "ICLR.cc/2025/Conference/Submission9132/Reviewer_GHKQ" ], [ "ICLR.cc/2025/Conference/Submission9132/Authors" ], [ "ICLR.cc/2025/Conference/Submission9132/Reviewer_DPAQ" ], [ "ICLR.cc/2025/Conference/Submission9132/Reviewer_Xh3J" ], [ "ICLR.cc/2025/Conference/Submission9132/Authors" ], [ "ICLR.cc/2025/Conference/Submission9132/Authors" ], [ "ICLR.cc/2025/Conference/Submission9132/Authors" ], [ "ICLR.cc/2025/Conference/Submission9132/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission9132/Reviewer_Xh3J" ], [ "ICLR.cc/2025/Conference/Submission9132/Reviewer_qnVq" ], [ "ICLR.cc/2025/Conference/Submission9132/Authors" 
], [ "ICLR.cc/2025/Conference/Submission9132/Authors" ] ], "structured_content_str": [ "{\"metareview\": \"The paper presents a novel mathematical framework for quantifying bias in generated images. The key idea is that in generated images, there is bias that can distinguish them from real images. All reviewers are positive of the value of the work, its technical novelty and its strong empirical justification.\", \"additional_comments_on_reviewer_discussion\": \"There were no significant comments or changes during the reviewer discussion.\"}", "{\"summary\": \"The paper presents a novel approach to detecting AI-generated images using a zero-shot and few-shot framework. By analyzing biases inherent in the manifold of pre-trained diffusion models, the authors introduce a new mathematical criterion based on score functions, curvature, and gradient analysis. This approach generalizes well to unseen generative techniques and outperforms existing methods in both zero-shot and few-shot scenarios. Extensive experiments across a diverse set of generative models further validate the effectiveness of the proposed method.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper provides a sound theoretical foundation by integrating manifold analysis with diffusion models, advancing the field of generated image detection.\\n\\nThe empirical results show strong performance, with the proposed method outperforming current state-of-the-art approaches.\\n\\nThe authors conducted experiments across various datasets and generative techniques, including GANs, diffusion models, and commercial tools, providing strong evidence for the robustness of the method.\", \"weaknesses\": \"According to the article, the proposed curvature and gradient based metric for detecting generated images is closely related to the score function. 
However, it is unclear why this method also performs well with models where score functions are not inherently applicable, such as CycleGAN. The authors are encouraged to clarify this connection and explain why the proposed metric shows strong performance even in such cases where score function analysis is not directly relevant.\\n\\nThere are citation issues on page 7 of the article, where footnotes and page numbers are not correctly referenced. The authors should revise the citation formatting to ensure all references are accurate and properly aligned with the content.\\n\\nThe proposed method appears to fit naturally within a zero-shot framework, relying solely on input samples and corresponding perturbations. It is unclear why the few-shot setting was introduced, given that zero-shot scenarios are typically more challenging and often reflect real-world situations. The authors are encouraged to clarify the need for a few-shot setting and explain why zero-shot alone would not suffice as a more compelling and realistic approach.\", \"questions\": \"Please see weakness.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the thoughtful feedback and for the careful attention given to our methodology and mathematical details. Below we provide our answers.\\n\\n**Q1:** Improve the implementation details in Section 4.3\\n\\n**A1:** Thank you for raising this issue. To improve the readability of Section 4.3 and the ease of reproducibility, the calculation pipeline of $C(x_0)$ was broken down into three clear steps, referencing Fig. 1 to aid with a high-level illustration. Additionally, the CLIP mappings have been restructured into a two-stage process and are now explained in a less formal, more intuitive manner, reducing the overly mathematical approach that previously hindered clarity. We also separated best practices from insights. 
\\n\\nPlease do not hesitate to raise further questions regarding unclear points or any additional details that may have been overlooked. \\n\\n**Q2:** Typos\\n\\n**A2:** We have fixed the typos and corrected equation presentations - thank you for finding these.\\n\\n**Q3:** Will incorporating other diffusion models that are more advanced than SD1.4 into the method improve its performance?\\n\\n**A3:** In Sec. 5.2 under **Sensitivity and Ablation Analysis** we have already evaluated our method using two advanced diffusion models: **SD v2 Base** and **Kandinsky 2.1**, where the latter was released in September 2023 - the same month as SDXL. We acknowledge this analysis may have been overlooked as it was only mentioned in the text. In the revised manuscript, we have highlighted this experiment by adding a summary table of the results \\u2013 see Table 2.\\n\\nAlthough SD v2 Base and Kandinsky 2.1 models differ in size and generation techniques, our experiments demonstrated consistent results, with a minor AUC decrease of less than 1%. This consistency highlights the robustness of our method across different diffusion models. With that said, it also demonstrates that more advanced models do not necessarily improve the performance.\\n\\nIf further details or additional experiments on this matter would be helpful, please let us know and we would be happy to provide them.\"}", "{\"comment\": \"Thank you for the feedback, and for the recognition of our contributions to the field.\"}", "{\"comment\": \"Thank the authors for the response. This addressed most of my concerns, I will maintain my score.\"}", "{\"comment\": \"We greatly appreciate the reviewers' recognition of our work and the constructive feedback. This prompted us to conduct additional experiments on robustness (Table 2, Fig. 10) and computational efficiency (Appendix Sec. G.3), strengthening the rigor and validation of our approach. 
Below we answer the questions in detail:\\n\\n**A1:** Thank you for pointing this out. We conducted additional experiments to analyze our method's inference time and memory requirements. Using a single A100 GPU, we observed a runtime of **2.1 seconds per sample** with fully parallelized processing of the perturbations.\\nFor comparison, our primary competitor, AEROBLADE, requires **5.4 seconds per sample** on the same A100 GPU, making our method significantly more computationally efficient in terms of runtime.\\nThe additional memory required by our method is negligible, as it applies the diffusion model during inference and records only a single scalar value per sample as the final result.\\nThis analysis has been included in the Appendix (see Section G.3) for further reference.\\n\\n**A2:** Thank you for highlighting the importance of hyperparameter robustness. We provide a sensitivity analysis for the perturbation no. and the spherical noise level hyperparameters in Section 5.2 (\\\"Sensitivity and Ablation Analysis\\\") with further details in Appendix Section G.2.\\n\\nRegarding the **perturbation no.**, our experiments indicate that increasing the no. of perturbations $s$ improves detection performance. Specifically, testing $s$=4, 8, 16, 32, and 64 resulted in average AUC scores of 0.828, 0.829, 0.830, 0.833, and 0.835, respectively. To address the reviewer\\u2019s suggestion, **we also evaluated the robustness to perturbation no. across different models**. These experiments revealed only minimal variations in performance (0.1\\u20130.2%) with increased $s$ - see Fig. 10 in Appendix Section G.2.\\n\\nWe also evaluate our method under varied **spherical noise levels**. In our experiment, increasing the radii by a factor of 10 resulted in a 1.5% performance decrease, while increasing by a factor of 100 led to a 2% decrease. 
\\n\\nPlease note that for improved clarity and presentation, we have added a table in the main manuscript that summarizes the results of this experiment (see Table 2).\\n\\nThese results show that we remain SOTA even under changes in both hyper-parameters. As guidance for noise strength search we have used $\\\\sqrt{d}$ as a point of reference to the sphere's radius, because of its theoretical relation to $\\\\mathcal{N}(0,I)$ in $d$ dimensions (where $d$ is the denoised signal dimension). We also propose to test with higher $s$ and smaller perturbations, since the trend of our tests indicate that these may obtain improved performance.\\n\\n**A3:** Thank you for this valuable feedback. Indeed, evaluating our method on various image post-processing options would provide valuable information on the robustness of our method to real-world possibilities. To address this, we follow [1, 2] and test Gaussian blur post-processing. \\n\\nWe conducted experiments using Gaussian blur. Specifically, we applied OpenCV\\u2019s Gaussian blurring functionality with two kernel sizes (and the default associated variance): medium blur (Kernel Size = 3) and high blur (Kernel Size = 7). Under medium blur conditions, our method\\u2019s accuracy decreased by 1.2%, while under high blur conditions, accuracy reductions were 6.2%.\\n\\nThis analysis has been included in the main manuscript in Section 5.2 \\u2018Sensitivity and Ablation Analysis\\u2019 part and in the Appendix (see Section G.2) for further reference.\\n\\n[1] Ricker, J., Lukovnikov, D., & Fischer, A. (2024). AEROBLADE: Training-Free Detection of Latent Diffusion Images Using Autoencoder Reconstruction Error. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 9130-9140).\\u200f\\n\\n[2] He, Z., Chen, P. Y., & Ho, T. Y. (2024). RIGID: A Training-free and Model-Agnostic Framework for Robust AI-Generated Image Detection. 
arXiv preprint arXiv:2405.20112.\\u200f\"}", "{\"summary\": \"This work addresses the challenge of distinguishing between real and AI-generated images by analyzing biases on the probability manifold of pre-trained diffusion models. The authors develop a method that offers a scalar criterion for classification in zero-shot settings, and experiments demonstrate its effectiveness against current methods.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The theoretical analysis of current diffusion models\\u2019 score functions is comprehensive and novel, potentially inspiring further research in this area.\\n\\nThe proposed method generalizes well to unseen generative techniques and achieves superior performance over existing approaches in both zero-shot and few-shot settings.\", \"weaknesses\": \"The implementation details in Section 4.3 should be elaborated further to enhance readability and reproducibility.\", \"typo_errors\": [\"L153 and L149, inconsistent use of $\\\\mathcal N$ and $N$.\", \"L146 and L170, inconsistent use of $\\\\mathbb{R}$ and $R$\", \"L157, better use latex log $\\\\log$ for clarity.\", \"L191, use latex \\\\` for upper quotas.\", \"inconsistent use of Sec. and Section\", \"L298-L299, unexpected equation.\"], \"questions\": \"The author choose SD-1.4 to implement the proposed method, have the author tried other diffusion models, especially recently more advanced methods, such as SDXL and SD3. Does this helps improve the performance?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your constructive feedback, which has been instrumental in improving our work. In response, we empirically demonstrate our key assumptions (Figs. 
3.a, 4.c and 7) and test the reliability of our approximations (Fig. 3.b-d) through analyzable and illustrative cases.\\n\\n**A1:** Thank you for raising the point on \\u201cregion of local maxima assumption\\u201d. To this end we conducted an experiment with a 2D Gaussian Mixture Model (GMM) dataset - see Appendix B for details. This setting allows for tractable Probability Density Function analysis and was inspired by [1], which also used 2D GMMs to gain insights into diffusion models. As shown in Fig. 3.a, reverse diffusion processes consistently terminate near local maxima, supporting our assumption. This 2D experiment offers intuitive, tractable evidence for similar behaviors in high-dimensional models. Further statistics at larger scale support this in Fig. 7 in the Appendix.\\n\\nFurthermore, this assumption aligns with prior observations: Fig. 2 in [2] illustrates a similar assumption for text generation, albeit relying on an LLM\\u2019s explicit probability modeling. Fig. 2 in [1] shows how a diffusion model learns basins of local maxima, which \\\"overtake\\\" a non-maxima basin present in the true probability distribution. Recently, [3] associated suboptimally trained diffusion models with \\\"bumpy\\\" probability manifolds.\\n\\nFigs. 3.a and 7 support our specific perspective of the local maxima property. Please let us know of additional tests that may further substantiate it.\\n\\n**A2:** Thank you for the suggestion to provide an error analysis of the curvature approximations. To this end, we conducted an experiment using an analytic 2D function with complex yet \\u201cnice\\u201d (differentiable) topography, enabling true quantities for validation.\\n\\nThe curvature $\\\\kappa$ (Eq. 8) around a data point is approximated through discrete sampling of its neighborhood boundary. Though grounded in the Gauss Divergence Theorem, we recognize that the finite sample size warrants error analysis. The findings, presented in Fig.
3 (b-d), demonstrate the following:\\n\\n1. **Expected Behaviour:** The local maximum receives a higher true $\\\\kappa$ than the saddle point (Fig. 3.b).\\n\\n2. **Robustness to Low-Sample Approximations:** Even at low sample sizes with large error bars, maxima and saddle points are separable by a threshold (Fig. 3.c). **This separability is the focus of our paper.**\\n\\n3. **Consistent Estimator:** The $\\\\kappa$ estimates\\u2019 mean is close to the true value across sample sizes (Fig. 3.c), while their std decays exponentially (Fig. 3.d).\\n\\nThis experiment provides essential reliability verification of our theoretical framework. If any important statistics can be added - please inform us. Details are in Appendix A.\\n\\n**A3:** We would like to kindly draw your attention to the fact that the bias mentioned in line 122 refers to prior methods that train detectors on generated images. Such training introduces significant bias towards the generation techniques encountered during training, as documented in Fig. 2 of [4] and Fig. 2 of [5]. In contrast, our zero-shot method is training-free, leveraging a pre-trained SD1.4. Importantly, SD1.4 was pre-trained on real images (without generated images).\\n\\nWe acknowledge the bias potential inherent to using a specific model (SD1.4). To test our method in this regard, we evaluated our approach on over 100K generated and 100K real images across 20 generation techniques \\u2013 some introduced after SD1.4. Our approach shows strong generalization to unseen generation techniques (Fig. 5, Table 1), providing thorough evidence that it is not confined to the biases of SD1.4.\\n\\n**A4:** Another important concept that requires an illustration is **the concentration of measure**, which describes how high-dimensional Gaussian samples concentrate around a sphere - a phenomenon critical to our derivations. To this end we added an additional 2D illustration in Fig. 4.c as follows:\\n\\n1.
For a $d$-dimensional $\\\\epsilon\\\\sim\\\\mathcal{N}(0,I)$ we obtained the $\\\\chi$-distributed $E\\\\|\\\\epsilon\\\\|$, $\\\\mathrm{Var}(\\\\|\\\\epsilon\\\\|)$ (norm\\u2019s mean and variance).\\n\\n2. We visualized corresponding 2D samples scaled to have these mean and variance of their norm, effectively simulating the phenomenon in 2D.\\n\\nAs $d$ increases, the radius becomes larger, and the variance converges, empirically illustrating the expected \\u201c*thin shell*\\u201d spherical distribution. \\n\\n**References**\\n\\n[1] Song & Ermon, NeurIPS 2019, \\\"Generative modeling by estimating gradients of the data distribution\\\"\\n\\n[2] Mitchell, et al., ICML 2023, \\\"Detectgpt: Zero-shot machine-generated text detection using probability curvature\\\"\\n\\n[3] Zahra Kadkhodaie et. al, ICLR 2024, \\u201cGeneralization in diffusion models arises from geometry-adaptive harmonic representations\\u201d\\n\\n[4] Epstein et. al, ICCV 2023 \\u201cOnline detection of ai-generated images\\u201d\\n\\n[5] Ojha et. al., CVPR 2023, \\u201cTowards universal fake image detectors that generalize across generative models\\u201d\"}", "{\"summary\": \"The paper explores a novel method to detect AI-generated images, focusing on zero-shot and few-shot regimes. It identifies key challenges in the field, such as the need for data upkeep with traditional supervised learning methods and limited theoretical grounding for current approaches. The authors propose a framework based on the implicit biases within the manifold of a pre-trained diffusion model, leveraging score-function analysis to approximate manifold curvature and gradient in the zero-shot setting. They extend the method for few-shot scenarios by incorporating a mixture-of-experts strategy. 
The proposed method demonstrates enhanced performance across 20 generative models.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The idea of leveraging manifold-induced biases from pre-trained diffusion models to detect generated images is novel and interesting.\", \"The methodology, essential theoretical formulations, and results are well-articulated, with equations and definitions supporting the approach.\", \"Experimental results are promising, and Figures 1 and 2 are intuitive and easy to understand.\"], \"weaknesses\": \"The primary concern is the robustness; please see the Questions below.\", \"questions\": \"1. The proposed method relies on pre-trained diffusion models and manifold analysis, which are implemented in high-dimensional space, potentially increasing computational costs. Could the authors provide an analysis of inference time and memory requirements?\\n\\n2. The method depends on certain hyperparameters, such as perturbation strength and the number of spherical noises. How robust is the method to these parameters across different models, and what guidelines can be provided for selecting these parameters?\\n\\n3. The authors tested the impact of JPEG compression on the method and reported a slight performance decrease. How does the method perform with other types of image post-processing, such as augmentation, denoising, and flipping?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thanks for the detailed clarification. I have checked the revised materials. As my concerns are addressed, I will raise my rating accordingly.\"}", "{\"title\": \"small update\", \"comment\": [\"Two additional figures have been added to the end of the Appendix of the revised manuscript to enhance completeness and clarity:\", \"**Fig. 
12, relating to A1**: Presents excerpts from the mentioned prior work's observations, relating to the \\\"region of local maxima\\\" property, providing additional contextual support for our assumptions.\", \"**Fig. 11, relating to A2**: Extends the analysis of Fig. 3 (b-d), verifying the conclusions across all five interest points of the probability function (3 local-maximas, 2 saddle-points).\"]}", "{\"title\": \"Discussion Summary\", \"comment\": \"Dear Reviewers,\\n\\nWe sincerely appreciate your thoughtful feedback, which significantly enhanced our work. As we enter the final 5 days of discussions, we summarize the progress made in addressing all the concerns raised - which led to enhanced clarity, validated approximations, and demonstrated robustness, supported by key experiments presented in Table 2 and Figures 3, 4.c, 7, 10, and 11 (detailed below). We welcome any further feedback or questions you may have.\\n\\nBest regards,\\n\\nThe Authors\\n\\n**Summary of Our Contributions and Strengths:**\\n\\n- **Innovative Theoretical Framework:** We present novel mathematical derivations to detect generated images as points near local maxima of the (log) probability manifold. To this end we present novel derivations, combining score-function analysis and high-dimensional considerations, for curvature and gradient approximations. The resulting formula can be applied with any pre-trained diffusion model.\\n- **Exceptional Generalization with a +39.1% Average AUC Improvement** over the state-of-the-art zero-shot methods, AEROBLADE and RIGID (2024). This was validated on a comprehensive benchmark of **200K images**, including real and generated samples from **20 diverse generative techniques** (Table 1, Fig. 5). The remarkable improvement stems from our method's superior cross-technique stability, as detailed in Fig. 8 of the Appendix. 
Furthermore, when integrated into a few-shot setting, our approach enhances performance by **+4% to +7%**, underscoring its applicability. \\n\\n**Summary of the Key Concerns Raised and our Responses and Experiments:**\\n- **Justification for the Local Maxima Assumption (Reviewer Xh3J):** \\n1. We build on DetectGPT by Mitchell et al., 2023 [1], which revolutionized zero-shot detection of *generated text*, relying on the same assumption (lines 69, 186 in $\\\\textcolor{green}{\\\\text{green}}$). Our research can be perceived as the first extension of [1] to the image domain. However, [1] uses LLM's explicit probability modeling, rendering the method unsuitable for images, since GANs and diffusion models are implicit models. To address this challenge, we base our method on score function analysis. \\n2. **Fig. 3.a** further justifies this assumption with empirical evidence of reverse diffusion trajectories terminating near local maxima. We used Gaussian Mixture Model data, inspired by Song & Ermon, 2019 [2] (Details and statistics are in Appendix A, Fig. 7).\\n- **Error Analysis of Curvature Approximations (Reviewer Xh3J):** In **Figs. 3.b\\u2013d and 11**, we validate our curvature estimator using an analytic function with ground-truth curvature, demonstrating high reliability:\\n1. *Expected Behavior:* Local maxima have higher values than saddle points.\\n2. *Robustness:* Maxima and saddle points are separable even with low sample sizes (Fig. 3.c).\\n3. *Consistent Estimator:* Our curvature estimations are **empirically unbiased** (Fig. 3.c) with **exponentially decaying std** (Fig. 3.d).\\n\\n- **Robustness Analysis (Reviewers DPAQ, GHKQ):** **Table 2** presents sensitivity and ablation studies, demonstrating strong robustness, maintaining SOTA performance under all tests requested by the reviewers:\\n1. Hyperparameter changes.\\n2. Image corruptions\\n3. Alternative, more advanced, base diffusion models. \\n4. **Fig. 
10** in the Appendix further details per-model hyperparameter robustness.\\n\\n- **Visualization and Improved Writing (Reviewers qnVq, Xh3J, GHKQ):**\\n1. **Fig. 4.c** illustrates the high-dimensional *concentration of measure*, central to our derivations.\\n2. Writing was clarified, including explanations to calculate our criterion (Sec. 4.3), motivation for the few-shot approach (Sec. 5.2), citations in Limitations section, and typographical corrections.\\n\\nThank you once again for your time and detailed feedback. It truly helped us improve our work.\\n\\n**References (Edited)**\\n\\n[1] Eric Mitchell, Yoonho Lee, Alexander Khazatsky, Christopher D. Manning, and Chelsea Finn. DetectGPT: Zero-shot machine-generated text detection using probability curvature. In *International Conference on Machine Learning*, pp. 24950\\u201324962. PMLR, 2023.\\n\\n[2] Yang Song and Stefano Ermon. Generative modeling by estimating gradients of the data distribution. *Advances in Neural Information Processing Systems*, 32, 2019.\\n\\n**Original comment (references inadvertently omitted):** 28.11 \\n\\n**Edit to include references:** 3.12\"}", "{\"title\": \"Additional Robustness Points\", \"comment\": \"**Point 1: Figure 10 (right panel).** The Appendix now includes the requested per-model perturbation-strength ($\\\\alpha$) sensitivity analysis (in addition to the ($s$) per-model sensitivity analysis). Reminder: Table 2 shows that $\\\\alpha$ can be scaled $\\\\times 100$ while keeping overall SOTA performance. The per-model analysis reveals excellent robustness as well, where even upon $\\\\times 100$ scaling of $\\\\alpha$, most models show less than -0.05 decrease in AUC.\\n\\n**Point 2: Remarkable conclusions regarding the robustness** to the perturbation number ($s$) were drawn from **Figure 3(b\\u2013d)**, bridging the practical sensitivity analysis with theory-verifying experiments. 
Although Figure 3(b\\u2013d) was initially conducted in response to Reviewer Xh3J's concerns without an explicit intention to test robustness, this attribute emerged naturally in this setting. We conducted an error analysis of a **2D known, analytic probability function**. In this analysis, the curvature $\\\\kappa$ around a data point is approximated using spherical perturbations according to the formula we employ for image detection. We leverage access to the analytic probability function to calculate the error of this approximation. \\n\\nIn **Figure 3(c)**, we observe that the error increases as $s$ decreases but remains sufficiently small to effectively distinguish maxima from saddle points, even at low $s$. Importantly, **distinguishing local maxima on the image probability manifold is our main task**, reminder: We hypothesize that the ability to distinguish maxima translates to the ability to identify generated images. Thus, the robustness to low $s$ in terms of distinguishing maxima in **Figure 3(c)** aligns with the robustness to low $s$ in terms of generated image detection, as detailed in **Table 2 (top right)** and **Figure 10 (left)**. \\n\\nTo further emphasize this robustness perspective, we have added per-point histograms, shown in **Figure 11 (top right)** - page 25 of the Appendix, alongside new captions marked in green.\"}", "{\"comment\": \"Thank you for the thoughtful review, we are glad our response addressed most of your concerns.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"This paper introduces a zero/few-shot framework for detecting AI-generated images by analyzing the inherent biases of a pre-trained diffusion model. The authors hypothesize that generated images are more likely to occupy stable local maxima on this learned manifold, characterized by specific curvature and gradient properties. 
By approximating these properties, they create a criterion to distinguish between real and generated images without requiring large datasets or retraining. Empirical results show their method outperforms other detection approaches across multiple generative models.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": [\"Zero-shot and few-shot capability makes the method practical.\", \"The theoretical perspective is interesting.\", \"Empirical results seem promising.\"], \"weaknesses\": \"1. Some theoretical assertions, such as the assumption that generated samples are more likely to be stable local maxima on the learned manifold, are not fully justified. This assumption underpins the detection criterion, but the paper does not offer a thorough mathematical or empirical rationale to support it.\\n2. The paper relies heavily on approximations in score-function and curvature estimations (e.g., Eq.5, 16-18). However, there is limited discussion or analysis of the tightness of these approximations. This could lead to questions about the reliability of the approximations, especially when they form the foundation of the theoretical claims. It would be beneficial if the authors provided error bounds analysis or empirical justifications for these approximations.\\n3. In line 122, the authors argue that previous methods still rely on access to generative methods during training, leading to biases towards those generation techniques. However, the proposed approach also relies on a pre-trained SD1.4. How does the proposed approach avoid the bias from it? For example, a realistic sample generated from a more recent model may not sit on a stable local maximum in the learnt log probability manifold of SD1.4.\\n4. The presentation can be improved. 
The mathematical analysis can benefit from illustrative examples, while the detailed proof can be moved to appendix.\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thanks for reply\", \"comment\": \"Overall, my concern has been addressed. However, the claim that 'One possible explanation lies in the shared characteristics of generative model manifolds due to similarity in training data' needs further exploration and proof. Therefore, I lean toward maintaining my score.\"}", "{\"comment\": \"Thank you for your feedback, and updated rating.\"}", "{\"comment\": \"We appreciate the recognition of the sound theory and robust performance of our approach. Below we answer the questions raised.\\n\\n**Q1:** \\u201c..it is unclear why this method also performs well with models where score functions are not inherently applicable..\\u201d\\n\\n**A1:** We appreciate this question, which addresses a fundamental challenge in the field of generative image detection: Mitigating the biases that arise when a method is exposed to specific generative techniques, which could impair its generalization to new, unseen techniques. Although such generalization is a notable strength of our approach, in the Limitations section we acknowledge that we do not have an extensive theory regarding the generalizability to unseen generative techniques.\\n\\nOne possible explanation lies in the shared characteristics of generative model manifolds due to similarity in training data. There is limited availability of large-scale datasets [1,2], and many generative models are trained on similar datasets , which may lead to overlapping or closely related learned probability manifolds. This is despite the differences in architecture, size, and configuration [3]. 
Consequently, our method, which is exposed to one model\\u2019s manifold, may effectively use it as a proxy for detecting images generated by other generative models. This hypothesis aligns with observations in the literature that models trained on comparable datasets exhibit commonalities [4,5].\\n\\nAn interesting hypothetical challenge in the field may arise with the introduction of an entirely novel dataset\\u2014one that no generative model has been trained on to date, followed by training of new generative models. In such a scenario, it is uncertain whether existing methods would maintain their performance on the resulting newly generated images, as their success may be tied to biases in currently available datasets. However, we believe that our method can adapt effectively, and that substituting the SD1.4 model with a diffusion model trained on the new dataset is likely to yield strong performance.\\n\\nTo better clarify this in the manuscript, we expanded the discussion in the Limitations section to better outline the shared data perspective and include the references added here.\\n\\n[1] Villalobos , P., Ho, A., Sevilla, J., Besiroglu, T., Heim, L., & Hobbhahn, M. Position: Will we run out of data? Limits of LLM scaling based on human-generated data. In Forty-first International Conference on Machine Learning.\\u200f\\n\\n[2] https://www.bloomberg.com/news/articles/2024-11-13/openai-google-and-anthropic-are-struggling-to-build-more-advanced-ai?embedded-checkout=true\\n\\n[3] Nalisnick, E., Matsukawa, A., Teh, Y. W., Gorur, D., & Lakshminarayanan, B. Do Deep Generative Models Know What They Don't Know?. In International Conference on Learning Representations.\\n\\n[4] Kornblith, S., Norouzi, M., Lee, H., & Hinton, G. (2019, May). Similarity of neural network representations revisited. In International conference on machine learning (pp. 3519-3529). PMLR.\\n\\n[5] Nguyen, T., Raghu, M., & Kornblith, S. (2020). Do wide and deep networks learn the same things? 
uncovering how neural network representations vary with width and depth. arXiv preprint arXiv:2010.15327.\\n\\n**Q2:** Citation issues on page 7\\n\\n**A2:** In the revised version, we have corrected all citation formatting and ensured that footnotes and page numbers are accurately referenced. \\n\\n**Q3:** \\\" The authors are encouraged to clarify the need for a few-shot setting and explain why zero-shot alone would not suffice as a more compelling and realistic approach.\\\"\\n\\n**A3:** We agree that zero-shot scenarios often reflect real-world situations, and our solution is fully suitable for such cases.\\n\\n\\nNonetheless, in some scenarios, managing a small amount of generated data can be justified if it leads to significant performance improvements - making few-shot methods appropriate. Our research demonstrates that integrating our method with a SOTA few-shot technique, Cozzolino et al. (2024), yields a 4\\u20137% improvement in detection performance without violating the few-shot setting. \\n\\nFor users willing to invest in data maintenance, our method provides a valuable enhancement to few-shot frameworks. It serves as an easy-to-integrate plugin, offering a flexible trade-off between data availability and performance improvement.\\n\\nEven though few-shot methods for generated image detection offer practical potential, balancing a trade-off between maintenance and performance, this domain is still largely under-explored (as are zero-shot methods). Our work contributes a practical approach to both regimes, excelling in zero-shot scenarios and enhancing few-shot performance.\\n\\nWe have clarified the motivation for introducing the few-shot evaluation in the revised manuscript in Section 5.2, under *Mixture of Experts with Few-shot Approaches*.\"}" ] }
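As a closing illustration for this thread: the boundary-sampling curvature statistic debated above (Figure 3(b-d)) can be mimicked with a toy mean-value analogue. This is our sketch, not the paper's Eq. 8 or its score-based estimator; the test function, radius, and sample count are assumptions. Averaging a function over a small circle around a point and subtracting the centre value approximates $(r^2/4)\,\Delta f$ in 2D, which is strongly negative at a local maximum but near zero at a saddle, which is exactly the separability the authors highlight.

```python
import numpy as np

def curvature_stat(f, x, radius=0.05, n_samples=64):
    """Mean of f over a small circle around x, minus f(x).

    By the 2D mean-value expansion this is ~ (radius**2 / 4) * Laplacian(f)(x):
    clearly negative at a local maximum, roughly zero at a saddle point.
    """
    angles = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # boundary samples
    boundary = np.array([f(x + radius * u) for u in dirs])
    return boundary.mean() - f(x)

f = lambda p: np.sin(p[0]) * np.sin(p[1])  # analytic 2D landscape, known topography
k_max = curvature_stat(f, np.array([np.pi / 2, np.pi / 2]))  # a local maximum
k_saddle = curvature_stat(f, np.array([0.0, 0.0]))           # a saddle point
print(k_max, k_saddle)  # strongly negative vs. ~0: separable by a threshold
```

Even with few boundary samples the two values remain separable, mirroring the low-sample robustness reported for Figure 3(c).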
7gGVDrqVaz
3D-Prover: Diversity Driven Theorem Proving With Determinantal Point Processes
[ "Sean Lamont", "Christian Walder", "Amir Dezfouli", "Paul Montague", "Michael Norrish" ]
A key challenge in automated formal reasoning is the intractable search space, which grows exponentially with the depth of the proof. This branching is caused by the large number of candidate proof tactics which can be applied to a given goal. Nonetheless, many of these tactics are semantically similar or lead to an execution error, wasting valuable resources in both cases. We address the problem of effectively pruning this search, using only synthetic data generated from previous proof attempts. We first demonstrate that it is possible to generate semantically aware tactic representations which capture the effect on the proving environment, likelihood of success and execution time. We then propose a novel filtering mechanism which leverages these representations to select semantically diverse and high quality tactics, using Determinantal Point Processes. Our approach, 3D-Prover, is designed to be general, and to augment any underlying tactic generator. We demonstrate the effectiveness of 3D-Prover on the miniF2F-valid and miniF2F-test benchmarks by augmenting the ReProver LLM. We show that our approach leads to an increase in the overall proof rate, as well as a significant improvement in the tactic success rate, execution time and diversity.
[ "Theorem Proving", "Formal Reasoning", "Search", "Representation Learning", "Pruning", "Filtering", "Diversity" ]
Reject
https://openreview.net/pdf?id=7gGVDrqVaz
https://openreview.net/forum?id=7gGVDrqVaz
ICLR.cc/2025/Conference
2025
{ "note_id": [ "z44Hn6QQE8", "vVubtSDS5L", "sRMlXMlmt6", "rNC9xGysEy", "q727Yveqg1", "lsLBzFTpjq", "gH9FJWrrGC", "gEZOr7VZUj", "gBNyTGaRVK", "eknBa3KKbp", "XL7QqU3KeY", "TZBBey44UA", "TBsoDklAdo", "SIz9fnwzUn", "NtJJSk8AkX", "Lt1PjJleCn", "FPHbeTbaFs", "DeTcO0eebN", "8CzUHYuOw6", "6pKYzzCsHJ", "3h512pA12H", "37ogs8i5ks", "1lhK7X3qQ4" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_review", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1731996341630, 1733272780191, 1730834654547, 1730279484022, 1732247333915, 1731123800073, 1732505876628, 1732703172422, 1731996890868, 1731996426915, 1731996353049, 1731996970325, 1731996802738, 1737523535637, 1730649155933, 1731997147651, 1735098935402, 1733181668286, 1731996898671, 1731997871366, 1732661833670, 1731996727279, 1731996374616 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2842/Authors" ], [ "ICLR.cc/2025/Conference/Submission2842/Authors" ], [ "ICLR.cc/2025/Conference/Submission2842/Reviewer_1vwm" ], [ "ICLR.cc/2025/Conference/Submission2842/Reviewer_xeEr" ], [ "ICLR.cc/2025/Conference/Submission2842/Authors" ], [ "ICLR.cc/2025/Conference/Submission2842/Reviewer_HiN1" ], [ "ICLR.cc/2025/Conference/Submission2842/Reviewer_HiN1" ], [ "ICLR.cc/2025/Conference/Submission2842/Reviewer_hTv9" ], [ "ICLR.cc/2025/Conference/Submission2842/Authors" ], [ "ICLR.cc/2025/Conference/Submission2842/Authors" ], [ "ICLR.cc/2025/Conference/Submission2842/Authors" ], [ "ICLR.cc/2025/Conference/Submission2842/Authors" ], [ "ICLR.cc/2025/Conference/Submission2842/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ 
"ICLR.cc/2025/Conference/Submission2842/Reviewer_hTv9" ], [ "ICLR.cc/2025/Conference/Submission2842/Authors" ], [ "ICLR.cc/2025/Conference/Submission2842/Area_Chair_rHeZ" ], [ "ICLR.cc/2025/Conference/Submission2842/Authors" ], [ "ICLR.cc/2025/Conference/Submission2842/Authors" ], [ "ICLR.cc/2025/Conference/Submission2842/Reviewer_xeEr" ], [ "ICLR.cc/2025/Conference/Submission2842/Authors" ], [ "ICLR.cc/2025/Conference/Submission2842/Authors" ], [ "ICLR.cc/2025/Conference/Submission2842/Authors" ] ], "structured_content_str": [ "{\"title\": \"General Comment\", \"comment\": \"We would like to thank all the reviewers for their detailed and constructive feedback.\\nWe appreciate the time and effort you have put into reviewing our paper,\\nand we are grateful for the opportunity to address your comments.\\n\\n# G.1\\n\\nBased on the comments of reviewer HiN1 and 1vwm, we agree that the motivation and intuition behind our transition model architecture in Section 2 could be improved.\\nWe understand that this may have caused some confusion, so we make a general comment\\nhere to clarify (which we will adapt and include in our revision).\\n\\nThe primary motivation of the architecture shown in Figure 2 is to generate\\nappropriate feature vectors to enable the application of DPP in the 3D-Prover algorithm.\\nDPP requires a vector encoding the attribute which we want to sample diversely.\\nBy generating vectors which reflect the impact of tactics on the environment, our transition model architecture hence\\nenables DPP to select tactics based on the diversity of their outcome.\\n\\nUsing the Encoder first, before the Decoder/Predictor, is done to enable the learning of these representation vectors.\\nBy bottle-necking the tactic to this encoding, the Encoder must learn an effective representation to ensure that the\\nDecoder and Predictor have enough information to determine the subsequent effect of the tactic on the environment.\\nIf we used a Decoder only architecture, 
this would not provide us with the tactic representation vector for DPP in the 3D-Prover algorithm.\nThe ALL TOKENS and NO TACTIC baselines provide some alternative architectures for comparison, showing how \neffective the Encoder is at generating useful representations for these predictions. \n\nFollowing training, 3D-Prover discards the Decoder, using only the Encoder and Predictor to evaluate candidate tactics\nas outlined in Algorithm 1, so we wish to make the Predictor as small and efficient as possible (hence the small MLP architecture for the Predictor).\n\nIt is true that alternative architectures might improve upon this; however, we demonstrate that ours is effective both for\npredicting the environment outcome, and for generating\nuseful representations which can improve proof search (as we show in the Autoencoder comparison in 3.3.2 and Appendix\nA.3). The primary contribution of the paper is in the search augmentation, so an improved architecture here \nwould only serve to further improve upon that.\n\n# G.2\n\nWe ran an additional experiment on the LeanDojo Novel\nPremises benchmark, testing 3D-Prover on a larger dataset.\nThis dataset has 2000 proofs in comparison to the 244 from miniF2F-valid and miniF2F-test,\nallowing us to evaluate the performance of 3D-Prover on a larger scale.\n\nFollowing the same methodology, we trained a transition model from a single ReProver attempt,\nbefore evaluating 3D-Prover, using K=32 for the filtering. We compare to the model with no filtering, and to top-K=32.\n\nAdditionally, we examine the distribution of proof lengths found from this experiment.\nTo account for different proofs of the same goal, we adjust proof lengths to be the shortest found from any attempt (\ne.g. 
if 3D-Prover finds a proof of length 10, which was found in 3 steps by No Filtering, we count it as length 3).\\nHence, all proof lengths reported are the shortest found by any method.\\n\\nWe report the number of proofs found by each approach, organised by the proof length in the below table.\\n\\n| Proof Length | 3D-Prover (K=32) | Top-K (K=32) | No Filtering (K=64) |\\n|--------------|------------------|--------------|---------------------|\\n| 1 | 236 | 233 | **237** |\\n| 2 | 167 | 162 | **174** |\\n| 3 | **134** | 126 | 131 |\\n| 4 | **60** | **60** | 54 |\\n| 5 | **40** | 39 | 24 |\\n| 6 | **7** | 6 | 2 |\\n| 7 | **2** | 0 | 0 |\\n| Total | **646** | 626 | 622 |\\n\\nTo summarise the results of this experiment, we found a relative improvement of 3.2\\\\% over top-K=32, and a 3.9\\\\%\\nrelative improvement over no filtering in terms of the number of proofs found.\\nWe see that 3D-Prover finds deeper proofs, while maintaining a high proof success\\nrate for shallower proofs, unlike Top-K. The no filtering approach, as expected, finds the most shallow proofs,\\nhowever quickly drops off in performance for deeper proofs.\\nWe also note that 3D-Prover found the 2 longest proofs of length 7, with neither baseline finding any. 
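For concreteness, the length-adjustment step described above can be sketched as follows. This is a minimal illustration only, assuming per-method results are stored as dictionaries mapping goals to proof lengths (with `None` for unproved goals) — a hypothetical layout, not our actual implementation:

```python
def adjusted_lengths(results):
    """results: {method: {goal: proof length, or None if unproved}}.
    Reports every proved goal at the shortest length found by any method."""
    shortest = {}
    for lengths in results.values():
        for goal, n in lengths.items():
            if n is not None and (goal not in shortest or n < shortest[goal]):
                shortest[goal] = n
    # each method keeps only the goals it proved, at the globally shortest length
    return {method: {g: shortest[g] for g, n in lengths.items() if n is not None}
            for method, lengths in results.items()}
```

Applied to the example above, a goal proved at length 10 by one method but at length 3 by another is reported as length 3 for both.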
\\n\\nThis gives some additional confidence in the benefits of our approach for a larger dataset, \\nwith the improvement over no filtering addressing some concerns of reviewer 1vwm.\"}", "{\"title\": \"Summary of Reviews and Author Responses\", \"comment\": \"We are extremely grateful to all reviewers for their insightful comments and engagement with our work.\\nOur proposed approach was recognised as a fresh (hTv9), innovative (1vwm), well-motivated and practical (xeEr)\\nperspective on tree search in theorem proving.\\nOur evaluation demonstrated our approach to be effective (xeEr), with strong and impressive (1vwm) results,\\nwhere we established an improvement upon ReProver on miniF2F-valid and miniF2F-test without modifying the underlying\\nmodel (HiN1, xeEr, 1vwm, hTv9).\\nBased on reviewer feedback, we strengthened our results further with comparisons to a no filtering setup (1vwm), as well as\\nevaluating over a larger dataset (Appendix A.6, comment G.2).\\n\\nWe thank reviewer HiN1 and xeEr for their positive support of the paper,\\nwith all of their concerns addressed and incorporated into the current revision.\\nWe are grateful to reviewer 1vwm, whose suggestion of a comparison to a no filtering setup has\\nhelped to further strengthen our results.\\nWe have included detailed responses to the remaining suggestions/concerns of 1vwm in this discussion,\\nwhich has helped improve our revision as detailed below.\\nWe finally thank reviewer hTv9 for their comments and active engagement in the discussion.\\nWe believe we have addressed their major concern regarding the comparative computational cost of our approach, with\\nan additional experiment showing improvements when filtering time is included in the environment budget.\"}", "{\"summary\": \"The paper presents a method to address a key issue in automated theorem proving: the vast search space that grows exponentially with proof depth due to numerous potential tactics. 
To manage this complexity, the authors introduce 3D-Prover, a filtering mechanism for proof tactics that prioritizes diversity and quality using Determinantal Point Processes (DPPs). The method involves two parts:\\n - Tactic Representation Learning: The authors generate semantic representations of tactics to predict their impact on the proving environment, likelihood of success, and execution time. This prediction model uses past proof attempts (synthetic data) to form representations of tactics based on their effect rather than mere syntactic similarity.\\n - Filtering Mechanism: Using DPPs, 3D-Prover filters the tactic pool, selecting those that are both high quality and semantically diverse. This selection optimizes proof success while minimizing redundant tactic exploration.\\n\\nThis paper augments the ReProver LLM (proposed in an earlier paper) by introducing a tactic filtering mechanism (3D-Prover) at each step of the proof search, whereby the tactic space is reduced. The authors use Best-first Search for proof search where nodes are expanded in order of their cumulative log probability. Specifically, for each tactic state, the ReProver LLM is used to sample 64 candidate tactics. From the 64 tactics, the proposed tactic-filtering mechanism is used to select a subset of K (set at 8, 16 or 32) tactics. A lower value of K indicates a strong filtering and is aimed to cut down the proof search even more. The authors compared their filtering (from 64 tactics to K = 8, 16 or 32 tactics) with two baselines viz., Top-K (selects top K tactics from 64 candidate tactics, based on their log probabilities) and Random (selects K tactics at random from the 64 candidate tactics). \\n\\n3D-Prover is tested on the miniF2F-test and miniF2F-valid benchmark, where the basic prover framework is the same as the ReProver LLM, and 3D-Prover is used as a tactic filtering step at each state. 
The authors claim that filtering tactics results in increased proof success rates due to reduced execution time, and more diverse tactic sets. The approach demonstrates scalability and effectiveness for deeper, complex proofs, addressing challenges faced by other models like DeepSeek-Prover-V1.5 and contributing to proof success beyond the generator's baseline. The experiments indicate that 3D-Prover outperforms traditional Top-K or Random tactic selection when 3D-Prover filtering is applied. \n\nNotably, from Table 2, it is evident that for the Top-K baseline, the pass@1 metric, i.e., the number of proofs found by ReProver after one attempt, is 22.4%. And by incorporating filtering by 3D-Prover with K=8 (i.e., a stricter tactic filtering), pass@1 increases to 24.4%. However, for K=32, the pass@1 metric increases minutely from 27.8% to 28.2% because of tactic filtering. Random filtering, as expected, performs worse than the Top-K baseline or the proposed 3D-Prover filtering. The authors also perform an ablation study to demonstrate that the filtered tactics are indeed diverse. \n\nThe authors claim that 3D-Prover's design, which layers a diversity-driven tactic filter atop conventional LLMs, points to a promising direction for automated theorem proving, enabling faster, deeper proofs while controlling for computational resources.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The paper introduces a novel approach to tactic filtering that emphasizes both diversity and quality. The concepts of transition-aware tactic embeddings and the use of Determinantal Point Processes (DPPs) are innovative in the context of theorem proving, although DPPs themselves are a well-established probabilistic method and not a new contribution of this paper.\", \"With diversity-driven tactic filtering, the Best-First Search can explore deeper proofs, leading to a faster proof search overall. This is impressive. 
However, I did not find a comparison of execution times with and without filtering. Table 7 compares execution times with other baselines, but the paper does not clarify the improvement in proof search time specifically due to filtering.\", \"The tactic filtering method discards a significant portion (87.5%) of sampled tactics from the ReProver LLM for each tactic state, with the proposed 3D-Prover filtering approach outperforming baseline methods like Top-K and Random filtering. By effectively reducing the tactic space, this filtering approach enables more efficient proof searches with fewer resources. While this is a strong result, it would be even more compelling if the paper included a comparison against a no-filtering setup (i.e., considering all 64 tactics).\"], \"weaknesses\": [\"The preliminaries and transition models presented in Section 1.2 and 2.1 could have been written in a simplified manner. Some notations are hard to follow and expressed in a circuitous way.\", \"Why is the decoder taking as input a concatenation of e and g, when e is already an encoded and pooled version of t and g? What is the motive of having g twice?\", \"Also, e is an encoded version. How is concatenating an encoded string to a non-encoded string g, meaningful?\", \"The authors use metrics like BLEU, ROUGE-L-F1 and top-4 accuracy (proportion of samples which have one beam identical to the ground truth) to evaluate the \\\"Output\\\" in Figure 2. BLEU and ROUGE are similarity metrics. How is checking similarity with the ground-truth \\\"Output\\\" relevant? Because even if the generated \\\"Output\\\" and the ground-truth \\\"Output\\\" are very close according to these similarity scores, they may not be a syntactically and semantically correct next tactic state.\", \"In lines 408-418, I do not see the quoted values in the referred tables: \\\"\\u223c36% relative improvement (Table 1)\\\" and \\\"\\u223c6\\u20139% relative improvements (Table 7)\\\". 
It is not clear if these refer to the tables in the respective papers.\", \"I have concerns about the training dataset used for the Transition Model. Specifically, in lines 220-222, it is mentioned that \\\"We obtain the dataset D from a vanilla ReProver attempt on miniF2F-valid, which results in 498,236 transitions, which we split randomly into 95% training, 5% testing.\\\" i.e., 3D-Prover uses a transition model trained from miniF2F-valid transitions. In that case, doesn't that mean that in Table 2, the proof search results for miniF2F-valid are trained and evaluated on the same dataset? And for this reason, the results for miniF2F-valid in Table 2 are significantly better than those for miniF2F-test?\", \"Further, the improvement in pass@1 values on the miniF2F-test is not that significant. For K=8, pass@1 goes from 22.4% to 24.4%, i.e., a 2% increase in the number of proofs found. And this improvement goes down to 0.4% (27.8% to 28.2%) for K=32.\", \"Also, the paper does not provide the results without any kind of filtering.\", \"One of the most concerning weaknesses of this paper is that it does not compare the ReProver (for tactic prediction) + 3D-Prover (for tactic filtering) framework against any state-of-the-art method like DeepSeek-Prover, InternLM-Math, etc. The filtering method is only compared with two baseline tactic filtering mechanisms, i.e., Random filtering and top-k filtering. But the paper does not show whether the effect of this filtering is significant enough, such that it outperforms other state-of-the-art methods that do not use filtering or use some kind of state-of-the-art filtering.\", \"Although the 3D-Prover introduces a novel design by layering a diversity-driven tactic filter on top of conventional LLMs, the paper falls short in evaluating and demonstrating its superiority over state-of-the-art theorem provers. 
Additionally, the results show only minimal improvement compared to baseline tactic filtering methods, such as Top-K and Random filtering.\"], \"questions\": \"Please address the issues raised in the Weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces 3D Prover, a framework that combines a representation learning algorithm to generate semantically aware encodings of proof states and tactics with a filtering algorithm for enhancing semantic diversity in LLM-generated tactics for proof synthesis. The proposed method was tested on top of ReProver, demonstrating its effectiveness. Additionally, ablation studies validate the contribution of each component in the design.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The proposed algorithm, 3D Prover, is well-motivated and practical. The paper presents an approach to augmenting proof search history using an existing LLM to enhance tactic selection, resulting in more efficient navigation within the search space.\", \"The semantic-aware representation learning of proof states and tactics is well-established and analyzed. The representations learned from proof history are useful for downstream tasks, including proof synthesis. The experimental design and results demonstrate the impact of each component of the proposed learning framework.\", \"The proposed filtering framework is straightforward and effective, with results outperforming ReProver without modifying the underlying model. This approach is thus beneficial for advancing proof synthesis in large language models.\"], \"weaknesses\": [\"Section 3.1 could benefit from greater clarity. 
Specifically, the motivation behind using this method could be expanded upon - why was DPP chosen over other sampling algorithms for balancing quality and exploration in this context?\", \"In the experiments, both $\\lambda_s$ and $\\lambda_{\\tau}$ are set to zero. Additionally, Section A.2\u2019s hyperparameter sweep further suggests that these terms (or at least $\\tau_i$) do not significantly contribute to filtering performance. Although this result is not unexpected, it should be explicitly discussed in the experiments section to enhance the clarity of the findings.\"], \"questions\": \"I wonder what the intuition is for preferring a shorter execution time for a tactic. Tactic execution time can vary based on factors like the goal state; for example, `linarith` may require different durations depending on the complexity of the arithmetic expressions involved. Moreover, while tactics like `linarith` can take longer for more complex arithmetic, they are often very effective in reducing the theorem down to its underlying logic.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"First Revision Details\", \"comment\": \"We have submitted our first revision, where we have made changes to reflect the suggestions of the reviewers. These suggestions have greatly improved the clarity and flow of the paper, and helped strengthen our results. We have used blue font to highlight the major updates to our manuscript, making it easier to compare. If the reviewers have any additional comments or concerns, please let us know so we can address them before the end of the discussion period.\", \"a_summary_of_the_changes_is_as_follows\": [\"We have added results from No Filtering (Table 2,4,5,6,7) as an additional baseline, to make the results more compelling as suggested by reviewer 1vwm (Q.1 and Q.2). 
In all cases, our approach leads to an improvement.\", \"We move the section introducing DPPs to the Introduction (1.2), helping us better motivate and discuss our transition model in 2.1 (addressing HiN1 Q.1, 1vwm Q.4). We have also expanded the motivation of DPPs, in line with comments from reviewer xeEr (Q.1).\", \"We have moved our preliminaries section to Appendix A.1, and simplified the notation and introduction of our transition model in section 2.1 (1vwm Q.3).\", \"We include the LeanDojo results from comment G.2 as an appendix (A.6), where we show the improvement of our approach over a larger dataset.\", \"We include Pass@k up to k=4 in Appendix A.2, addressing HiN1 Q.3.\", \"We clarify the references from 408-418 (1vwm Q.6)\", \"We explicitly discuss the filtering performance of hyperparameters (xeEr Q.2, lines 419-421)\", \"We note the online learning setup of miniF2F-valid, addressing 1vwm Q.7 (lines 422 - 424)\", \"We discuss the motivation behind preferring faster tactics (xeEr Q.3, lines 517-521)\"]}", "{\"summary\": \"The search space of proofs grows exponentially with the depth of the proof but a number of branches in this space capture sequences that could be semantically similar or lead to errors. Pruning this search space is, thus, critical. The paper looks at using synthetic data from proof attempts to accomplish this pruning. Semantically-aware tactic representations are generated to capture the effect on proving environment and likelihood of success. DPPs are used to select semantically diverse and high quality tactics using these representations. The developed approach is called Diversity Driven Determinantal Point Process Prover (3D-Prover). 
The approach is evaluated on miniF2F-valid and miniF2F-test benchmarks by augmenting the ReProver LLM.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"Many generated proof paths are equivalent modulo variable renaming and other semantics-preserving transformations. The authors have found that 75% of the proof paths lead to execution errors and thus it is critical to factor them into exploration. Past work has considered a sparse binary signal from the proof status of a goal, intrinsic reward for exploration for new nodes, etc., but these do not factor in error likelihood, error messages and execution time. Further, the addition of new nodes does not factor in node similarity.\n\n3D-Prover can augment proof search by filtering candidate tactics to generate diverse and high quality subsets. It is able to filter tactics based on their likely outcome. \n\nThe utility of the transition model representations is demonstrated using an ablation study where the transition model Encoder is replaced by an Autoencoder of the same size. Augmenting the ReProver LLM on the standard miniF2F benchmark, the paper reports an improvement in the overall proof success rate.\", \"weaknesses\": \"Please see questions for some of the concerns of the reviewer. The reviewer is happy to raise the score if the concerns are addressed.\", \"questions\": \"For representing the transitions, the paper adopts a specific architecture - encode the goal and tactics followed by a decoder to get the outcome (next goals or error) and a predictor to determine whether the transition led to an error and the time taken. There is no intuition provided as to why this is a better architecture. Why not just have a decoder predict the entire tuple? What other architectures were considered and why is the current one most promising? 
Even if there is no experimental evaluation of other baselines, it would be good to include discussion based on whatever experimentation was done to select this architecture.\n\nWhen splitting the 500K transitions into train and test, how did the authors ensure that the same goal, tactic, next-goal triplet in the test set does not show up in another proof and end up included in the training set? Since the transitions are being collected across several proofs, how is the train/test partitioning ensured to be non-overlapping? \n\nCould you include not just top-4 but also top-k (k = 1 to 4 or 5) in Table 1?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you\", \"comment\": \"Thank you for your response and the analysis.\"}", "{\"title\": \"Thank you for your response.\", \"comment\": \"Thank you for your response. I still firmly believe that quantitative comparative experiments are necessary. As you also agreed in your response to *Weakness 2*, this paper fundamentally does not improve the *effectiveness* of the language model at generating correct proofs, but can at most enhance *efficiency* through pruning in tree search. The reason prior works did not provide search time comparisons is that this was not their core contribution. I believe this paper would still benefit from a quantitative comparative experiment. As I described in my response to the Associate Program Chairs, this experiment is very simple: it only requires comparing the execution time efficiency of the method proposed in this paper versus baseline methods. 
This result determines whether the improvements achieved by this paper stem from additional computations that were performed but not fairly compared in terms of computation budget.\"}", "{\"comment\": \"# Q.6\n*In lines 408-418, I do not see the quoted values in the referred tables: \\\"\u223c36% relative improvement (Table 1)\\\" and \\\"\u223c6\u20139% relative improvements (Table 7)\\\". It is not clear if these refer to the tables in the respective papers.*\n\nThank you for pointing this out; these references are to the tables in the respective papers, which we will make explicit in the revision.\n\n# Q.7\n\n*I have concerns about the training dataset used for the Transition Model. Specifically, in lines 220-222, it is\nmentioned that \\\"We obtain the dataset D from a vanilla ReProver attempt on miniF2F-valid, which results in 498,236\ntransitions, which we split randomly into 95\\% training, 5\\% testing.\\\" i.e., 3D-Prover uses a transition model trained\nfrom miniF2F-valid transitions. In that case, doesn't that mean that in Table 2, the proof search results for\nminiF2F-valid, are trained and evaluated on the same dataset?*\n\nYou are correct to observe that our miniF2F-valid results use a transition model trained on one attempt of the same\ndataset,\nas we specify in line 421. We note that this is a common paradigm, with many previous proof search methods (e.g. [1, 2, 3])\nusing previous attempts to improve subsequent approaches over the same dataset. 
This is referred to\\nas, for example, *Expert Iteration* in [2], or a *transductive* setup in [1], where we quote from section 6.4 in [1]:\\n\\n**This protocol is also sensible, as allowing the model to learn from a failed proof-search can lead to more focused exploration on the\\nnext attempt, proving more statements overall than a model that would not be trained online.**\\n\\nAs they discuss, this is a reasonable paradigm, analogous to online learning in an RL setup,\\nwhere the model learns from previous proof attempts to improve its subsequent performance.\\nWe will mention this setup in our revision, to make it clear.\\n\\n*And for this reason, the results for miniF2F-valid in\\nTable 2 are significantly better than those for miniF2F-test? Further, the improvement in pass@1 values on the\\nminiF2F-test is not that significant. For K=8, there is a 22.4\\\\% to\\n24.4\\\\% i.e. a 2\\\\% increase in the number of proofs found. And this improvement goes down to 0.4\\\\% (27.8\\\\% to 28.2\\\\%) for\\nK=32.*\\n\\nIt is not unexpected that when the transition model is trained over the same proof attempts it performs better,\\nwhich as above is a reasonable online learning setup used to evaluate proof search methods.\\nAlthough the improvement is less for miniF2F-test, it still shows the effectiveness of our approach in the\\nmore difficult scenario of unseen proofs.\\n\\nAs we discuss from line 403, it is reasonable to expect tree search to improve\\nperformance by a relatively small degree, when compared to an improved tactic generator such as DeepSeek-Prover or\\nInternLM-Math. 
When comparing to other search methods, we find that our approach is competitive.\\nFor example, the MCTS based search algorithm in DeepSeek-Prover-V1.5 (Figure 5 in [1]), over miniF2F-test,\\nincreases performance from 58.4% to 59.6% for the 4-pass setting, and 60.2% to 62.7% for the 16 pass setting (relative\\nimprovements of 2.1% and 4.2% respectively).\\nThe proofsize objective value function from [2]\\nincreases performance (on miniF2F-valid, Table 1 in [2]) from 28.4% to 28.5% for the 1-pass setting,\\nand 33.6% to 35.5% for the 8 pass setting (relative improvements of 0.04% and 5.7% respectively).\\nThey also train for 2 iterations (whereas we train for 1), with each iteration taking around 20,000 GPU hours (we take\\napproximately 100).\\n\\nOur results in Table 2 show strong improvements for deeper proof settings, which are more difficult to achieve.\\nOur experiment on the larger LeanDojo dataset (G.2) demonstrate this further, where we find more (and deeper)\\nproofs in comparison to no filtering, and the Top-K baseline.\", \"references\": \"[1] Lample et al., HyperTree Proof Search for Neural Theorem Proving, https://arxiv.org/pdf/2205.11491\\n\\n[2] Polu et al., Formal Mathematics Statement Curriculum Learning, https://openreview.net/pdf?id=-P7G-8dmSh4\\n\\n[3] Bansal et al., HOList: An Environment for Machine Learning of Higher Order Theorem Proving, https://arxiv.org/pdf/1904.03241\"}", "{\"comment\": \"# Q.5\\n\\n*The authors use metrics like BLEU, ROUGE-L-F1 and top-4 accuracy (proportion of samples which have one beam identical\\nto the ground truth) to evaluate the \\\"Output\\\" in Figure 2. BLEU and ROUGE are similarity metrics. How is checking\\nsimilarity with the ground-truth \\\"Output\\\" relevant? 
Because even if the generated \\\"Output\\\" and the ground-truth \\\"Output\\\"\\nare very close according to these similarity scores, they may not be a syntactically and semantically correct next\\ntactic state.*\\n\\nWe agree that a drawback of these metrics, as you point out, is that they only compare the lexical similarity of two\\nsequences.\\nThere may be similar sequences which have quite different semantics, or are not valid tactic states.\\nRegardless, they are still useful (although imperfect) metrics, as they demonstrate how well our model\\ncan predict the environment output without actually executing the tactic.\\nBeyond these metrics, we also demonstrate that our tactic representations effectively capture the semantics of tactics\\nin Appendix A.3.\\n\\nWe also note that these metrics are used in other domains where semantic similarity is desired, such as translation and\\ncode completion. For example, Figure 9 in the survey from [1] finds that BLEU, ROUGE and Top-k are among the most widely\\nused metrics for evaluating code completion.\\n\\nAs far as we are aware, there is no simple evaluation metric for capturing semantic similarity (we are open to test\\nother metrics, if you have suggestions).\\nGiven this, we did initially investigate an \\\"Autorater\\\" approach (discussed in [2]),\\nwhere we have a high performance LLM (Gemini-Pro) evaluate predictions in terms of their semantics. \\nWe didn't include these results in the paper as we thought it would distract from the main results, as it is somewhat \\nlengthy to cover.\\nTo help address your concern, we present an overview of the approach and results here, which we can include\", \"as_an_appendix_in_the_revision_if_you_find_it_useful\": \"We evaluated the Output predictions for the COMBINED vs SEPARATE models in Section 2 by asking the Gemini-Pro LLM to\\nscore both in terms of their semantics. 
The prompt used, along with an example being assessed, is given to the LLM as below:\"}", "{\"comment\": \"Thank you for the detailed and helpful comments. We appreciate your willingness\nto increase the score if we can address your concerns, which we believe are all reasonable and achievable with\nsome additional discussion and minor updates. We will address your concerns in order below.\n\n# Q.1\n\n*For representing the transitions, the paper adopts a specific architecture - encode the goal and tactics followed by a\ndecoder to get the outcome (next goals or error) and a predictor to determine whether the transition led to an error and\nthe time taken. There is no intuition provided as to why this is a better architecture. Why not just have a decoder\npredict the entire tuple? What other architectures were considered and why is the current one most promising? Even if\nthere is no experimental evaluation of other baselines, it would be good to include discussion based on whatever\nexperimentation was done to select this architecture.*\n\nWe agree that an expanded discussion motivating our transition model architecture will help with the clarity of the\npaper. Our general comment G.1 provides the intuition and motivation of our architecture; however, we will expand this to\naddress your specific questions.\n\nWe don't have the Decoder predict the entire tuple for two reasons. Firstly, we require our time\nprediction to be real-valued, and the status prediction to be in [0,1]. \nThis is an unnatural prediction task for the decoder, which would require us to e.g. discretise the time into several buckets,\ncomplicating the architecture further.\nSecondly, we wish to have a fast and efficient Predictor, as it is used in the 3D-Prover algorithm to evaluate tactics.\nUsing a small MLP for the predictor is therefore useful for speeding up 3D-Prover, as we can discard the much \nlarger Decoder (only needing the Encoder and Predictor for 3D-Prover). 
As discussed in G.1, \\nwe will update the paper to clarify these points, as we agree it adds important context to motivate the architecture. \\n\\nWe hope that this clarifies the intuition and motivation behind our architecture, but please\\nlet us know if you have any further questions.\\n\\n# Q.2\\n\\n*When splitting the 500K transitions into train and test, how did the authors ensure that it is not the case that the same goal,\\ntactics, next-goal triplet in the test set does not show up in another proof and has been included in the training set.\\nSince the transitions are being collected across several proofs, how is the train/test partitioning ensured to be non-overlapping?*\\n\\nAddressing the concern of overlapping train and test sets,\\nwe note that every goal state (within and between proofs) is unique.\\nThe search tree is implemented to ensure that all nodes for a given proof have unique goal states,\\nwith tactics for a given node being all unique (so there is no overlap within proofs). \\nThe goal state includes a unique identifier for the proof attempt it belongs to (preventing overlap between proofs).\\nThese factors prevent any overlapping triplets within and between proofs,\\nas we only use transitions from a single proof attempt per goal.\", \"your_comment_did_however_raise_the_question\": \"If we ignore the unique identifier,\\nare there overlapping tuples between proofs as you suggest (where the goal, tactic and response are identical for different proof attempts)? \\nIn this case, the model might ignore the identifier and learn to predict the response based on a previously seen example. 
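A check for such cross-attempt overlaps is straightforward; a minimal sketch, assuming transitions are stored as (goal, tactic, response, attempt_id) tuples (a hypothetical layout for illustration, not our exact data format):

```python
def cross_attempt_overlaps(transitions):
    """Return the (goal, tactic, response) triplets that appear in more
    than one proof attempt once the attempt identifier is ignored."""
    attempts_per_triplet = {}
    for goal, tactic, response, attempt_id in transitions:
        # group the attempt ids seen for each identifier-stripped triplet
        attempts_per_triplet.setdefault((goal, tactic, response), set()).add(attempt_id)
    return {t for t, ids in attempts_per_triplet.items() if len(ids) > 1}
```

Triplets flagged this way can then be removed before the train/test split.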
\\nTo address this concern, we examined our dataset and found 136 of these overlaps out of the 498,236 transitions.\\nAlthough such a small overlap would not impact our results in any noticeable way,\\nwe will update our data processing to detect and remove these instances.\\n\\nWe appreciate your identification of this, which will help improve the quality of our dataset.\\n\\n# Q.3\\n\\n*Could you include not just top-4 but also top-k (k= 1 to 4 or 5) in the Table 1?*\\n\\nWe assume you refer to Table 8 in Appendix A1, for which we will add the Pass@k results for k=1 to 4 in the revision.\\n\\nFor reference, the Pass@k for our experiments are in the table below, where we report the Pass@k based on the order of\\nexecution for each run:\\n\\n| | 3D-Prover (K=8) | 3D-Prover (K=16) | 3D-Prover (K=32) | Random (K=8) | Random (K=16) | Random (K=32) |\\n|--------|-----------------|------------------|------------------|--------------|---------------|---------------|\\n| Pass@1 | 24.9% | 27.8% | 28.6% | 18.0% | 21.2% | 28.1% |\\n| Pass@2 | 26.1% | 29.4% | 29.0% | 22.9% | 28.6% | 29.0% |\\n| Pass@3 | 26.5% | 29.8% | 29.8% | 24.9% | 29.4% | 29.8% |\\n| Pass@4 | 28.6% | 31.0% | 29.8% | 25.7% | 30.2% | 29.8% |\"}", "{\"comment\": \"Thank you for the review and constructive feedback.\\nWe appreciate your comments that our approach provides\\na fresh perspective on tree search in theorem proving.\\nWe hope to address your concerns regarding the soundness of our approach, specifically\\nregarding comparative computational cost and the limitations of the underlying language model.\\nThese are valid and reasonable concerns; however, we would like to highlight that both of these limitations are inherent\\nto much of the research in this area, as we will detail below.\\n\\n## Q.1\\n\\n*The paper filters tactics only after they have been sampled by a language model, so the computational cost of model\\nsampling remains unchanged. 
While the paper tries to reduce Lean prover calls with a specialized filtering algorithm, it\\ndoes not provide a comparative analysis on this aspect. Does 3D-Prover achieve better results than the case that these\\ncomputational resources were used to apply the Lean prover to all tactics the language model generated?*\\n\\nA comparison reallocating resources in this way would be quite difficult, given that the provers and the tactic generator/filters \\nare implemented as separate processes.\\nSimilar to [1], we have a set of tactic generators serving requests from a separate set of proving processes\\n(2 generators/filters serving 4 proving processes in our case).\\n\\nTo control for the underlying hardware setup, it is standard practice in the area to compare approaches based on the\\nnumber of proofs achieved with a fixed environment budget (e.g. [1, 2, 3, 4, 5]).\\nAdditional search time is not considered or reported in any of these approaches,\\nwhich all perform additional computations beyond the underlying tactic generator.\\nAs we discuss from line 408, our search method gives a similar magnitude of improvement to approaches \\nwhich use significantly more resources in their search algorithms.\\nOur embedding model is also significantly cheaper than an LLM call (as used for search in e.g. [3, 5]), \\nand we discuss this overhead in Appendix A.5.\\n\\nGiven the exponential growth of the search tree, the primary benefit of our approach is to enable deeper proofs\\nwhich are not feasible (under the same environment budget) for broader searches without filtering, while\\nmaintaining a high proof success overall. 
Our additional results in G.2 support this further, where we can see that \\napplying all tactics the language model generated (No Filtering) leads to significantly fewer proofs greater than length 3.\\nTo reach proofs of length 6 or 7, as achieved by 3D-Prover, would require a significantly larger environment budget\\nif all tactics were applied, considering the exponential growth of the search tree.\\n\\nReferences:\\n\\n[1] Bansal et al., HOList: An Environment for Machine Learning of Higher Order Theorem Proving, https://arxiv.org/pdf/1904.03241\\n\\n[2] Xin et al., DeepSeek-Prover-V1.5: Harnessing Proof Assistant Feedback for Reinforcement Learning and Monte-Carlo Tree Search, https://arxiv.org/pdf/2408.08152\\n\\n[3] Polu et al., Formal Mathematics Statement Curriculum Learning, https://openreview.net/pdf?id=-P7G-8dmSh4\\n\\n[4] Wu et al., TacticZero: Learning to Prove Theorems from Scratch with Deep Reinforcement Learning, https://arxiv.org/pdf/2102.09756\\n\\n[5] Lample et al., HyperTree Proof Search for Neural Theorem Proving, https://arxiv.org/pdf/2205.11491\\n\\n## Q.2\\n\\n*The paper emphasizes balancing quantity and diversity in tactic selection from the sampled tactics. However, given\\nlimitations in the language model\\u2019s capability, it may not generate tactics diverse enough to meet the problem\\u2019s\\nrequirements. 
This suggests that the proposed method may not fundamentally improve the success rate beyond the language\\nmodel\\u2019s own capacity, limiting 3D-Prover\\u2019s primary advantage to efficiency (though this, as noted in Weakness 1,\\nrequires further estimation).*\\n\\nYou are correct to point out that proof success will be limited by the underlying language model.\\nAs we discuss from line 408, this is a limitation of any search approach,\\nand would apply to all other search algorithms in the domain.\\nImproved search approaches do however have the advantage of being independent of the tactic generator,\\nso they can be used with newer models as they improve. As we mention above, the primary advantage of 3D-Prover is\\nenabling new and deeper proofs to be discovered.\"}", "{\"comment\": \"The example above shows that using a large LLM can parse and interpret the semantics of the two predictions,\\nallowing it to score the semantics better than what would be allowed by a lexical similarity such as BLEU.\\nOf course, there are cases where the LLM is incorrect, however previous work (e.g. related work in [2]) has shown that this approach\\nis still quite effective.\\n\\nWe ran this prompt with 1383 prediction comparisons for transitions from the LeanDojo Novel Premises benchmark.\\nWe ran it twice for each example, where we swap the\\norder of the predictions to remove ordering bias in the LLM. 
We then average the scores over the two orderings.\\nFor this, the 95\\\\% CI for the COMBINED model was (2.7, 2.9), while the 95\\\\% CI for the SEPARATE model was (2.0, 2.2).\\nThis shows a significant increase in the LLM scores for the COMBINED vs SEPARATE model, where it is prompted to\\nscore for semantics rather than syntax.\\n\\nWe hope that this gives you some more confidence in the predictions from our transition model,\\nand that it can capture the semantics of the resulting environment state.\\n\\nReferences:\\n\\n[1] Husein et al., Large language models for code completion: A systematic literature review, https://doi.org/10.1016/j.csi.2024.103917\\n\\n[2] Vu et al., Foundational Autoraters: Taming Large Language Models for Better Automatic Evaluation, https://arxiv.org/pdf/2407.10817\", \"title\": \"Q5 Autorater summary\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": [\"This paper proposes a method to filter tactics generated by sampling a tree search policy based on diversity in formal theorem proving. To capture tactic diversity, it uses an encoder to embed both the tactic and the goal it is applied to, followed by Determinantal Point Processes (DPP) to select tactics based on these embeddings.\", \"The encoder used for embedding is trained on two tasks: (a) an auxiliary decoder that applies cross-attention on the embedding to predict the environment\\u2019s response after applying the tactic (yielding the next goal if successful, or error messages if unsuccessful); (b) a single-layer MLP that predicts both the success of the tactic and the time required by the Lean prover to verify it. 
In the final tactic selection step, embeddings are also weighted according to predictions from the MLP regarding tactic success and Lean prover verification time.\", \"Experimental results show that the tactics selected by 3D-Prover achieve a significantly higher pass rate compared to the baseline.\"], \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. This paper introduces an additional filtering mechanism within language model-based tactic tree search, providing a fresh perspective for research on tree search in theorem proving.\\n2. Experimental results demonstrate noticeable performance improvements even with a relatively small number of filtered tactics, suggesting that the diversity-based screening method is effective for scenarios with limited computational resources.\", \"weaknesses\": \"1. The paper filters tactics only after they have been sampled by a language model, so the computational cost of model sampling remains unchanged. While the paper tries to reduce Lean prover calls with a specialized filtering algorithm, it does not provide a comparative analysis on this aspect. Does 3D-Prover achieve better results than the case that these computational resources were used to apply the Lean prover to all tactics the language model generated?\\n\\n2. The paper emphasizes balancing quantity and diversity in tactic selection from the sampled tactics. However, given limitations in the language model\\u2019s capability, it may not generate tactics diverse enough to meet the problem\\u2019s requirements. 
This suggests that the proposed method may not fundamentally improve the success rate beyond the language model\\u2019s own capacity, limiting 3D-Prover\\u2019s primary advantage to efficiency (though this, as noted in *Weakness 1*, requires further estimation).\", \"questions\": \"Please refer to the first point in Weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the thorough and helpful review. We are glad that you found our approach well motivated and practical, and\\nuseful for advancing proof synthesis in large language models. We hope to address your concerns and questions in order\\nbelow.\\n\\n# Q.1\\n\\n*Section 3.1 could benefit from greater clarity. Specifically, the motivation behind using this method could be expanded\\nupon - why was DPP chosen over other sampling algorithms for balancing quality and exploration in this context?*\\n\\nWe chose DPP due to the inherent trade-off between diversity and quality, and its simplicity.\\nWe are unaware of other sampling algorithms which\\ndo this inherently (although we appreciate any alternative suggestions!).\\nGiven DPPs are used in a variety of areas to successfully increase diversity, such as \\ndocument summarization, pose estimation and video summarisation, they are a natural choice [1].\\nWe agree this context would help clarify the motivation behind DPP, so we will include this in the revision.\", \"references\": \"[1] Lample et al., HyperTree Proof Search for Neural Theorem Proving, https://arxiv.org/pdf/2205.11491\"}", "{\"metareview\": \"This paper concerns pruning proof search of neural theorem proving and proposes 3D-Prover, which is built on top of the previous work ReProver and consists of a representation learning component capturing the transition semantic of theorem proving, and a filtering mechansim for proof tactics using Determinantal Point Processes (DPPs). 
DPPs are used to select semantically diverse and high-quality tactics. The experimental evaluation shows that 3D-Prover outperforms baselines like Top-K and random filtering. Using DPPs to filter proof tactics in the context of neural theorem proving is novel; however, its effectiveness is not significant. Reviewers and the AC share concerns regarding the evaluation setup (e.g., filtering vs no-filtering) and effectiveness (e.g., marginal improvement compared to simple baselines).\", \"additional_comments_on_reviewer_discussion\": \"During the rebuttal period, the authors shared examples and new results upon suggestions from reviewers (hTv9,1vwm,HiN1). The new information does help to resolve some concerns, but the main concern about the marginal improvement still remains.\"}", "{\"comment\": \"Thank you for the follow-up response, and your active engagement in the discussion.\\nWe are unsure of what you mean by **effectiveness** vs **efficiency**; we focus on the standard metrics \\nof proof success (as defined by Pass@1), for which we show an improvement. \\nInterpreting efficiency as the number of proofs found per wall clock time,\\nyour proposed experiment could provide an additional perspective, and we give additional results below. \\nHowever, we disagree that the absence of such a comparison is a major weakness of the paper.\\nWe again emphasise that our approach was compared\\nagainst baselines using the same methodology (i.e. using a fixed environment budget)\\nas related work, which we detailed in our original response.\\n\\nWe firmly disagree on the point that **The reason prior works did not provide search time comparisons is that this\\nwas not their core contribution.**\\nFor example, the primary algorithmic innovation of Lample et al. [1] is their development of the MCTS-inspired\\nHTPS algorithm, which is a novel search approach, as is our method.\\nSimilarly, Polu et al. [2] and Wu et al. 
[3] both purport improvements from\\nnovel search approaches, which they evaluate in comparison to baselines without controlling for additional search\\ncomputations.\\nThe fact that these papers had additional contributions does not impact the soundness of the evaluation of their search\\napproaches.\\n\\nAs we have a set of GPU tactic generators + filters serving requests from a separate set of CPU proving clients, which\\nthen\\nexecute the tactics in Lean, we cannot easily reallocate resources from one to the other.\\nIn Appendix A.5, we discuss the overhead of our filtering approach, which is less than the cost of an LLM call as is used by other search approaches (e.g. [1, 2]), but varies significantly based on the underlying\\nhardware setup.\\nFor example, given additional GPU memory, we could significantly speed up filtering by batching tactic\\nembeddings.\\nThis GPU memory would be of no benefit to a model without filtering, as it could not be used to speed up the tactic\\ngeneration or the proving clients.\\nCombined with the asynchronous setup of the provers and tactic generators, this makes comparisons based on execution wall time highly hardware dependent.\\n\\nWe do however acknowledge your strong preference for such a comparative experiment.\\nGiven this, we ran a single pass over miniF2F-valid for the Top-K baseline and 3D-Prover,\\nwhere we include the tactic generation and filtering time (in the case of 3D-Prover) in the 600 second budget, which\\ntook approximately 3 days to finish on our hardware.\\nThe number of proofs found from each approach, the Pass@1 percentage, and relative improvement is given in the table below:\\n\\n| K | Top-K | 3D-Prover | Gain |\\n|------|------------|----------------|-------|\\n| K=8 | 52 (21.3%) | **58 (23.8%)** | 11.5% |\\n| K=16 | 63 (25.8%) | **65 (26.6%)** | 3.2% |\\n| K=32 | 66 (27.0%) | **67 (27.5%)** | 1.5% |\\n\\nWe still see an improvement given by 3D-Prover, particularly for deeper searches.\\nThe 
magnitude of the improvement is decreased; however, as discussed in A.5, the majority of the\\nfiltering time is used for tactic embeddings, so we expect these 3D-Prover results would\\nimprove further with batching if given more GPU memory (which would not help Top-K). As we also discuss in A.5,\\nthere could be further speed improvements with different transition model architectures.\\nAs a proof of concept, we used the most performant transition model architecture from Section 2 for 3D-Prover, evaluating it with respect to environment budget, in line with related work.\\nWe are however happy to include some of this discussion, as well as the results above, as an extension to A.5 if you\\nbelieve it will benefit the paper.\\n\\nReferences:\\n\\n[1] Lample et al., HyperTree Proof Search for Neural Theorem Proving, https://arxiv.org/pdf/2205.11491\\n\\n[2] Polu et al., Formal Mathematics Statement Curriculum Learning, https://openreview.net/pdf?id=-P7G-8dmSh4\\n\\n[3] Wu et al., TacticZero: Learning to Prove Theorems from Scratch with Deep Reinforcement Learning, https://arxiv.org/pdf/2102.09756\"}
But the paper does not show whether the effect of this filtering is significant enough, such that it\\noutperforms other state-of-the-arts that do not use filtering or use some kind of state-of-the-art filtering.*\\n\\nWe emphasise that our approach serves to improve search given an\\narbitrary tactic generator, so it can be applied to new models as they are developed (which\\nis important, given the rapid release of new and improved tactic generators).\\nIt should therefore be compared to other methods improving search, rather than to the tactic generator itself, which we discuss above.\\nWe are also not aware of any state-of-the-art filtering approach in this area to compare to.\\n\\nDue to resource constraints, we were only able to evaluate our framework using the smaller ReProver model as the base\\ntactic generator (which we mention in line 348), where we improved the performance without modifying the base model.\\nCurrent state-of-the-art methods are significantly larger, and it was not feasible for us to run the scale of our\\nexperiments on our hardware using these as a tactic generator (ReProver is approximately 300M parameters compared to the\\n7B of DeepSeek-Prover or InternLM).\\n\\nReferences:\\n\\n[1] Xin et al., DeepSeek-Prover-V1.5: Harnessing Proof Assistant Feedback for Reinforcement Learning and Monte-Carlo Tree Search, https://arxiv.org/pdf/2408.08152\\n\\n[2] Polu et al., Formal Mathematics Statement Curriculum Learning, https://openreview.net/pdf?id=-P7G-8dmSh4\"}
Given these thorough and satisfactory responses, along with the authors' commitment to incorporating these clarifications in their revision, I have increased my score to 8 (accept).\"}", "{\"comment\": \"No problem. Was our response able to address your concerns, or is there anything else you would like addressed to raise your score?\"}", "{\"comment\": \"```\\nYou are an expert in Lean 3 theorem proving. Your task is to evaluate and rank the quality of two predictions, where each prediction is the result of applying a given tactic to a given goal. Your evaluation should be based on how close the prediction is to the true outcome, semantically. A closer prediction syntactically is not necessarily better. For example, if the prediction is a negation of the true outcome, it should be ranked lower than a prediction that is a conjunction of the true outcome. The input first contains the original goal, which is a list of premises, followed by the goal(s) to prove (these are lists of hypotheses, followed by the \\\"\\u22a2\\\" character and the goal itself). Then you are given the applied tactic, the true outcome, and two predictions. The output should be your reasoning, followed by a ranking of the two predictions, giving a score from 0 to 5 to each.\\n\\nYour reasoning should first explain the goal, the tactic, and the true outcome. Then, you should explain the two predictions, and how they relate to the true outcome. 
Finally, return your output as a list of two scores, where the first score is for the first prediction, and the second score is for the second prediction.\\n\\nGoal: case list.cons\\n\\u03b1 : Type u_1\\ni : \\u2115,\\nj : \\u03b1,\\nh' : i < [j].length,\\nl : list \\u03b1,\\nh : i < (j :: hd :: tl).to_buffer.size,\\nh' : i < (j :: hd :: tl).length\\n\\u22a2 (j :: hd :: tl).to_buffer.read \\u27e8i, h\\u27e9 = (j :: hd :: tl).nth_le i h'\\n```\\n\\nTactic: `induction l with hd tl IH`\\n\\nTrue outcome: case list.cons, list.nil\\n\\u03b1 : Type u_1,\\nj hd : \\u03b1,\\ntl : list \\u03b1,\\nIH : \\u2200 (h : i < (j :: tl).to_buffer.size), (j :: tl).to_buffer.read \\u27e8i, h\\u27e9 = (j :: tl).nth_le i h',\\n\\nPrediction 1: case list.cons, list.nil\\n\\u03b1 : Type u_1,\\n\\nPrediction 2: case list.cons, list.nil\\n\\u03b1 : Type u_1,\\n\\nThe LLM response for this was:\\n\\n```\\nThe goal is to prove that `(j :: l).to_buffer.read \\u27e8i, h\\u27e9 = (j :: l).nth_le i h'` where `j : \\u03b1`, `l : list \\u03b1`, `i : \\u2115`, `h : i < (j :: l).to_buffer.size`, and `h' : i < (j :: l).length`. In other words, we want to show that reading the `i`-th element of the buffer obtained by converting the list `j :: l` is the same as getting the `i`-th element of the list directly using `nth_le`.\\n\\nThe tactic `induction l with hd tl IH` applies induction on the list `l`. This will generate two goals:\\n\\n1. **Base Case (list.nil):** Prove the goal for `l = []`.\\n2. 
**Inductive Step (list.cons):** Assuming the goal holds for `l = tl`, prove it for `l = hd :: tl`.\\n\\nThe true outcome is the expected result of applying the `induction` tactic, generating the base case and inductive step with the correct assumptions and goals.\\n\\nLet's analyze the predictions:\\n\\n*Prediction 1:*\\n\\n**Base Case:** It's identical to the true outcome.\\n**Inductive Step:** It makes a subtle error in the inductive hypothesis (IH). The true outcome has `\\u2200 (h : i < (j :: tl).to_buffer.size) (h' : i < (j :: tl).length)`, which quantifies both `h` and `h'` universally. Prediction 1 uses `\\u2200 (h : i < (j :: tl).to_buffer.size), i < (j :: tl).length \\u2192`, making the dependency between `h'` and the conclusion implicit instead of explicit.\\n\\n*Prediction 2:*\\n\\n**Base Case:** It swaps the positions of `h` and `h'` in the context, which is semantically irrelevant and doesn't affect the proof.\\n**Inductive Step:** It makes a more serious error by completely omitting `h'` from the inductive hypothesis. This leads to a weaker IH that cannot be used to prove the inductive step.\\n\\n**Ranking:**\\n\\n**Prediction 1:** 4/5 - It gets the base case right and the inductive step almost correct. The error in the IH is subtle and could potentially be worked around.\\n**Prediction 2:** 2/5 - The base case is technically correct despite the irrelevant swap. 
However, the omitted `h'` in the inductive hypothesis significantly weakens the prediction and makes it much less useful.\\n\\n**Output:** [4, 2]\\n```\", \"title\": \"Q.5 Autorater example\"}", "{\"comment\": \"We thank you for the very detailed and constructive review.\\nWe appreciate your comment that our approach enabling deeper proofs is impressive,\\nand that we have a strong result in enabling more efficient proof search.\\nWe address your concerns in order below.\\n\\n# Q.1\\n\\n*With diversity-driven tactic filtering, the Best-First Search can explore deeper proofs, leading to a faster proof\\nsearch overall. This is impressive. However, I did not find a comparison of execution times with and without filtering.\\nTable 7 compares execution times with other baselines, but the paper does not clarify the improvement in proof search\\ntime specifically due to filtering.*\\n\\nTo clarify, the benefit of our approach is not necessarily to make proof search faster. It is to facilitate the\\ndiscovery of new proofs by improving the search algorithm of a base model, given the same environment budget. If\\ndesired, our approach can however optimise for execution time as seen in Table 7.\\n\\nWe agree that we should include a comparison of execution times to the no filtering setup, which we will add to the\\nrevision. The average execution time for tactics without filtering was 232 milliseconds (plus/minus 0.9), which was longer\\nthan any of the filtering methods. We will also include the no filtering numbers for the other ablations (i.e. 
Table 4,5,6),\\nas we agree that this is a useful comparison to make.\\n\\n# Q.2\\n\\n*While this is a strong result, it would be even more compelling if the paper included a comparison against a\\nno-filtering setup (i.e., considering all 64 tactics).*\\n\\nThank you for this suggestion, as it is a reasonable and important comparison to include, which will make our \\nresults more compelling.\\nWe will include this in the revised version, but to summarise here, the pass@1 results for no filtering (i.e. top-K=64) are 27.8\\\\% for miniF2F-test and\\n27.9\\\\% for miniF2F-valid. These happen to be the same results as for Top-K=32, for which we show an improvement in\\nperformance as seen in Table 2. As we mention above, we will also include the no-filtering results for our ablation studies.\\n\\nWe also show in G.2, for a larger dataset (LeanDojo Novel Premises), a 3.9\\\\% relative improvement over no filtering,\\nwith a particularly large increase in the number of deep proofs discovered. This further supports our approach\\nwhen compared to no filtering.\\n\\n\\n# Q.3\\n*The preliminaries and transition models presented in Section 1.2 and 2.1 could have been written in a simplified manner.\\nSome notations are hard to follow and expressed in a circuitous way.*\\n\\nWe can see how some of the notation might be difficult to follow, as we have tried to be as precise as possible.\\nWe believe that moving some of the more detailed notation to an appendix would help simplify this, which we will do for\\nthe revision.\\n\\n# Q.4\\n\\n*Why is the decoder taking as input a concatenation of e and g, when e is already an encoded and pooled version of t and\\ng? What is the motive of having g twice?*\\n\\nAs with reviewer HiN1, we agree that some additional motivation and explanation of the transition model architecture\\nwould be beneficial, which we cover in G.1. 
We expand on this here to address your specific question.\\n\\nTo ensure that we generate a useful tactic representation in e, the only tactic information we allow\\nthe Decoder is through e.\\nThe goal g is included in e as it allows for improved, goal-aware tactic representations.\\nThis approach gives tactic representations which can reflect the context they are applied in.\\nFor example, if the goal state has a lemma which is referenced by the tactic, then the encoder will benefit by having\\naccess to the goal when generating the encoding.\\nAs we show in Table 1, if we don't include the goal in the tactic encoding, then the performance is greatly reduced (\\nCOMBINED vs SEPARATE).\\n\\nAs we only pool the tactic tokens (after they have attended to the goal) to generate\\ne (Figure 2),\\nthere is a large loss of information about the goal in the encoding.\\nWe therefore also provide the Decoder with the original goal tokens, so that it has full access to the original goal for the output\\nprediction. We can run an ablation removing this, if you think it will be useful, although the NO TACTIC baseline in\\nTable 1 shows the performance if we give the Decoder g alone, without the (goal, tactic) representation.\\n\\n\\n*How is concatenating an encoded string to a non-encoded string g, meaningful?*:\\n\\nThe input to the Decoder will be the tactic encoding vector, concatenated with the token vectors for the original goal\\nfrom the embedding matrix of the Decoder. This is meaningful as the Decoder can now use information from the tactic (through\\nthe encoding e) as it attends to the tokens of the original goal, allowing it to better predict the output than if it has\\nno tactic information (as we show in the NO TACTIC baseline in Table 1).\"}" ] }
7fuddaTrSu
PACE: Physics Informed Uncertainty Aware Climate Emulator
[ "Hira Saleem", "Flora D. Salim", "Cormac Purcell" ]
Climate models serve as critical tools for evaluating the effects of climate change and projecting future climate scenarios. However, the reliance on numerical simulations of physical equations renders them computationally intensive and inefficient. While deep learning methodologies have made significant progress in weather forecasting, they are still unstable for climate emulation tasks. Here, we propose PACE, a lightweight 684K parameter Physics Informed Uncertainty Aware Climate Emulator. PACE emulates temperature and precipitation stably for 86 years while only being trained on emissions data. We incorporate a fundamental physical law of advection-diffusion in PACE accounting for boundary conditions and empirically estimating the diffusion co-efficient and flow velocities from concentrations data. PACE has been trained on 15 climate models provided by ClimateSet outperforming baselines across most of the climate models and advancing a new state of the art in a climate diagnostic task.
[ "Physics Informed Machine Learning", "Climate Modelling" ]
https://openreview.net/pdf?id=7fuddaTrSu
https://openreview.net/forum?id=7fuddaTrSu
ICLR.cc/2025/Conference
2025
{ "note_id": [ "u8OmO1GQJd", "gGgYsE0JCj", "V4fbSbzbrs", "UrMXxxdGnD", "SKaZZV2bMk", "Qc6mqv2GjV", "4pZPSNmLqS", "0ikSm6nbnL" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1729604346930, 1730577165617, 1730584023332, 1730390128886, 1730827658930, 1730719807728, 1730112667183, 1732773082098 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8434/Reviewer_qQui" ], [ "ICLR.cc/2025/Conference/Submission8434/Reviewer_gcZ7" ], [ "ICLR.cc/2025/Conference/Submission8434/Reviewer_N16f" ], [ "ICLR.cc/2025/Conference/Submission8434/Reviewer_mPNq" ], [ "ICLR.cc/2025/Conference/Submission8434/Reviewer_Ww3w" ], [ "ICLR.cc/2025/Conference/Submission8434/Reviewer_svZG" ], [ "ICLR.cc/2025/Conference/Submission8434/Reviewer_D1xx" ], [ "ICLR.cc/2025/Conference/Submission8434/Authors" ] ], "structured_content_str": [ "{\"summary\": \"The paper proposes an ODE/PDE-based climate simulator trained from climate models. The methodological contribution is adding diffusion and noise process to the dynamics.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"3\", \"strengths\": [\"The paper boasts outstanding results.\", \"The idea of adding diffusion, noise and greenhouse forcings to advection ODEs is excellent.\"], \"weaknesses\": [\"The language of the paper is weak, and hurts the paper\\u2019s clarity and message. Many sentences are oddly phrased, broken or difficult to understand. The language is imprecise and lacks citations. Notation is imprecise, there are hanging equations, poor formatting, etc. The figures are poorly formatted. Experiments and results are difficult to follow.\", \"I did not really understand what this paper even does. I think the model learns to match to output of existing climate simulator. I didn\\u2019t understand how the ODE operates, and what the neural networks are doing. 
I think the paper is claiming to do an 86-year-long ODE rollout, but I have a really hard time believing this with pretty much all details missing. If this is really done, how can it be in any way accurate? Where are the GHG emissions coming from for 2090?\", \"The method contributions of the noise process, diffusion constant estimation, and boundary conditions are unconvincing. The noise process is not presented as a proper Wiener process, and I don\\u2019t see how it doesn\\u2019t lead to explosion or vanishing. The constant estimation lacks motivation, and makes little sense. I don\\u2019t think boundary conditions are implemented, despite claims.\"], \"questions\": [\"Eq 1 doesn\\u2019t have divergence. Is this intentional? There are also no sinks or sources, so this system is limited to div-free dynamics. Is climate div-free? There are also no forcings or GHG here. f_theta is undefined.\", \"I don\\u2019t understand the overall dynamical system. Eq 1 says that we use an advection-diffusion equation. Ok, but then Eq 6 says that we use a neural network instead of that (I assume). And then Eq 2 says that we also use another neural network to predict states from GHG. I\\u2019m just lost on what happens. Having an algorithm box, or a complete model description, would help. Fig 2 isn\\u2019t helpful (quite the opposite).\", \"Why is D chosen as var(C)? I see no motivation, and no reason why it should be like this. I don\\u2019t really see a connection from GHG to diffusion in the first place. The GHG adds energy to the system, which should amplify the advection. Instead, in this model extra emissions are smoothing the weather.\", \"I don\\u2019t see why the embeddings would help with the boundary conditions. The state can still have arbitrary values at the borders.\", \"I did not understand the experimental setting: what do you do, what is data, what is known, what is unknown, what do you estimate, what do you predict, what do you simulate. 
I think that you are mapping emission inputs into temperatures, separately for each time snapshot. Some ODE simulations are inside this, but I am not sure what.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The authors present a novel model for regressing climate responses onto anthropogenic emissions, utilizing a 2-D advection-diffusion-based NeuralODE. They apply the model to the ClimateSet dataset and compare with appropriate baselines, finding improved accuracy for their approach.\", \"soundness\": \"1\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The authors use a standard dataset and compare against appropriate baselines. They include an ablation study which explores the contribution of different aspects of their model to its skill. The model appears to achieve good skill in comparison to these baselines, although I have serious misgivings about the approach and its motivation (outlined below).\", \"weaknesses\": \"This paper has a number of serious, and some more minor, weaknesses which I will outline below.\", \"major_weaknesses\": \"1) The authors use a 2-D advection diffusion model as the basis for their NeuralODE \\\"considering that climate system evolves according to a 2D advection-diffusion process\\\"; this is not true to any sensible approximation. Over weather time scales the atmosphere evolves according to Navier-Stokes, and over the longer timescales described here there is minimal large-scale advection. There is certainly no diffusion in the emissions (c.f. section 3.2.1). The emissions are prescribed from monthly varying estimates of large-scale industrial sources - any changes in location or magnitude are purely socio-economic, there is no effect of the atmosphere. 
Similarly, while the pattern of the temperature and precip emerge from Navier-Stokes over long time averages, the changes on monthly timescales are almost entirely due to changing boundary conditions. This undermines their rationale for the NeuralODE and renders their estimate of the coefficients in 3.2.1 as nonsense. It also represents a fundamental misunderstanding of the dataset and the task. \\n\\n2) The authors claim to frame climate emulation in a new setting due to inadequacies in autoregressive models (L34-51). This is not true. Their baseline dataset, ClimateSet, uses such a framing, and is itself based on ClimateBench (Watson-Parris et al. 2022) for which there are many example emulators (such as Bouabid et al. 2024), and is itself only an extension of years of literature in framing climate model emulation in this way (e.g. Castruccio et al., 2014; Holden & Edwards, 2010). Discussion around and claims of 'stability' (L17, L57, L103) are also nonsense, as these are regression-based models and stable by construction. \\n\\n3) The discussion around periodic boundary conditions (3.2.3) is very confusing. The authors start by discussing the spatial periodicity of the Earth given the spherical surface they're modeling, but then model that using 'harmonic embeddings to learn seasonal variations and cyclical changes in climate data.' which models a different kind of periodicity that is inherently temporal. \\n\\n4) The evaluation of the models against the full 2015-2100 period of SSP245 is misguided, and the comparison against the first 10 years (shown in Figure 8) is wrong. The original ClimateBench (Watson-Parris et al. 
2022) protocol which ClimateSet is based on, evaluates against the last 20 years of SSP245 because the first ~20-50 years are very similar to the other scenarios used for training and therefore not a good metric of skill.\\n\\nGiven the fundamental lack of understanding of the problem space shown by the authors (based on the above), I have little faith that the comparison models (ACE and LUCIE) have been faithfully transferred to their task and therefore have little faith in their comparisons. I also find it unlikely that the ConvLSTM performs worse than a UNet if they are of comparable size (but no details are provided for such a comparison). If the skill presented by the authors is real, I suspect it stems from their modeling of uncertainty and the (unphysical) regularization provided by the NeuralODE to avoid overfitting.\", \"references\": [\"Watson-Parris, D., Rao, Y., Olivi\\u00e9, D., Seland, \\u00d8., Nowack, P., Camps-Valls, G., et al. (2022). ClimateBench v1.0: A benchmark for data-driven climate projections. Journal of Advances in Modeling Earth Systems, 14, e2021MS002954. https://doi.org/10.1029/2021MS002954\", \"Bouabid, S., Sejdinovic, D., & Watson-Parris, D. (2024). FaIRGP: A Bayesian energy balance model for surface temperatures emulation. Journal of Advances in Modeling Earth Systems, 16, e2023MS003926. https://doi.org/10.1029/2023MS003926\", \"Castruccio, S., McInerney, D. J., Stein, M. L., Crouch, F. L., Jacob, R. L., & Moyer, E. J. (2014). Statistical emulation of climate model projections based on precomputed GCM runs. Journal of Climate, 27(5), 1829\\u20131844. https://doi.org/10.1175/jcli-d-13-00099.1\", \"Holden, P. B., & Edwards, N. R. (2010). Dimensionally reduced emulation of an AOGCM for application to integrated assessment modelling: Dimensionally reduced AOGCM emulation. Geophysical Research Letters, 37(21). 
https://doi.org/10.1029/2010gl045137\"], \"questions\": \"The authors are welcome to respond to the criticisms laid out above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces a light-weight, physics-informed neural network, PACE, for learning the mapping from forcing scenarios (e.g. greenhouse gas concentrations; GHG) to atmospheric states (here, temperature and precipitation) in a diagnostic-type approach for climate model emulation.\\nPACE uses physics-informed features, extracted by a Neural ODE that solves the 2D advection-diffusion equation for the GHG global inputs maps, to predict the corresponding global temperature and precipitation maps for the given forcings timestep (month and year) with a lightweight neural block based on convolutions, pooling layers, and MLPs.\\nPACE achieves promising RMSE results on the ClimateSet benchmark dataset when compared to other neural network-based emulators.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"Efficient, accurate climate emulation is an important topic with the potential for significant impact in democratizing cheap climate projections under various greenhouse gas emission scenarios.\", \"Using a physics-informed featurization technique based on an advection-diffusion equation is an original idea.\", \"PACE generally is amongst the best, if not the best, performing models compared to the reported baselines, in terms of RMSE.\"], \"weaknesses\": [\"1. Grammatical errors and careless statements plague the manuscript. It should be carefully proofread. I'm including some of the grammar mistakes/typos at the end of the \\\"Weaknesses\\\" section. 
Here are some examples for the careless statements, just from the introduction and related work:\", \"\\\"The past decade has seen superior performing data-driven weather forecasting models\\\" -> \\\"The past **years** have seen superior performing data-driven weather forecasting models\\\"\", \"The sentence *\\\"the medium range forecasting ability makes them unstable for climate modelling several years into the future\\\"* doesn't make sense. Just because a model is able to perform medium-range forecasting doesn't make it unstable (see e.g. ACE).\", \"Citing Gupta & Brandstetter (2022) in the context of *\\\"Climate models are governed by temporal partial differential equations (PDEs)\\\"* doesn't make sense. If you choose to cite something, it would be better to cite a standard textbook on climate science or at least a climate science paper.\", \"I don't think that Table 1 shows anything related to data-efficiency, as claimed by the authors *\\\"making it data-efficient as shown in Table 1\\\"*. The authors might have meant computational efficiency.\", \"The claim that *\\\"To address this gap, we propose PACE, which treats climate emulation as a diagnostic-type prediction\\\"* is misleading without making clear that prior work (e.g. ClimateBench or ClimateSet) does exactly this.\", \"I don't think that *\\\"Nguyen et al. (2023a) accounts for multi-model training, however it is limited to medium range forecasting.\\\"* is true. ClimaX contains climate emulation experiments on the ClimateBench dataset.\", \"The citation format is sometimes off. Not using brackets for non-inline citations hurts the reading flow.\", \"2. Basic mistakes or imprecisions:\", \"Including ACE and LUCIE in Table 1 is unfair since they were designed for the \\\"autoregressive\\\" climate-dynamics emulation problem rather than the diagnostics-type emulation problem studied in this paper. The inputs-outputs are quite different between these emulation approaches. 
The relationship between them is more complex in the autoregressive case.\", \"Section 3.3.: The name of the Convolution Block Attention Module (CBAM) is misleading since it contains no attention layers. Similarly for the \\\"channel attention map\\\" and the \\\"spatial attention map\\\".\", \"The abstract mentions that *'While deep learning methodologies have made significant progress in weather forecasting, they are still unstable for climate emulation tasks\\\"*. In my opinion, this statement is wrong and misleading: i) ACE, LUCIE, or Spherical DYffusion [1] are counterexamples of pure deep learning methods that perform stable, long-term climate emulation with reasonable weather forecasting skill; ii) The statement suggests to me that the paper deals with emulation of ***temporal*** climate dynamics (and producing stable, long-term rollouts). However, this is not true since the paper deals with diagnostic-type climate emulation where the mapping from forcings (e.g. GHG) to climate states (e.g. temperatures) is learned (climate dynamics are not being emulated).\", \"Adaptation of the SFNO architecture (especially Appendix A.2) is not consistent with the configuration from LUCIE (nor is it with the one from ACE nor the original SFNO paper) as wrongly claimed by the paper. For example, the latent dimension is 72 for LUCIE and 256 for ACE, which are both much larger than the 32 used in this paper (similarly for the number of SFNO blocks). Lastly, it's not clear to me why the paper chooses to add *\\\"a 2D convolutional layer designed to handle inputs with 4 channels and produce outputs with 2 channels\\\"* rather than simply changing the number of input and output channels of the original SFNO architecture.\", \"3. The strength of the results is debatable\", \"I have doubts about the interpretation shown in Fig. 1. The climate models show clear increasing temperature trends, which are not properly emulated by PACE. 
In one case there's no clear increasing trend (is PACE simply learning the mean?), in the other case it's much smaller than the climate model one. As a side question, what SSP is this? Can you include that in the caption please?\", \"Fig. 4 shows that PACE's predictions are very pixelated. This is a problem in climate modeling, where high spatial resolutions are highly desirable. The climate models in CMIP6 are already relatively coarse, so it seems important to at least keep their granularity.\", \"No error bars are included. I strongly recommend re-training PACE (and the best baselines) with different random seeds, and reporting error bars on the corresponding RMSEs. Otherwise, it is hard to judge how significant the results are, especially since the main results (e.g. Fig. 3, 6, 7) don't seem to indicate a clear edge for PACE compared to the baselines.\", \"Diagnostic-type climate emulation, as studied in this paper, of temperature (and in some cases even for precipitation [2]) has been shown to work well with simple, non-neural ML approaches like Gaussian Processes (see ClimateBench and ClimateSet) and even linear regression (see [2]). Including these approaches would be crucial, given their simplicity. I appreciate the point of the authors that achieving good RMSEs on ClimateSet with a lightweight neural network is possible, but these non-neural approaches are important to include to carefully compare PACE to even more lightweight approaches.\", \"The title and model mention uncertainty aware climate emulation, but none of the experiments study this (e.g. ensembling and comparisons to the CMIP6 ensembles themselves).\", \"Can you share insights with respect to the training and inference runtimes of PACE? How does it compare to fully neural approaches that don't require ODE solvers?\", \"4. Some method details are unclearly presented/lack explanation.\", \"Can you elaborate on Eq. 14? The relationship to Eq. 13 and the rest of the paper is not clear to me. 
Is $y$ the climate model temperature/precip. target data? What do you use for $\\\\sigma^2$? How do you choose it?\", \"Information about the periodic boundary condition (PBC) is completely missing. Literally the only information that the manuscript gives is *\\\"We implement periodic boundary condition (PBC) to simulate the entire planet\\\"*. How this is implemented is not discussed.\", \"Similarly, it's not clear to me how/where the \\\"harmonic embeddings\\\" are used in PACE. The diagram in Fig. 2 doesn't show them and only their definition is stated in the manuscript itself. What do you use for $t$? Also, the section title says \\\"Harmonics Spatio-Temporal Embeddings\\\" but their definition suggests that they're temporal at most.\", \"Fig. 2 diagram indicates that a \\\"Adaptive Pooling\\\" module is used at the end of PACE. I could not find any information about this module anywhere else in the manuscript.\", \"I presume that including such a module is important because the \\\"spatial attention map\\\" outputs (which in the diagram comes just before the adaptive pooling module) are squished to (0, 1) by the sigmoid function, which does not seem to match the actual range of the standardized targets.\", \"How exactly is a Neural ODE used inside PACE? You say that you use the dopri5 ODE solver, which is a traditional method not based on neural networks. 
This seems to contradict the claim that a Neural ODE is used.\", \"A selection of typos (but note that there are many more that should be fixed):\", \"\\\"medium range\\\" -> \\\"**medium-range**\\\"\", \"\\\"two key phenomenon\\\" -> \\\"two key **phenomena**\\\"\", \"\\\"modelling key physical law\\\" -> \\\"modelling key physical **laws**\\\"\", \"Line 159: \\\"it's\\\" -> \\\"**its**\\\"\", \"Line 162: \\\"descritized\\\" -> \\\"**discretized**\\\"\", \"Also, the global maps in Figures 2 and 4 are \\\"upside-down\\\".\"], \"references\": \"[1] Probabilistic Emulation of a Global Climate Model with Spherical DYffusion (https://arxiv.org/abs/2406.14798; NeurIPS 2024)\\n\\n[2] The impact of internal variability on benchmarking deep learning climate emulators (https://arxiv.org/abs/2408.05288)\", \"questions\": [\"The advection-diffusion equation is inherently temporal, modeling atmospheric dynamics. However, the studied diagnostic-type emulation problem in this paper is not temporal in the sense that inputs and targets are from the same timestep. How do you explain this mismatch?\", \"Can the authors please give examples for the \\\"few\\\" climate emulators that \\\"incorporate GHG concentrations typically rely on autoregressive training regimes. These models predict climate variables at future time steps based on past states, but often fail to account for the projected emissions at those future times.\\\"?\", \"Can you include the PACE's results, with all its components, in Fig. 5, please? It seems to me that only its ablated versions are included there.\", \"What are the specifics of the Neural ODE (NODE) ablation? You say that *\\\"Neural ODE models the advection diffusion dynamics and extract features\\\"* in the methods section, but say in the ablations section that NODE corresponds to *\\\"remove the advection diffusion process and only parameterize the Convolution Attention Module using Neural ODE\\\"*. 
These statements seem to contradict each other?\", \"What does Fig. 8 show? What are the reference models? Why do you show averaged TAS? Is the average over time? If so, why not just show the full time series? Why are the values negative? Is it normalized TAS that you're plotting? Which SSP is it? Can you show similar plots for more future years (not only up to 2026)?\", \"Can the authors expand on what they mean by *\\\"Guan et al. (2024) proposed LUCIE (...) to account for the computational complexity of ACE.\\\"*?\", \"(Global) batch size of 1 is quite small. Did you accumulate gradients to alleviate the problem?\", \"What is the point of doing \\\"super-emulation\\\" if the resulting RMSEs are mostly higher than only training on the target climate model?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes a hybrid physics- and machine-learning-based climate emulator model called PACE. The model combines numerical solving of the advection-diffusion equation with convolutional deep learning components to produce prediction of surface temperature and precipitation. The model takes as input at each time greenhouse gas emissions, and is tasked with learning their impact on the atmosphere. The authors experiment with using the PACE model to emulate multiple climate models, both independently and with one learned super emulator for multiple climate models. PACE achieves lower RMSE values for most climate models compared to baselines.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. Using machine learning to learn climate emulators is a highly interesting problem with potential for large impact. 
The authors have accurately identified that there are many open questions in the area, and in particular the ability of the machine learning models to incorporate more complex forcing from greenhouse gas emissions is a key property that needs further development.\\n2. The use of the hybrid approach with the advection-diffusion equations seems well-motivated and leads to an impressively stable model. The authors show stable rollouts for 86 years.\\n3. Compared to baseline models PACE achieves lower RMSEs for single-model evaluation. Also when training PACE as a super-emulator the errors achieved are generally lower than baselines.\\n4. A valuable ablation study is performed in section 6. This study clearly shows that all parts of the model are important for the performance of the model.\", \"weaknesses\": \"1. The presentation of the model throughout section 3 is incoherent and hard to understand. Because of this it is challenging to really understand how the different parts of the model fit together and the motivation behind them. There are different variables used in the different subsections and no clarifications on how they relate. Particularly unclear is:\\n * What exactly is the shape of $F$ in section 3.3, and what does it contain specifically? How does this relate to each $u$ being modeled in section 3.1?\\n * Section 3.2.1 explains how $D$, $v_x$ and $v_y$ are initialized, but after time 0 where do these values come from? While maybe the diffusion coefficient $D$ could be considered fixed, it is unclear where $v_x$ and $v_y$ come from in eq. 6 past the initial time.\\n * The description of the deep learning components of the model in section 3.3 is unclear. The dimensions of $F$ are never given, and it is ambiguous which dimensions the different components operate over (e.g. the pooling and MLPs). These components are described as \\\"attention map/module\\\", but this is not any form of attention mechanism.\\n2. 
The authors list as one of the main contributions of the work that the model respects the spherical geometry of the earth through periodic boundary conditions. However, periodic boundary conditions as described do not treat the earth as a sphere, but as a torus. Periodic boundary conditions are reasonable in the longitude direction, but not for latitude. Consider specifically the description in section 3.2.3, where this condition is described as $f(x, y) = f(x, y +L_x)$ (assuming here latitude is $y$). If we set $y = -90^{\\\\circ}$ (the south pole) and $L_y = 180^{\\\\circ}$ (length of the domain), then this becomes $f(x, -90^{\\\\circ}) = f(x, 90^{\\\\circ})$. This means that the values at the south pole are enforced to be the same as at the north pole, which is clearly not desirable.\\n3. Listed as a main contribution of the paper, and even in the title, is the fact that the method is uncertainty aware. The uncertainty estimation presented is however very unclear. Specifically:\\n * The uncertainty estimation part of the model, presented in section 3.2.2 leaves more questions than answers, making the soundness of this approach hard to judge. There is some Gaussian process $\\\\eta$ being added to the advection-diffusion equation. What is the covariance structure of this process? \\n * The model is trained with an NLL loss, but it is entirely unclear where the $\\\\mu$ and $\\\\sigma$ going into this loss come from. Going by notation, the $\\\\sigma$ is the same as for the Gaussian process, but this would be very strange as that is not the standard deviation related to any prediction. Or should this be understood as sampling an ensemble prediction from the model, using different realizations of $\\\\eta$, and then estimating $\\\\mu$ and $\\\\sigma$ from these samples and computing the NLL loss? 
In any of these cases the Gaussian assumption seems overly simplistic, capturing only the first two moments of the distribution.\\n * The capabilities of the model to accurately capture uncertainty are never evaluated in any experiments, as only RMSE is considered. This practically nullifies this contribution, as there is nothing in the paper that supports claims that the model is uncertainty aware.\\n4. It is questionable if RMSE w.r.t. the actual state of the atmosphere (temperature and precipitation) really is particularly interesting for a climate emulator. In climate modeling we generally do not care about exactly predicting the temperature on some specific day, as we are far beyond any predictability limits. Instead we care about capturing overall statistics and trends. So while a model might have very poor RMSE for every day, it might still be a useful climate model if it accurately captures the general climate (statistics of the temperature). Other works, such as Watt-Meyer et al. (2023) consider e.g. RMSE of temporal means, which seems to be of greater interest. The RMSE used in this paper is more akin to the RMSE computation used in weather forecasting, which is a different problem.\\n\\n**Minor:**\\n1. I am missing results for the full model for comparison in figure 5.\\n2. In the abstract the authors write \\\"PACE emulates temperature and precipitation stably for 86 years while only being trained on greenhouse gas emissions data.\\\". This is inaccurate, as the model clearly uses temperature and precipitation data for training as well. \\n3. Claims in the 4th paragraph in section 1 should be backed up by references.\\n4. There are a few highly related works that I miss a proper discussion of in the related work section: (Salva R\\u00fchling, et al., 2024), (Kochkov, et al., 2024). I would like to know how the proposed method relates to and potentially differs from these.\\n5. 
For any description of the baseline models and the multihead decoder used in section 4.5 the authors rely fully on referencing ClimateSet (Kaltenborn et al., 2023). This makes the paper not feel self-contained and the experiments hard to follow without cross-checking the ClimateSet paper.\\n6. Equations should be properly typeset with non-variables in text font rather than math font, e.g. \\\\text{MLP} instead of $MLP$.\\n7. Text in plots in the paper is so small that it is nearly impossible to read.\\n\\n*References:*\\n\\n* Watt-Meyer, Oliver, et al. \\\"ACE: A fast, skillful learned global atmospheric model for climate prediction.\\\" arXiv preprint arXiv:2310.02074 (2023).\\n* Cachay, Salva R\\u00fchling, et al. \\\"Probabilistic Emulation of a Global Climate Model with Spherical DYffusion.\\\" arXiv preprint arXiv:2406.14798 (2024).\\n* Kochkov, D., Yuval, J., Langmore, I. et al. Neural general circulation models for weather and climate. Nature 632, 1060\\u20131066 (2024).\\n* Kaltenborn, Julia, et al. \\\"Climateset: A large-scale climate model dataset for machine learning.\\\" Advances in Neural Information Processing Systems 36 (2023): 21757-21792.\", \"questions\": \"1. What is the time step used in the numerical solver? What is the temporal resolution of the data?\\n\\nMy other questions are for context intertwined with the points under weaknesses listed above.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This article introduces PACE, a Physics Informed Uncertainty Aware Climate Emulator, based on a Neural ODE solver focused on the advection-diffusion problem in 2D to compute the surface air temperature and precipitation. The model uses 684K parameters whereas other emulators use 107M or 200M parameters.\\n\\nThe authors claim to introduce Gaussian noise in order to take into account uncertainty quantification. 
They also claim to use periodic boundary conditions by considering Earth as a spherical domain.\\n\\nThe authors show results demonstrating the generalisation capabilities on 15 climate models over 86 years.\\n\\nThe theoretical background is not sufficiently supported. There are many errors that lead the reader to confusion. The reader needs to know more than what is written in the manuscript.\\n\\nRelying on results from 15 climate models, the experiments show better accuracy in terms of RMSE for surface air temperature.\\n\\nThe paper should be viewed as a theoretical part that requires improvements to make it clear, and experiments that highlight improvements.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The authors show a study of the RMSE regarding surface air temperature and precipitation for a single emulator. PACE outperforms the other emulators on 13 of 15 models for temperature and 9 of 15 models for precipitation. A box plot summarizes the overall performance regarding these 15 models. PACE is interesting for surface air temperature. It also uses fewer parameters and is focused on advection-diffusion.\\n\\nAblation studies are performed with four climate models to justify the use of advection, diffusion and Neural ODE.\\n\\nThe authors make some claims on green computing. This is a good step for the environment.\", \"weaknesses\": \"Many incorrect statements lead to confusion in the manuscript. They should be corrected carefully.\\n\\nIn the abstract, the authors claim that numerical simulations are computationally intensive and inefficient. The second term is imprecise: the authors state in the conclusion that their model does not capture extreme events at the regional level, the training having coarse resolution, which means numerical simulations are still essential.\\n\\nIn part 2.2, the authors discuss the work of Chen et al. (2018), which should be adapted for a spherical mesh. 
This paragraph is unclear about whether the improvements are valid for spherical use and whether the authors want to adapt them to their context.\\n\\nEquation 1 is unclear: v and D depend on time and space.\\n\\nIn equation 3, v and D depend on x,y and t. This should be clarified.\\n\\nUncertainty quantification is claimed in the title of the paper. In the results, it is shown in figure 3 with a box plot for a whole simulation and 15 models, and in figure 4 to show the error of approximation. The uncertainty is not quantified in the remaining figures, especially in figures 1 and 8 where Gaussian noise is used.\\n\\nThe Gaussian noise hypothesis should be better justified. Does this come from a Central Limit Theorem? What uncertainties are the authors talking about?\\n\\nIn lines 217, 219 and 254, three \\\\sigma are used; I assume these are three different quantities and functions.\\n\\nIn figure 4, the discretisation difference and the error are important. The authors claim that they want to improve this later.\\n\\nIn section 3.2.1, the scheme for the concentration C should be clarified. This is very difficult to understand.\\n\\nEarth is presented as a rectangle in section 3, and then the Spherical Fourier Neural Operator and L(i) are introduced in section 4. This is crucial and should be improved so the reader understands that the spherical geometry is well considered even with the use of section 3.\\n\\nFigure 2 needs to be improved. I would use more math symbols to be consistent with the manuscript.\\n\\nReferences regarding the models which PACE is compared against would help to understand briefly what they are.\\n\\nRegarding the results of the super emulator, no reference is made to table 2. The reader has to look for the table.\", \"here_are_some_limitations\": \"the training of PACE uses simulated data, and not real observations. 
The authors want to extend their study to higher resolution and to using physical constraints (energy, mass and water conservation in the loss function).\\n\\nThe authors' claims regarding green computing are too general. Quantifying and comparing the carbon footprint and energy consumption would be appreciated.\", \"minor_comments_that_do_not_impact_the_score\": \"Why did the authors choose TAS for surface air temperature, instead of SAT?\\n\\nLine 100, Fourier is missing in Spherical Neural Operator\\n\\nAt line 134, the symbol \\\\in should be removed.\\n\\nLine 170, $t$ is necessary\\n\\nLine 183, co-efficient should be corrected\\n\\nLine 196-197, do v_x and v_y depend on x,y and t?\\n\\nLine 309, it is common to use either \\\\sum_i or \\\\sum_{i= 1}^N\\n\\nLine 316, $lat(i)$ and $i$-th should be used\\n\\nLine 440, FVM and FEM are not explained\", \"questions\": \"Why did the authors choose an 86-year forecast?\\n\\nIn table 1, in the 'training resources' column, hours are mentioned, whereas not for the other emulators; why?\\n\\nLine 54, what do the authors mean by 'compute-efficient'?\\n\\nLine 101, what do the authors mean by 'computational complexity'?\\n\\nAt line 123-124, what does continuous sequence-to-sequence mean?\\n\\nAt line 133, the notation R^{x \\\\times y} is inappropriate since x and y. Is F(t) in 2D? Does the notation \\\\mathbf{F} refer to a vector?\\n\\nAt line 137, is \\\\theta a vector or a scalar?\\n\\nCould you give a reference to justify the advection-diffusion process in part 3.2?\\n\\nIn equations 6 and 7, what is f_\\\\theta? Should it be introduced before?\\n\\nIn part 3.2.1, where does the empirical estimation of the diffusion coefficient come from?\\n\\nIn equation 8, do the authors implicitly assume the concentrations C_i are independent?\\n\\nWhere and how is the Negative Log-Likelihood used?\\n\\nAs stated in line 233, the periodic boundary conditions are imposed on latitude and longitude. 
This means that the authors are identifying the North and South poles. Could you clarify that?\\n\\nCould you clearly justify the use of equation 13, which accounts for uncertainty in the climate model?\\n\\nWhat do the authors mean by 'global spatial dependencies' at line 246? Why are AvgPool and MaxPool used? Can you give a reference or explain more?\\n\\nIn figure 2, where do the authors consider the uncertainty?\\n\\nWhat does SSP mean, at line 299?\\n\\nIn section 4.3, are the L(i) part of SFNO?\\n\\nIn section 4.5, what does 'training a single model on all of 15 climate models' mean? Is it what is stated after this sentence?\\n\\nWhy did the authors select the 15 emulators AWI-CM-1-1-MR, ... , TaiEMS1 for the benchmark rather than others?\\n\\nIn figures 6 and 7, the title is not finished ('namely :'). Why did the authors choose bars instead of a table of numbers? Especially for precipitation, the results are difficult to quantify precisely.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces a lightweight 684K parameter model called Physics informed Uncertainty Aware Climate Emulator (PACE). PACE predicts the temperature and precipitation from greenhouse gases by incorporating the fundamental physics PDE of Advection Diffusion with ML models. The paper trains the PACE model on 15 climate datasets and evaluates across several climate emulators.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper is overall well-written and easy to follow.\\n2. Predicting uncertainty estimates is especially important for climate models, and previous SOTA methods are currently not performing UQ.\\n3. The proposed PACE model is significantly lightweight compared to SOTA baselines.\", \"weaknesses\": \"1. The motivation and description of CBAM are unclear.\\n2. 
The PACE model is limited to surface air temperature and precipitation. Other climate models like ClimaX can predict a wide variety of target variables such as wind velocities, pressure, etc. \\n3. The paper does not compare against other recent SOTA climate models like FourCastNet [1] and GraphCast [2].\\n\\n[1] Pathak, Jaideep, Shashank Subramanian, Peter Harrington, Sanjeev Raja, Ashesh Chattopadhyay, Morteza Mardani, Thorsten Kurth et al. \\\"Fourcastnet: A global data-driven high-resolution weather model using adaptive fourier neural operators.\\\" arXiv preprint arXiv:2202.11214 (2022).\\n\\n[2] Lam, Remi, Alvaro Sanchez-Gonzalez, Matthew Willson, Peter Wirnsberger, Meire Fortunato, Ferran Alet, Suman Ravuri et al. \\\"GraphCast: Learning skillful medium-range global weather forecasting.\\\" arXiv preprint arXiv:2212.12794 (2022).\", \"questions\": \"1. Can the authors provide additional details on the motivation and design choices behind the CBAM layer?\\n2. While the paper proposes a novel method of predicting UQ using NLL, it lacks an evaluation of the uncertainty estimates. Computing simple metrics like log-likelihood and 95% confidence intervals would strengthen the work in my opinion.\\n3. It seems like PACE is based on the assumption that the climate is primarily driven by advection and diffusion. While it might be true on a high level, there may be more nuanced complex climate processes happening at finer scales. Can the authors comment on the limitations of modelling climate just using the advection-diffusion equation and how PACE can model these finer-grained interactions?\\n4. Line 240: Does Equation (15) start from $\\\\sin(2^i \\\\cdot t)$ or $\\\\sin(2^0 \\\\cdot t)$?\\n5. Having an overview section after Section 3.4 to summarize the PACE architecture (in Figure 2) might improve the readability of the proposed section. \\n6. Table 2 should be mentioned in the paragraph starting from line 357. 
Further, can some more justification be given on why models like SFNO and ClimaX, which were performing well on each individual dataset, are not performing well during super emulation?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents a physics-informed approach to climate modeling via incorporating a stochastic noise term for uncertainty estimation within the general advection-diffusion equation. By integrating physical priors, the method enhances benchmark metrics across a range of tasks.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"Incorporation of physical priors for climate modeling\", \"Improvements in downstream performance over various tasks\"], \"weaknesses\": [\"The paper is not well written.\", \"The paper misses out on many related works that revolve around the same idea of utilizing physics priors to model climate and weather, such as ClimODE (https://arxiv.org/abs/2404.10024), Neural GCM (https://arxiv.org/abs/2311.07222), WeatherGFT (https://arxiv.org/pdf/2405.13796), etc., and comparisons to them are missing.\", \"Since the forecasts are probabilistic (uncertainty), it is much better to use CRPS scores to compare to the ground truth as compared to RMSE\"], \"questions\": [\"Do you dynamically update the velocity predictions ($v_x$, $v_y$), or are they solely based on the initial estimates from the data?\", \"Can you clarify how you derive uncertainty estimates? It appears from Eq. 6 and 7 that you use $f_\\\\theta$ to evolve the system and obtain forecasts for the advection-diffusion equation, but then a stochastic term appears in Eq. 13. How does this fit together?\", \"Do you forecast the evolution of the forcing variables?\", \"Could you explain how you implement periodic boundary conditions? 
I found limited information in Section 3.2.3. Additionally, how did you determine the values of $L_x,L_y$ for the convolutional block?\", \"For how many time points do you unroll the trajectory to obtain the solution and forecasts, and how many time points do you use to estimate the initial velocity?\", \"There is a missing square in Eq.19.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We would like to withdraw our submission. We thank the reviewers for their time and effort.\"}" ] }
7ftRWFUVLu
A lightweight Transformer guided by features from multiple receptive fields for few-shot fine-grained image classification
[ "Weichuan Zhang", "Jiale Wang", "Tao Lei", "Zicheng Pan", "Mohammad Aminul Islam", "Yongsheng Gao", "Changming Sun" ]
Convolutional neural networks (CNNs) and vision Transformers (ViTs) play key roles in few-shot fine-grained image classification (FSFGIC). One of the main challenges of FSFGIC is how to consistently learn high-quality feature representations from different, very limited fine-grained datasets. CNNs struggle with long-range dependencies due to their inherent localized receptive fields, and ViTs might impair high-frequency information, e.g., local texture information. Furthermore, ViTs require a large number of training samples to infer feature properties such as translation invariance, locality, and the hierarchy of visual data, while FSFGIC's training samples are extremely limited. To address these problems, a new lightweight Transformer guided by features from multiple receptive fields (LT-FMRF) is proposed, which manages long-range dependencies and extracts multi-scale local features, global features, and fused features from input images to increase inter-class differences and consistently obtain high-quality feature representations from different types of limited training datasets. Furthermore, the proposed LT-FMRF can be easily embedded into a given few-shot episodic training mechanism for end-to-end training from scratch. Experimental results on five widely used FSFGIC datasets consistently show significant improvements over twenty state-of-the-art end-to-end training-based methods.
[ "Convolutional Neural Networks (CNNs)", "Vision Transformers (ViTs)", "Few-shot Learning", "End-to-end training" ]
https://openreview.net/pdf?id=7ftRWFUVLu
https://openreview.net/forum?id=7ftRWFUVLu
ICLR.cc/2025/Conference
2025
{ "note_id": [ "mE2sP5PB79", "ZMxmrdTHpv", "CwKvLIuf3I", "6d0FzEe1OK", "4ZNahRGxUx", "0yfNcY6Sqb" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "comment", "official_review" ], "note_created": [ 1730643674595, 1730429036290, 1730300826323, 1729816120357, 1731454899357, 1730265217126 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission163/Reviewer_d7nK" ], [ "ICLR.cc/2025/Conference/Submission163/Reviewer_4kAq" ], [ "ICLR.cc/2025/Conference/Submission163/Reviewer_C2Hr" ], [ "ICLR.cc/2025/Conference/Submission163/Reviewer_ocDB" ], [ "ICLR.cc/2025/Conference/Submission163/Authors" ], [ "ICLR.cc/2025/Conference/Submission163/Reviewer_3wMw" ] ], "structured_content_str": [ "{\"summary\": \"The paper focuses on developing a new lightweight transformer for few-shot fine-grained image classification task (FSFGIC). The paper modifies the traditional vision transformers from two perspectives: (1) adding multi-scale local information, (2) fuse the local, global feature for increasing inter-class differences. The paper then proposes an LF-FMRF (lightweight transformer guided by features from multiple receptive fields) module, containing a self-attention module and multiple receptive field modules to extract global, multi-scale local features, respectively. The designed network contains four modules. Experiments are conducted on 5 different FSFGIC datasets.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. The proposed method achieves consistent improvements in diverse FSFGIC datasets.\\n2. The ablation results verify the effectiveness of the proposed module.\", \"weaknesses\": \"1. Lack of novelty. The MRF module is the only new module in the proposed LT-FMRF Module, as self-attention based feature tensors can be obtained by the original ViT. Besides, adding multi-scale local features into the ViT is not a new idea.\\n2. The presentation is poor. 
The authors merely list the detailed steps of how the network extracts features, rather than presenting them in an organized way.\", \"questions\": \"Please point out and emphasize the difference between the MRF module in this paper and the multi-scale local feature extraction modules in other papers.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper addresses the Few-Shot Fine-Grained Image Classification (FSFGIC) task, which involves utilizing limited training samples to accurately classify images into fine-grained categories. To tackle the challenges of long-range dependency management in CNNs and high-frequency information impairment in vision Transformers (ViTs), the paper proposes an innovative LT-FMRF framework. This framework uses a Convolutional Neural Network (CNN) module to capture local feature information and a self-attention mechanism to handle long-range dependencies. A multi-scale feature fusion strategy is employed to extract global and local features from different receptive fields, and element-wise addition fuses these features to enhance the representation of fine-grained categories. Additionally, end-to-end training is supported, allowing for efficient learning from scratch with limited data.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"\\u2460 Extensive experiments on five benchmark datasets demonstrate that LT-FMRF outperforms several existing few-shot learning methods in both 5-way 1-shot and 5-way 5-shot tasks. The results highlight the proposed method's broad applicability and robustness across diverse datasets.\\n\\u2461 The use of various feature fusion techniques effectively amplifies inter-class differences, reducing confusion between classes. This is particularly crucial for fine-grained image classification, which often deals with small inter-class variance. 
The fusion of multi-scale features significantly contributes to improving classification accuracy.\", \"weaknesses\": \"\\u2460 The LT-FMRF primarily combines existing Transformers with multi-scale receptive\\nfield features. While this fusion method introduces some changes in architecture, it does not present fundamentally new technical approaches conceptually.\\n\\u2461 Although this method is termed a \\\"lightweight\\\" Transformer, the paper does not provide detailed explanations of its advantages in terms of parameter count, computational complexity, and inference speed compared to other models. Particularly in practical applications, lightweight typically means a significant reduction in computational and storage costs, and this aspect lacks detailed comparisons and empirical evidence.\", \"questions\": \"\\u2460 Regarding the authors' mention that deeper networks do not necessarily lead to better performance in few-shot fine-grained image classification (FSFGIC), could you further explain the reasons behind this? Specifically, does the performance decline of deep networks result from overfitting, or is it due to the large number of parameters that cannot be effectively optimized under few-shot conditions?\\n\\u2461 In the article, you mentioned that self-attention features (SA-FTs) are used to capture long-range dependencies in images, but you also noted that they may not effectively capture local textures. Have you considered modifying the design of the self-attention mechanism to enhance the handling of local information? 
For example, could you integrate the self-attention module more deeply with convolutional operations or introduce more complex hybrid mechanisms to address this limitation?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"In this paper, a lightweight Transformer guided by features from multiple receptive fields is proposed for FSFGIC. The designed LT-FMRF has the capability to manage long-range dependencies and take advantage of different frequency bands. Furthermore, it can be embedded into any given few-shot episodic training mechanisms.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"The proposed method effectively addresses the issues of long-range dependencies and the insufficiency of few-shot data.\", \"weaknesses\": \"1. The entire model is divided into multiple modules and is quite complex, so a high-level overview of how the modules connect and interact may be beneficial.\\n2. The paragraph that begins with \\\"The designed LT-FMRF network contains four modules.\\\" has an issue with the expression. The first sentence of this paragraph is somewhat abrupt and does not serve as a good transition. Perhaps the first and second sentences of this paragraph could be swapped while maintaining their original meaning.\", \"questions\": \"1. In the section on Related Work, it is advised to discuss the key limitations of 1-2 key related works.\\n2. The paper mentions, \\\"The purpose of this is to maintain convolutional inductive bias, reduce the number of training samples required for ViT training itself, and have the capability to deal with long-range dependencies properly.\\\" However, what is the principle behind this? 
Could you elaborate further?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents a lightweight module called LT-FMRF, which uses convolutional layers with varying kernel sizes to manage long-range dependencies and extract multi-scale local features, global features, and fused features from input images. This approach aims to enhance inter-class differences and achieve high-quality feature representations. Experimental results across five datasets demonstrate its effectiveness.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"1.The paper is easy to follow.\\n\\n2.The experiment results looks good but it is not clear whether the pre-trained models were used or not.\", \"weaknesses\": \"1.This paper designs a very simple lightweight module, LT-FMRF, which uses convolutional layers with multiple kernel sizes to smooth initial features. However, using convolutional kernels of varying sizes to capture multi-scale features is a well-established approach (such as [1-4] and so on) that has been extensively explored in numerous models and visual tasks. Additionally, ResNet-12 and Conv-4 serve as the basic backbones for this task, and all compared methods utilize these architectures. As a result, the paper lacks novelty and significant contributions.\\n\\n[1] Jiang W, Huang K, Geng J, et al. Multi-scale metric learning for few-shot learning[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2020, 31(3): 1091-1102.\\n\\n[2] Han M, Wang R, Yang J, et al. Multi-scale feature network for few-shot learning[J]. Multimedia Tools and Applications, 2020, 79: 11617-11637.\\n\\n[3] Askari F, Fateh A, Mohammadi M R. Enhancing Few-Shot Image Classification through Learnable Multi-Scale Embedding and Attention Mechanisms[J]. 
arXiv preprint arXiv:2409.07989, 2024.\\n\\n[4] Zhang Y, Sidib\\u00e9 D, Morel O, et al. Multiscale attention-based prototypical network for few-shot semantic segmentation[C]//2020 25th International Conference on Pattern Recognition (ICPR). IEEE, 2021: 7372-7378.\\n\\n\\n\\n2.The author does not provide incremental experimental results for the proposed module, leaving it unclear how its accuracy compares to other existing public models. Consequently, the effectiveness of the proposed module cannot be truly verified.\\n\\n3.The author claims that the LT-FMRF module enhances inter-class differences; however, they provide no factual evidence, quantitative results, or any analysis to support this assertion.\\n\\n4. The paper lacks sufficient internal ablation studies (components of convolution layer of LT-FMRF) to truly validate the effectiveness of the proposed module LT-FMRF.\\n\\n5.Figure 3 compares the training process of this method with a very low-performing approach of FRN (with an accuracy difference of over 10%), which does not provide meaningful insights. A comparison should be made with similar or incremental models instead. In addition, the heatmap in Figure 2 is unconvincing in its comparison with FRN. A more effective approach would be to compare the LRF method with the LRF + LT-FMRF method, rather than contrasting the proposed method with a significantly lower-accuracy baseline.\\n\\n6.Table 5 presents experiments on the number of LT-FMRF modules but does not analyze why using more modules results in better performance for 1-shot results while leading to lower accuracy for 5-shot results. 
Furthermore, the paper contains several grammatical errors.\", \"questions\": \"Overall, this paper shows limited innovation and contribution, insufficient experimentation, and lacks theoretical analysis to support the authors' claims.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"The paper introduces a lightweight Transformer model, called LT-FMRF, designed to improve FSFGIC. By combining features from multiple receptive fields with self-attention, LT-FMRF aims to handle both local and global dependencies, which helps address the challenge of limited training data typical in few-shot scenarios. Experimental results across five benchmark datasets show that LT-FMRF achieves better accuracy than several state-of-the-art methods.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"1\", \"strengths\": \"The paper is well-organized, presenting a clear problem statement and relevant background on FSFGIC. The experimental results and tables are well-structured, aiding in understanding the model's comparative strengths. Some technical details in the LT-FMRF architecture and fusion mechanisms may require additional clarification, but the paper generally succeeds in communicating the key ideas and findings.\", \"weaknesses\": \"In general, the motivation for this submission is easy to understand, but the novelty is limited. In addition, there are still several weaknesses, as follows:\\n1. The proposed method largely combines existing techniques, such as integrating CNN and Transformer layers, multiple receptive fields, and self-attention mechanisms. For similarity measurement, it uses feature reconstruction, as in FRN. 
While these choices are effective, the approach introduces limited novelty in both network architecture and similarity measurement compared to recent state-of-the-art methods, especially those ViTs-based methods specifically designed for FSL.\\n\\n2. As a researcher in FSL, I find the motivations behind this submission unclear. The paper proposes a lightweight Transformer model for few-shot fine-grained image classification (FSFGIC), aiming to learn high-quality feature representations from limited data. However, this goal aligns with general few-shot learning objectives and does not address the unique characteristics of fine-grained images. In particular, the design lacks specific adaptations for fine-grained classification. Highlighting how the proposed modules differ from previous designs specifically in handling fine-grained distinctions would strengthen the paper.\\n\\n3. The paper explains its multi-receptive field and self-attention approaches for enhancing feature extraction, but it lacks a reasonable explanation for why these specific architectural choices (e.g., using exactly three receptive fields or this particular combination of CNN and Transformer features) are optimal for the task.\\n\\n4. While the authors conduct several ablation studies, these mostly focus on combinations of self-attention and receptive field features. Other aspects, such as the impact of different receptive field sizes, the effect of hyperparameter tuning on stability and accuracy, and the scalability across additional datasets, are not covered in depth.\\n\\n5. While the model is termed \\\"lightweight,\\\" the paper lacks concrete benchmarks on computational efficiency (e.g., inference time, FLOPs, or memory usage). More comparisons on computational resources would strengthen the argument that LT-FMRF is more efficient than CNNs-based or ViTs-based methods. \\n\\n6. 
Some architectural details, particularly within the LT-FMRF module and fusion mechanisms, are described in a way that is challenging for readers to understand. In addition, it would be beneficial to include more detailed visualizations, such as feature maps at different stages or t-SNE visualizations over training epochs, to give readers more insight into the model's behavior.\", \"questions\": \"Please refer to the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
7fjYy3TOPM
Zero-shot Object-level Out-of-distribution Detection with Context-aware Inpainting
[ "Quang-Huy Nguyen", "Jin Peng Zhou", "Zhenzhen Liu", "Khanh-Huyen Bui", "Kilian Q Weinberger", "Wei-Lun Chao", "Dung D. Le" ]
Detecting when an object detector predicts wrongly, for example, misrecognizing an out-of-distribution (OOD) unseen object as a seen one, is crucial to ensure the model’s trustworthiness. Modern object detectors are known to be overly confident, making it hard to rely solely on their responses to detect error cases. We therefore investigate the use of an auxiliary model to the rescue. Specifically, we leverage an off-the-shelf text-to-image generative model (e.g., Stable Diffusion), whose training objective is different from that of discriminative models. We surmise such a discrepancy would allow us to use their inconsistency as an error indicator. Concretely, given a detected object box and the predicted class label, we perform class-conditioned inpainting on the box-removed image. When the predicted object label is incorrect, the inpainted image is doomed to deviate from the original one, making the reconstruction error an effective recognition error indicator, especially on misclassified OOD samples. Extensive experiments demonstrate that our approach consistently outperforms prior zero-shot and non-zero-shot OOD detection approaches.
[ "out-of-distribution detection", "zero-shot", "generative model" ]
Reject
https://openreview.net/pdf?id=7fjYy3TOPM
https://openreview.net/forum?id=7fjYy3TOPM
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zb5tZE3COG", "yASMrFuNOV", "sqXjhoDtfW", "l7dvfHpTQq", "j5noPtO4vQ", "THIPRi4r2U", "Q6kXXD7nuP", "PMP0Jm1czl", "Jj9MXzCrOx", "JE88fnKlO6", "Gg1J7oNstu", "F1ZeyFjuyZ", "EGbbMolasd", "8Q7BR4L15m", "7qhvxv7bTQ", "4elWDg9jcU", "0fZBbDhw5K" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "decision", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730573277681, 1732727093125, 1732279303689, 1730369984606, 1732318631632, 1730536635830, 1732279542030, 1732318997818, 1732676278527, 1732536604778, 1730066413626, 1737524112752, 1732441659415, 1734386177600, 1732279477450, 1732279732711, 1732677450615 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11240/Reviewer_oHFd" ], [ "ICLR.cc/2025/Conference/Submission11240/Authors" ], [ "ICLR.cc/2025/Conference/Submission11240/Authors" ], [ "ICLR.cc/2025/Conference/Submission11240/Reviewer_os8e" ], [ "ICLR.cc/2025/Conference/Submission11240/Authors" ], [ "ICLR.cc/2025/Conference/Submission11240/Reviewer_9VVi" ], [ "ICLR.cc/2025/Conference/Submission11240/Authors" ], [ "ICLR.cc/2025/Conference/Submission11240/Authors" ], [ "ICLR.cc/2025/Conference/Submission11240/Reviewer_9VVi" ], [ "ICLR.cc/2025/Conference/Submission11240/Reviewer_h9zL" ], [ "ICLR.cc/2025/Conference/Submission11240/Reviewer_h9zL" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11240/Reviewer_os8e" ], [ "ICLR.cc/2025/Conference/Submission11240/Area_Chair_VuDe" ], [ "ICLR.cc/2025/Conference/Submission11240/Authors" ], [ "ICLR.cc/2025/Conference/Submission11240/Authors" ], [ "ICLR.cc/2025/Conference/Submission11240/Reviewer_oHFd" ] ], "structured_content_str": [ "{\"summary\": \"This paper presents RONIN for object-level out-of-distribution (OOD) detection. 
Given a detected object box and its predicted class label, RONIN first performs class-conditioned inpainting on the image with the object box removed. It then compares the similarities between the original detected objects and their inpainted versions to identify OOD instances.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. The concept of using inpainting and comparing similarity scores for OOD detection is both reasonable and straightforward.\\n2. The proposed method outperforms zero-shot baseline methods by noticeable margins.\", \"weaknesses\": \"1. The technical contribution is limited. The proposed method simply combines image inpainting with similarity comparison, both of which are straightforward and well-established techniques. Overall, the original technical contribution is insufficient.\\n2. The paper lacks comparisons with recent methods. The latest method cited in Table 1 is from 2022. Comparisons with more recent approaches, such as [1] and [2], are expected.\\n3. It seems from Table 4 the results are highly sensitive to the choice of $\\\\alpha$ and $\\\\beta$, suggesting that careful parameter fine-tuning may be required.\\n\\n[1] Li et al. \\\"Learning Transferable Negative Prompts for Out-of-Distribution Detection.\\\" CVPR 24\\n\\n[2] Wang et al. \\\"CLIPN for Zero-Shot OOD Detection: Teaching CLIP to Say No.\\\" ICCV 23\", \"questions\": \"1. The authors did not conduct experiments on ImageNet-1K, a commonly used dataset for out-of-distribution (OOD) detection. Could the authors provide results on this dataset as well?\\n2. Instead of performing image inpainting, would generating a complete image of the predicted category using a pre-trained diffusion model work? Can the authors include comparisons with this simple baseline?\\n3. In Line 261, it is mentioned that \\\"one can use a thresholding method to identify OOD objects effectively\\\". 
What is the threshold used in the experiments?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"General response on a revised manuscript\", \"comment\": \"We sincerely thank all reviewers for their thoughtful comments and feedback. Based on their suggestions, we have revised our manuscript to address most, if not all, of their major concerns. The revised sections are highlighted in blue for clarity. Additionally, we will continue to address any remaining concerns promptly during the rebuttal period. If there are any remaining concerns that are not addressed, please let us know.\"}", "{\"title\": \"Response to Reviewer h9zL (1/2)\", \"comment\": \"Thank you for your detailed and constructive feedback. Below, we respond to most, if not all, of your comments at this stage. We apologize in advance if the response is long.\\n\\n## W1: Limited in technical novelty\\nThank you for recognizing the novelty of our framework for OOD detection based on image inpainting. We agree that, having a more sophisticated or dedicated integration of inpainting and classification models together would make our method appear even more novel and appealing. However, we kindly think that, by doing so, inadvertently degrades the generality of our framework. Overall, our goal is to develop a zero-shot approach where newly released conditional generative models and object detectors can be easily plug-and-play without any extra modification efforts. We respectfully believe keeping our framework simple is actually a strength, not a weakness.\\n\\n## W2: Potential weaknesses using inpainting\", \"we_would_like_to_discuss_the_weaknesses_as_follows\": \"#### **W2.1. Computation Time**\\n\\nThank you for raising this concern. We would like to note that it is often possible to construct shorter diffusion schedules that maintain comparable performance. 
Also, RONIN does not need extremely high-quality inpaintings, but rather good enough ones that make ID and OOD separable. Below we compare RONIN's performance using the original 20-step inpainting schedule vs. an optimized 2-step schedule. The 2-step schedule preserves most of the performance while running in near real-time. Finally, we respectfully emphasize that object detection is crucial in both online and offline systems, and certain application tasks favor accuracy over runtime. We will further clarify and elaborate on the computational time and resources in the final version of our paper.\\n\\n**OOD Detection Performance**\\n\\n| | RONIN with 20 steps (original schedule) | RONIN with 2 steps (improved schedule) |\\n|-------------------|----------------|----------------|\\n| | FPR@95 / AuROC | FPR@95 / AuROC |\\n| VOC - COCO | 27.42 / 91.09 | 37.11 / 87.54 | \\n| VOC - OpenImages | 18.04 / 93.34 | 28.48 / 90.52 | \\n| BDD - COCO | 30.16 / 92.77 | 28.57 / 92.98 | \\n| BDD - OpenImages | 30.00 / 91.60 | 31.11 / 91.37 | \\n\\n**Runtime in Seconds per Object**\\n\\n|ID-OOD setting|RONIN with 20 steps |RONIN with 2 steps |\\n|-|-|-|\\n|VOC-COCO|0.35|0.034|\\n|VOC-OpenImages|0.42|0.041|\\n|BDD-COCO|0.19|0.018|\\n|BDD-OpenImages|0.18|0.017|\\n\\n#### **W2.2. Near-OOD performance.**\\nThank you for pointing this out. We recognize that near-OOD is indeed a very challenging case for standard OOD detection algorithms. We indeed conducted a study about near-OOD in the appendix of our original paper, in particular, Appendix F (Line 820 - 858) of the original manuscript. What we found is that using off-the-shelf generative models can actually be effective for near-OOD detection. Let\u2019s say we have a complicated input: a photo containing the OOD object *\u201czebra\u201d*, incorrectly predicted as the ID label *\u201chorse\u201d*. 
Because the two classes are really similar, if we just inpaint the zebra to become a horse, it may result in something quite similar to the original zebra, making near-OOD detection ineffective. Instead, we can anticipate several near-OOD classes of the ID label. So, even though the ID class is a horse, we have already roughly guessed what the near-OOD classes are, including zebra. Then, based on that, we deliberately condition the diffusion model to generate a horse that does not look like any near-OOD class, including zebra. This makes our inpainted results even more aligned with the predicted ID label, and thereby makes our framework more robust to near-OOD samples. \\n\\nBased on your suggestion, we are investigating near-OOD detection further by extending what we conducted in our original appendix. In particular, we specifically look into image samples that contain objects strongly similar to the ID labels in the VOC-COCO setting, simulating near-OOD samples for the PascalVOC object detector. We observe that across these samples, our framework RONIN is able to handle most of these cases effectively for near-OOD detection. We will clarify this in detail in the main paper.\\n\\n#### **W2.3. Noise initialization for inpainting.**\\nThank you for your comment. We run RONIN with 5 random starting noises and compute the overall OOD measurements FPR@95 and AuROC across all samples. 
The small standard deviation suggests that RONIN is not considerably affected by the noise initialization of the diffusion model.\\n\\n| | FPR@95 | AuROC |\\n|------------------|----------------|---------------|\\n| VOC - COCO | 28.12% \\u00b1 1.38 | 91.12% \\u00b1 0.24 |\\n| VOC - OpenImages | 12.61% \\u00b1 1.04 | 92.99% \\u00b1 0.13 |\\n| BDD - COCO | 29.90% \\u00b1 0.62 | 92.23% \\u00b1 0.29 |\\n| BDD - OpenImages | 32.22% \\u00b1 3.06 | 90.40% \\u00b1 0.81 |\"}", "{\"summary\": \"This paper proposes a new object-level OOD detection method called RONIN, which leverages an off-the-shelf diffusion model to replace detected objects with inpainting. RONIN adopts a context-aware class-wise inpainting method and combinations of similarity assessment methods. Experimental results show that the proposed method outperforms existing methods in three of four settings on Pascal-VOC and BDD-100k.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"S1. Using a diffusion inpainting strategy for object-level OOD detection is novel.\", \"S2. The proposed method outperforms existing methods in three of four settings.\", \"S3. This paper is well written.\"], \"weaknesses\": [\"W1. The inference times of comparison methods are not reported. I am concerned that this method may have a longer inference time than the comparison methods. Since the actual application of object-level OOD detection is in areas that require real-time inference, such as autonomous driving, I believe it is necessary to accurately report the inference time of comparison methods.\", \"W2. I wonder why RONIN\\u2019s performance is lower than that of the comparison methods in BDD-100k vs. OpenImages. It might be better to include the reason for this result. 
This is because BDD-100k is a driving dataset and is closer to real-world applications than VOC.\"], \"questions\": \"I would like to know the inference time of the comparison methods.\\n\\nI also wonder why RONIN\\u2019s performance is lower than that of the comparison methods in BDD-100k vs. OpenImages.\\n\\n\\nI am willing to raise the score, taking into account discussions with the author and the opinions of other reviewers.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your constructive feedback. Please find our responses below:\\n\\n## W1: Technical contribution\\nWe respectfully believe that our method is neither limited nor insufficient. We would like to emphasize that our work proposes a *novel OOD detection framework* based on off-the-shelf generative models. Our framework is intentionally designed to be modular, keeping the object detector and the off-the-shelf generative model intact, while investigating how to combine them effectively for performing OOD detection in a zero-shot manner. By doing so, the framework can easily generalize to future object detectors and newly released generative models with minimal modification effort. We believe that prioritizing *simplicity* and *generalizability* enhances the impact and applicability of our work, rather than diminishing its contribution.\\n\\n## W2: Additional comparison with other baseline methods\\nThank you for your suggestion. 
Below we present additional experiments with a newer baseline, CLIPN [1], across four settings, where our framework RONIN still outperforms the baseline.\\n\\n| | CLIPN | RONIN |\\n|-------------------|----------------|----------------|\\n| | FPR@95 / AuROC | FPR@95 / AuROC |\\n| VOC - COCO | 43.09 / 85.45 | 27.42 / 91.09 |\\n| VOC - OpenImages | 41.74 / 89.31 | 18.04 / 93.34 |\\n| BDD - COCO | 28.89 / 92.14 | 30.16 / 92.77 |\\n| BDD - OpenImages | 44.76 / 85.78 | 30.00 / 91.60 |\\n\\n## W3: The sensitivity of $\\\\alpha$ and $\\\\beta$ in Table 4\\n\\nThank you for your comment. We would like to clarify that performance fluctuations mainly occur when certain similarity measurements are entirely excluded (rows 1-2 of Table 4). However, when all similarity measurements are included, the performance is more consistent (rows 3-6 of Table 4). We recognize that varying the values of $\\\\alpha$ and $\\\\beta$ can lead to performance differences. This can in fact be beneficial, providing flexibility for hyperparameter tuning to meet specific practical needs. Our empirical results indicate that $\\\\beta = 1$ and $\\\\alpha = 2$ perform robustly across diverse datasets, making them reliable default choices.\\n\\n## Q1. The experiments on ImageNet-1K\\nThank you for your suggestion. We would like to clarify that ImageNet-1K is commonly used for *image-level* OOD detection (e.g. for image classification tasks), rather than *object-level* tasks, such as object detection. Our paper investigates the object-level OOD problem, and the four datasets we experiment on follow the existing object-level works [2,3]. \\n\\n## Q2. The simple baseline of generating an image instead of inpainting an image\\nThank you for your suggestion. In fact, our implementation of the KNN baseline is similar to the suggested simple baseline. 
For KNN, we first construct a dataset by directly generating images of the ID classes using Stable Diffusion, and at inference time, we compare the detected objects to images in the generated dataset. Results in Table 1 show that RONIN consistently outperforms this approach.\\n\\n## Q3. Clarification on the choice of threshold\\nThank you for pointing this out. We would like to clarify that in our evaluation, AuROC is calculated as the area under the curve of TPR vs. FPR at all possible decision thresholds; FPR@95TPR is calculated at the threshold where in-distribution samples achieve a 95% true positive rate. In practical use, the threshold can be selected based on criteria such as achieving a 95% TPR.\\n\\n\\n[1] Wang et al. \\u201cClipn for zero-shot ood detection: Teaching clip to say no.\\u201d ICCV 2023.\\n\\n[2] Du et al. \\u201cVOS: Learning What You Don't Know by Virtual Outlier Synthesis.\\u201d ICLR 2022.\\n\\n[3] Du et al. \\u201cSIREN: Shaping Representations for Detecting Out-of-Distribution Objects.\\u201d NeurIPS 2022.\"}", "{\"summary\": \"This paper introduces a zero-shot object-level out-of-distribution detection method using context-aware inpainting to improve the model's ability to identify unseen objects. By leveraging generative models for class-conditioned inpainting, the differences between detected and original objects serve as indicators of recognition errors. Experiments demonstrate that this approach outperforms existing zero-shot and non-zero-shot OOD detection methods.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. Proposes a novel approach by integrating generative models for OOD detection, offering a solution distinct from traditional methods.\\n2. Supported by data, showing exceptional performance across multiple benchmark datasets.\\n3. The method does not require modifications to pre-trained models, facilitating integration into existing systems.\", \"weaknesses\": \"1. 
High reliance on generative models may be limiting, especially if the models are inaccurate or under-trained.\\n2. The need for extensive image generation and comparison could lead to high resource consumption.\\n3. Further validation on a variety of datasets is needed to ensure broad applicability and robustness.\", \"questions\": \"Please refer to the Weaknesses box.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer os8e\", \"comment\": \"We thank you for your valuable feedback to further improve our manuscript. Below, we respond to most, if not all, of your comments at this stage.\\n\\n## W1: Inference time comparison\\nThank you for your suggestion. We have just come up with a new diffusion inpainting schedule, allowing our framework RONIN to run 10 times faster with only 2 denoising steps, while still approximately maintaining performance compared to our official results in the manuscript. Although the diffusion model can be costly, this new schedule allows RONIN to perform in near real-time. Furthermore, as discussed in Lines 042-048 and 474-476 of the original manuscript, we believe RONIN can excel in both online and offline object detection tasks, especially in scenarios where accuracy is favored over speed. Finally, with the simple and generalized framework, we believe newly released one-step diffusion models with real-time performance can easily be plugged into our framework to make it more effective and efficient. 
We believe that RONIN can eventually perform on par with other baselines in terms of inference time.\\n\\n**Runtime in Seconds per Object**\\n\\n|ID-OOD setting|RONIN with 20 steps (original schedule)|RONIN with 2 steps (improved schedule)|\\n|-|-|-|\\n|VOC-COCO|0.35|0.034|\\n|VOC-OpenImages|0.42|0.041|\\n|BDD-COCO|0.19|0.018|\\n|BDD-OpenImages|0.18|0.017|\\n\\n**OOD Detection Performance**\\n\\n| | RONIN (20 steps) | RONIN (2 steps) | \\n|-------------------|----------------|----------------|\\n| | FPR@95 / AuROC | FPR@95 / AuROC |\\n| VOC - COCO | 27.42 / 91.09 | 37.11 / 87.54 | \\n| VOC - OpenImages | 18.04 / 93.34 | 28.48 / 90.52 | \\n| BDD - COCO | 30.16 / 92.77 | 28.57 / 92.98 | \\n| BDD - OpenImages | 30.00 / 91.60 | 31.11 / 91.37 | \\n\\n\\n## W2: Performance clarification on BDD vs OpenImages\\nThank you for pointing that out; we would like to clarify the performance as follows. In the main Table 1, across the four OOD settings, our proposed framework RONIN is the only method that performs competitively and consistently, while the other baselines fluctuate quite a bit. On the BDD vs. OpenImages setting, RONIN achieves 30.00 FPR@95TPR and 91.60 AuROC. Although this is not a state-of-the-art performance, these numbers are still strong enough to place RONIN among the top methods. On the other hand, the BDD and OpenImages datasets have significantly distinct distributions. Under a very effective visual embedding, such as SimCLR, metric-based learning methods like KNN or Mahalanobis are extremely robust and achieve peak performance. Despite that, they still underperform RONIN in other scenarios.\\n\\nAgain, we appreciate your valuable feedback on improving our study. Please also kindly give us a chance to clarify any remaining unclear points.\"}", "{\"title\": \"General Response\", \"comment\": \"We thank all the reviewers for their time and valuable comments. 
We appreciate that the reviewers found our approach novel (reviewer 9VVi, os8e), our performance strong (reviewer oHFd, 9VVi, os8e), and our paper well-written (reviewer os8e, h9zL). Below, we address some commonly raised concerns:\\n\\n**[Technical Contribution]** We recognize that there are differing opinions on the novelty of our proposed method. We would like to emphasize that our work proposes a *novel OOD detection framework* based on off-the-shelf generative models. Our framework is intentionally designed to be modular, keeping the object detector and the off-the-shelf generative model intact, while investigating how to combine them effectively for performing OOD detection in a zero-shot manner. By doing so, the framework can easily generalize to future object detectors and newly released generative models with minimal modification effort. We believe that prioritizing *simplicity* and *generalizability* enhances the impact and applicability of our work, rather than diminishing its contribution.\\n\\n**[Runtime]** We report RONIN\\u2019s runtime in seconds per object. Additionally, we note that it is often possible to construct shorter diffusion schedules that maintain comparable performance. 
To illustrate the speed-performance tradeoff, we also present RONIN\\u2019s results using a 2-step inpainting schedule.\\n\\n**Runtime in Seconds per Object**\\n\\n|ID-OOD setting|RONIN with 20 steps (original schedule)|RONIN with 2 steps (improved schedule)|\\n|-|-|-|\\n|VOC-COCO|0.35|0.034|\\n|VOC-OpenImages|0.42|0.041|\\n|BDD-COCO|0.19|0.018|\\n|BDD-OpenImages|0.18|0.017|\\n\\n**OOD Detection Performance**\\n\\n| | RONIN (20 steps) | RONIN (2 steps) | \\n|-------------------|----------------|----------------|\\n| | FPR@95 / AuROC | FPR@95 / AuROC |\\n| VOC - COCO | 27.42 / 91.09 | 37.11 / 87.54 | \\n| VOC - OpenImages | 18.04 / 93.34 | 28.48 / 90.52 | \\n| BDD - COCO | 30.16 / 92.77 | 28.57 / 92.98 | \\n| BDD - OpenImages | 30.00 / 91.60 | 31.11 / 91.37 | \\n\\n\\n**[Comparison with More Recent Methods]** We thank the reviewers for suggesting comparisons with more recent baselines. Below, we report a comparison with CLIPN [1]. Results show that RONIN consistently outperforms CLIPN.\\n\\n| | CLIPN | RONIN |\\n|-------------------|----------------|----------------|\\n| | FPR@95 / AuROC | FPR@95 / AuROC |\\n| VOC - COCO | 43.09 / 85.45 | 27.42 / 91.09 |\\n| VOC - OpenImages | 41.74 / 89.31 | 18.04 / 93.34 |\\n| BDD - COCO | 28.89 / 92.14 | 30.16 / 92.77 |\\n| BDD - OpenImages | 44.76 / 85.78 | 30.00 / 91.60 |\\n\\nThanks again for all the insightful comments and suggestions. Please do not hesitate to let us know if there are additional concerns. We will do our best to address them in the remaining rebuttal period.\\n\\n[1] Wang et al. \\\"Clipn for zero-shot ood detection: Teaching clip to say no.\\\" ICCV 2023.\"}
Considering the originality and quality of this article, I decided to maintain my rating as \\u201c5: marginally below the acceptance threshold.\\u201d\"}", "{\"comment\": \"I would like to thank the authors for providing responses to my questions. I found that most of the reviewers had similar concerns, including lack of technical novelty, concern about inference time, and lack of comparative experiments and analysis. I think the authors' responses addressed these questions partially, but not completely. More specifically,\\n\\n**W1**. I am satisfied with the authors' comment. Although I raised the lack of algorithmic novelty as a weakness, I would not put much emphasis on this point because I agree with the simplicity and interest of the idea, as I mentioned in the Strengths section.\\n\\n**W2**. After reading the authors' response, I think the computation time of the proposed method is a clear weakness; as Reviewer os8e also said, this cannot be ruled out due to the lack of comparison of inference times.\\nIn addition, the few-step version causes a clear drop in accuracy, and it cannot be determined at this stage that the proposed method achieves a good trade-off.\\n\\n**W3**. The authors have provided additional comparative experiments with one of the methods I (and Reviewer oHFd) have mentioned. However, the comparisons with the other three methods remain unknown.\\n\\nRegarding **W4** and **W5**, I cannot make any comments at this stage because no clear justification has been provided.\\n\\nAs above, the rebuttal has not been very convincing so far. But I still feel that the idea of detecting OOD by the similarity of images before and after inpainting is interesting, so I would retain my original rating.\"}
Since object detectors are often overconfident, the OOD scores computed using only the confidence values of object detection are often unreliable.\\n\\nThe proposed method uses text-based image inpainting with Stable Diffusion: it first inputs to Stable Diffusion the original image, in which the regions of the detected bounding boxes are masked, together with the name of the detected object class, to obtain a synthesized image in which the masked regions are inpainted. OOD detection is performed based on the similarity of the embeddings between the inpainted and the original images.\\n\\nExperiments using PascalVOC and BDD100k as ID datasets and MS-COCO and OpenImages as OOD datasets show the superiority of the proposed method over several existing OOD detection methods.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The idea of detecting OODs by the similarity of the images before and after inpainting is general, simple and interesting.\", \"The method of achieving inpainting that preserves the original context associated with the OOD classes by masking the areas that are r% smaller than the bounding box is also simple and interesting.\", \"The experiments and analysis are generally comprehensive, although there are some serious shortcomings, as discussed below. Although somewhat artificial, the analysis to improve misclassification of ID classes is also interesting.\", \"The paper is generally well organized and easy to follow.\"], \"weaknesses\": \"**W1.** The idea of using image inpainting for OOD detection is novel in itself, but its technical novelty is somewhat limited because it is basically a combination of existing pre-trained image inpainting and classifiers.\\n\\n\\n**W2.** OOD detection using image inpainting, while interesting, yields several weaknesses.\\n\\n*W2-1.* As the authors themselves discuss, high-quality inpainting is computationally time-consuming, as inference also takes a long time for each 
image. This can be a drawback since object detection has many applications where processing time is crucial.\\n\\n*W2-2.* If the ID class and OOD class are close, image inpainting with clear class distinctions and precise similarity evaluation may become difficult, resulting in less accurate OOD detection. It would be good to evaluate detection performance on datasets that include fine-grained categories and attributes, for example, LVIS and FG-OVD. This would also be relevant to the discussion around Line 076. It would also be good to discuss the object categories that are difficult to inpaint and their impact on performance.\\n\\n*W2-3.* The results of inpainting are dependent on the initial noise, which can lead to large variations in OOD detection performance. Discussion on this point would be appreciated.\\n\\n\\n**W3.** Comparisons with existing methods are somewhat lacking. In this paper, the problem is defined as the task of binary classification of each detected object bbox as to whether it is ID or OOD. According to the task definition, a simple baseline would be to apply a zero-shot OOD detection method to each bbox. In particular, the proposed method uses CLIP, but there are other CLIP-based zero-shot OOD detection methods (e.g., CLIPN [Wang et al., ICCV'23]) and few-shot OOD detection methods (e.g., LoCoOp [Miyai et al., NeurIPS'23] and NegPrompt [Li et al., CVPR'24]). Comparisons with these methods should be included.\\n\\n\\n**W4.** The analysis presented in Table 4 is not exhaustive and does not fully support the validity of the triplet similarity in Eq. 4. In general, adding more hyperparameters should help improve accuracy. To demonstrate the validity of Equation 4, it is necessary to show exhaustive results for sensitivity to alpha and beta, with similarity(ori, \\\\hat{y}) and similarity(ori, inp) each showing performance improvement over a wide range of alpha and beta.\\n\\n\\n**W5.** Modern object detectors are often open-vocabulary. 
Demonstrating that the proposed method is effective even in open-vocabulary scenarios is preferable.\\n\\n\\n**W6.** Other minor points\\n\\n- RONIN (the name of the proposed method) first appears in the caption of Fig. 1 and Line 059 in the introduction section, but without saying what it stands for. The definition should be given.\\n\\n- The bold b's appearing in Lines 119 and 120 should be italic.\\n\\n- Fig. 1 and Fig. 2 have considerable overlap in information and could be combined into one.\", \"questions\": \"Since the use of image inpainting is the most important idea of the proposed method, its strengths and weaknesses should be fully discussed. In this regard, I would expect thorough responses on **W2**. I would also like to see answers to the lack of comparison with existing methods (**W3**) and the lack of analysis (**W4**), as these are critical issues to clarify the validity and effectiveness of the proposed method. Given the recent emphasis on open vocabulary scenarios for object detection, I would expect an answer for **W5** as well.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to Authors' Rebuttal\", \"comment\": [\"Thank you for addressing my feedback. However, I would like to raise the following concerns regarding the revisions:\", \"In W1, I requested the reporting of inference times for comparison methods. However, the authors only reported the inference time of their proposed method and did not include the inference times for comparison methods. Therefore, this concern has not been addressed.\", \"The authors claim that faster diffusion models may be proposed in the future, which will resolve the efficiency issue. However, there is a trade-off between performance and speed, so it is unclear whether RONIN's performance is sufficient. 
Additionally, to demonstrate the model's scalability, it is necessary to actually verify it with a wider variety of diffusion models, but this point has not been sufficiently discussed in the paper.\", \"The 2-step diffusion model shows significantly lower performance on the VOC dataset, falling behind other comparison methods. This suggests the improvement in performance may be limited.\", \"While they claim the method is effective in scenarios where accuracy is favored over speed, there are no experiments with such datasets. In the paper, examples like ecosystem conservation and manufacturing are mentioned at L475, and I believe conducting experiments with such datasets would make their claims more convincing. However, since they only evaluate performance on common benchmarks like PASCAL-VOC and BDD100K, I remain skeptical about applications in these areas.\"]}", "{\"metareview\": \"This paper proposes a novel zero-shot out-of-distribution detection method for object detectors based on inpainting object bounding boxes with stable diffusion. If class-conditioned inpainting deviates more from the original this is taken as an indicator of the wrong class being used for conditioning.\", \"the_reviewers_report_several_strengths\": \"interesting idea to use inpainting for OOD, improvements over baselines.\", \"weaknesses_include\": \"limited technical novelty, missing comparison to recent methods, sensitivity to hyperparameters, and inference time.\", \"additional_comments_on_reviewer_discussion\": \"In response to the reviews the authors provided a rebuttal as well as an updated version of the manuscript.\\nAfter taking into account the rebuttal and other reviews there are 3 (weak) negative recommendations and one (weak) positive recommendation. 
\\nWhile the idea of class-conditional diffusion-based inpainting for OOD detection is interesting, the paper has several weaknesses, including limited algorithmic novelty and missing comparisons with several state-of-the-art OOD detection methods, as well as in terms of runtime. Therefore the AC follows the majority recommendation for this paper.\"}", "{\"title\": \"Response to Reviewer h9zL (2/2)\", \"comment\": \"## W3: Comparison with additional baselines\\nThank you for your suggestion. Below we provide an additional comparison with the most recent CLIPN [1]. The current results suggest that our framework, RONIN, still achieves superior performance. \\n\\n| | CLIPN | RONIN |\\n|-------------------|----------------|----------------|\\n| | FPR@95 / AuROC | FPR@95 / AuROC |\\n| VOC - COCO | 43.09 / 85.45 | 27.42 / 91.09 |\\n| VOC - OpenImages | 41.74 / 89.31 | 18.04 / 93.34 |\\n| BDD - COCO | 28.89 / 92.14 | 30.16 / 92.77 |\\n| BDD - OpenImages | 44.76 / 85.78 | 30.00 / 91.60 |\\n\\n\\n## W4: Clarification on the ablation study of similarities (Table 4)\\nThanks for the constructive comment. We acknowledge that Table 4 could be improved to be more organized and supportive of our algorithm design (Equation 4). We will revise it in the final version; here is our plan.\\n\\nIn this new table, we will integrate two similar ablation studies to support our design for the triplet similarity. In the first study, we show that having all three types of similarity is important and necessary for effective similarity measurements. In the second study, given the triplet score with all three similarities, we empirically show that focusing more on the vision-language similarity than on the visual similarity, i.e. $\\\\alpha > \\\\beta$, enables our method RONIN to perform at its peak. 
\\n\\nAdditionally, we recognize that the performance of our framework RONIN can be sensitive to different choices of both $\\\\alpha$ and $\\\\beta$, making them effective hyperparameters for tuning our method to tackle difficult settings. Initially, increasing $\\\\alpha$ and $\\\\beta$ does improve the accuracy, as you pointed out. However, our experiments suggest that keeping $\\\\beta = 1$ is actually the most stable choice, and for $\\\\alpha \\\\geq 4$, the performance of our framework no longer improves significantly. For convenience and simplicity, we simply select $\\\\beta = 1$ and $\\\\alpha = 2$ for our algorithm design as in Equation 4.\\n\\n## W5: The effectiveness of RONIN in open-vocabulary scenarios\\nThank you for raising this very interesting question. To our understanding, an open-vocabulary object detector has the ability to detect any kind of object given a provided set of objects to detect. In other words, it requires the user to provide a set of candidate ID classes to operate. So based on this understanding, we see no obstacle to applying our method to open-vocabulary object detection. Here we conducted a very simple study by taking several images from the VOC-COCO setting, containing horses (in-vocabulary) and zebras (out-of-vocabulary), and we ran GroundingDINO [2], a state-of-the-art open-vocabulary object detector. The object detector easily detects horses, but we found a few zebras falsely detected as horses. These false detections can be identified with our framework, RONIN, as discussed in our response to W2.2. \\n\\n## W6: Minor errors\\nThank you for pointing those out; we will correct these errors in the final revision of our paper. 
We would like to explain that the name of our framework **RONIN** is the abbreviation of _\\\"Ze**R**o-shot OOD C**ON**textual **IN**painting\\\"_.\\n\\nAgain, we appreciate your valuable and constructive feedback, which will significantly improve our study. We kindly hope that you find these responses satisfying and give us a chance to clarify any remaining unclear points.\\n\\n[1] Wang et al. \\\"Clipn for zero-shot ood detection: Teaching clip to say no.\\\" ICCV 2023. \\n[2] Liu et al. \\\"Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection.\\\" ECCV 2024.\"}", "{\"comment\": \"Thanks for your valuable feedback to further improve our manuscript. Below, we respond to most, if not all, of your comments at this stage.\\n\\n## W1: High reliance on the generative model\\nThank you for your comment, and we agree that our method, RONIN, does rely on an effective off-the-shelf diffusion model with well-trained, generalizable performance to perform OOD detection in a zero-shot manner. This reliance is actually similar to that of several existing methods, including [1], [2], and [3], which also develop their own methods based on effective pre-trained generative models. We deliberately tried to keep the inpainting model intact to ensure the generalizability of our framework, as newly released off-the-shelf generative models can be conveniently plugged in without additional effort. However, to demonstrate the effectiveness of our framework without heavy reliance on the performance of the generative model, we conducted an additional experiment where we compared the default StableDiffusion 2 with two other sub-optimal generative models, StableDiffusion 1 and StableDiffusionXL, for our framework. Additionally, we further blurred the generated outcomes to simulate an \\u201cunderperforming\\u201d diffusion model with poor inpainting results. The results suggest that our framework is still able to perform similarly without significant deviation. 
In fact, we found that RONIN operates effectively with an acceptable reconstruction, without the need for high-quality or detailed generated outcomes.\\n\\n| **RONIN with different Diffusion Models** | **VOC vs COCO** | **VOC vs OpenImages** |\\n|:-----------------------------------------:|:---------------:|:---------------------:|\\n| | FPR / AuROC | FPR / AuROC |\\n| StableDiffusion 1 | 29.9 / 89.36 | 23.70 / 92.36 |\\n| StableDiffusion 1 + blur | 33.81 / 88.52 | 25.22 / 91.66 |\\n| StableDiffusion XL | 32.99 / 90.83 | 22.17 / 92.61 |\\n| StableDiffusion XL + blur | 31.96 / 90.64 | 21.96 / 92.09 |\\n| StableDiffusion 2 | 27.42 / 91.41 | 18.91 / 92.96 |\\n| StableDiffusion 2 + blur | 27.42 / 91.22 | 19.35 / 92.67 |\\n\\n[1] Zhao et al. \\\"Unleashing Text-to-Image Diffusion Models for Visual Perception.\\\" CVPR 2024.\\n\\n[2] Luo et al. \\\"Diff-Instruct: A Universal Approach for Transferring Knowledge From Pre-trained Diffusion Models.\\\" NeurIPS 2024.\\n\\n[3] Mou et al. \\\"T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models.\\\" AAAI 2024.\\n\\n## W2: Clarification on the high resource consumption\\n\\nThank you for raising the concern. We find that operating our framework, with both the image inpainting and the similarity comparison, is actually not costly, especially given modern computational resources. As discussed in Section 4.2 (Lines 191-211) of the original manuscript, we designed a \\u201cclass-wise inpainting\\u201d strategy to significantly reduce the number of inpaintings per image, making RONIN highly efficient. Additionally, for the rebuttal, we have just designed a new inpainting schedule and denoising process that allows the generative model to perform image inpainting with only two steps while approximately maintaining the performance reported in the original paper, which used 20 inpainting steps. 
This means the new schedule allows RONIN to perform 10 times faster, approaching near real-time performance. Below we provide the time comparison of our framework RONIN, with both the original schedule of 20 steps and the improved schedule of 2 steps, across the four main settings. We will elaborate on this finding in the final version of our paper.\\n\\n**Runtime in Seconds per Object**\\n\\n|ID-OOD setting|RONIN with 20 steps (original schedule)|RONIN with 2 steps (improved schedule)|\\n|-|-|-|\\n|VOC-COCO|0.35|0.034|\\n|VOC-OpenImages|0.42|0.041|\\n|BDD-COCO|0.19|0.018|\\n|BDD-OpenImages|0.18|0.017|\\n\\n**OOD Detection Performance**\\n\\n| | RONIN (20 steps) | RONIN (2 steps) | \\n|-------------------|----------------|----------------|\\n| | FPR@95 / AuROC | FPR@95 / AuROC |\\n| VOC - COCO | 27.42 / 91.09 | 37.11 / 87.54 | \\n| VOC - OpenImages | 18.04 / 93.34 | 28.48 / 90.52 | \\n| BDD - COCO | 30.16 / 92.77 | 28.57 / 92.98 | \\n| BDD - OpenImages | 30.00 / 91.60 | 31.11 / 91.37 | \\n\\n\\n## W3: Further validation\\nThank you for your suggestion. We are actively developing new OOD settings and validations, as preparing the dataset is time-consuming. We will update you in the next few days.\\n\\nAgain, we appreciate your valuable opinions and reviews, which will significantly improve our study. Please also kindly give us a chance to clarify any remaining unclear points.\", \"title\": \"Response to Reviewer 9VVi\"}", "{\"comment\": \"I thank the authors for the rebuttal. My concern regarding the lack of comparisons with recent methods has been addressed. However, I am still not fully convinced about the technical contribution of the paper, and I share the concerns raised by other reviewers regarding the computational cost, which I consider to be a weakness. As such, I will maintain my original score.\"}" ] }
7f5hNhzVAe
Robust Domain Generalisation with Causal Invariant Bayesian Neural Networks
[ "Gael Gendron", "Michael Witbrock", "Gillian Dobbie" ]
Deep neural networks can obtain impressive performance on various tasks under the assumption that their training domain is identical to their target domain. Performance can drop dramatically when this assumption does not hold. One explanation for this discrepancy is the presence of spurious domain-specific correlations in the training data that the network exploits. Causal mechanisms, on the other hand, can be made invariant under distribution changes as they allow disentangling the factors of variation underlying the data generation. Yet, learning causal mechanisms to improve out-of-distribution generalisation remains an under-explored area. We propose a Bayesian neural architecture that disentangles the learning of the data distribution from the inference process mechanisms. We show theoretically and experimentally that our model approximates reasoning under causal interventions. We demonstrate the performance of our method, outperforming point-estimate counterparts, on out-of-distribution image recognition tasks where the data distribution acts as a strong adversarial confounder.
[ "Causality", "Domain Generalisation", "Variational Inference" ]
Reject
https://openreview.net/pdf?id=7f5hNhzVAe
https://openreview.net/forum?id=7f5hNhzVAe
ICLR.cc/2025/Conference
2025
{ "note_id": [ "xluelT3kee", "taOAmIEKxm", "n4oJMKsqBR", "ggr3vqqW4n", "bqz5bDikx4", "ReUHoIBxgN", "Q2IjALDqAR", "LuLkrXtApL", "K5NH2Wgq3T", "HDeKX8iqg3", "GlPD2nKAXL", "CEPQYjYL8M", "AIADY0mQeN", "5s57qHBrCx", "54zfoe9YiB", "3VZ9BYDg5m", "1YBctI2YJ0" ], "note_type": [ "official_comment", "decision", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732476827149, 1737523422235, 1732616116044, 1732686808057, 1730507851669, 1730018652156, 1732358637624, 1732358169958, 1732616103117, 1734938887405, 1732358743221, 1732358204223, 1732616244519, 1730580387175, 1732584031195, 1732358274269, 1733284316321 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission902/Area_Chair_FVEK" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission902/Authors" ], [ "ICLR.cc/2025/Conference/Submission902/Reviewer_Ee74" ], [ "ICLR.cc/2025/Conference/Submission902/Reviewer_GRc5" ], [ "ICLR.cc/2025/Conference/Submission902/Reviewer_uvMu" ], [ "ICLR.cc/2025/Conference/Submission902/Authors" ], [ "ICLR.cc/2025/Conference/Submission902/Authors" ], [ "ICLR.cc/2025/Conference/Submission902/Authors" ], [ "ICLR.cc/2025/Conference/Submission902/Area_Chair_FVEK" ], [ "ICLR.cc/2025/Conference/Submission902/Authors" ], [ "ICLR.cc/2025/Conference/Submission902/Authors" ], [ "ICLR.cc/2025/Conference/Submission902/Authors" ], [ "ICLR.cc/2025/Conference/Submission902/Reviewer_Ee74" ], [ "ICLR.cc/2025/Conference/Submission902/Reviewer_uvMu" ], [ "ICLR.cc/2025/Conference/Submission902/Authors" ], [ "ICLR.cc/2025/Conference/Submission902/Authors" ] ], "structured_content_str": [ "{\"comment\": \"Dear Reviewers,\\n\\nThis is a gentle reminder that the authors have submitted their rebuttal, and the discussion period 
will conclude on November 26th AoE. To ensure a constructive and meaningful discussion, we kindly ask that you review the rebuttal as soon as possible and verify if your questions and comments have been adequately addressed.\\n\\nWe greatly appreciate your time, effort, and thoughtful contributions to this process.\\n\\nBest regards, AC\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"Regarding your second question: this is a good point! A majority of the previous work assumes that only domain-invariant information should be learned by the network (e.g. [1] learns $P(Y|Z_R)$). However, recent work has pointed out that domain-specific information remains helpful for the task and that the issue lies in the entanglement of the two mechanisms (e.g. [2]). As an example, if we train a classifier on two domains $\\\\mathcal{D}_1$ and $\\\\mathcal{D}_2$ that have different and very imbalanced label distributions (e.g. $\\\\mathcal{D}_1$ largely favours label 1 and $\\\\mathcal{D}_2$ largely favours label 2), keeping domain information is useful for the classification. Interventional queries effectively disentangle prior domain knowledge from inference mechanisms, taking advantage of all the available information [12]. We also show using transportability theory [13] (Section 3, transportability paragraph) that, from the causal structure induced by supervised learning, some components of the inference network cannot be made domain-invariant so we have to use domain knowledge in the computation.\\n\\nWe hope this answers your question on the motivations behind our problem formulation. We will make it clearer in the paper. This is an interesting point and we are happy to discuss this topic further.\\n\\n\\n[1] Liu, Chang, Xinwei Sun, Jindong Wang, Haoyue Tang, Tao Li, Tao Qin, Wei Chen, and Tie-Yan Liu. \\\"Learning causal semantic representation for out-of-distribution prediction.\\\" NeurIPS, 2021.\\n\\n[2] Lv, Fangrui, et al. 
\\\"Causality inspired representation learning for domain generalization.\\\" Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022.\\n\\n[7] Ouyang, Cheng, et al. \\\"Causality-inspired single-source domain generalization for medical image segmentation.\\\" IEEE Transactions on Medical Imaging 42.4 (2022): 1095-1106.\\n\\n[8] Derakhshani, Mohammad Mahdi, et al. \\\"Bayesian prompt learning for image-language model generalization.\\\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.\\n\\n[9] De Haan, Pim, Dinesh Jayaraman, and Sergey Levine. \\\"Causal confusion in imitation learning.\\\" NeurIPS, 2029.\\n\\n[10] Zhang, Amy, Clare Lyle, Shagun Sodhani, Angelos Filos, Marta Kwiatkowska, Joelle Pineau, Yarin Gal, and Doina Precup. \\\"Invariant causal prediction for block mdps.\\\" ICML, 2020.\\n\\n[11] Bica, Ioana, Daniel Jarrett, and Mihaela van der Schaar. \\\"Invariant causal imitation learning for generalizable policies.\\\" NeurIPS, 2021.\\n\\n[12] Pearl, J. (2009). Causality. Cambridge university press.\\n\\n[13] Jalaldoust, K., & Bareinboim, E. (2024, March). Transportable representations for domain generalization. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 38, No. 11, pp. 12790-12800).\"}", "{\"comment\": \"Thank you for your response. It solves some of my questions. However, I remain unconvinced about the motivation for using BNN. Could you elaborate further on why BNN is particularly well-suited for learning interventional queries and why it is only applied in the inference component? Additionally, the experimental evaluation part is insufficient as it is limited to ResNet-18. Finally, the loss function includes five terms, requiring four hyperparameters to be tuned, which makes it challenging to generalize the method to different settings. 
Based on these concerns, I will maintain my current score.\"}", "{\"summary\": \"This paper proposes an intervention mechanism that can be added to a Bayesian inference pipeline to solve out of distribution tasks. The method improves in distribution and out of distribution performance in an unsupervised manner. Intervention is made by leveraging contextual information from within the dataset using Mixup strategies. The inference network is a partially stochastic Bayesian neural network. A theoretical result is derived for when the intervention is made taking into account conditioning on the training dataset. This result is then used to show performance improvement on CIFAR10 and OFFICEHOME datasets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The novelty of the paper is to apply causal interventions in a Bayesian inference pipeline and show improvements on out of distribution data compared to point estimate based networks.\\n2. Another novel aspect compared to the baseline used is considering datasets in the causal graph and deriving the theoretical result and its execution with partially stochastic networks.\\n3. The paper provides a methodology to learn causal representations in an unsupervised manner.\\n4. The underlying concepts for Bayesian networks and causality are adequately explained.\\n5. The authors provide an anonymized code repository for review.\", \"weaknesses\": \"1. Experiments - The authors show improvements on datasets with translations of the CIFAR10 and OFFICEHOME datasets. Results on datasets with other types of commonly seen o.o.d. variations like different backgrounds would make a more convincing point.\\n2. Visualizations for results would be helpful to understand the domain gap and analyze improvements with the proposed method.\\n3. 
While the architecture is well described in the paper, the training and evaluation algorithm could be described with more clarity and details on how the theoretical result is used.\", \"questions\": \"1. How much is the performance improvement due to the proposed intervention design vs the use of Bayesian inference over point-estimate methods?\\n2. While other files are present, the trainer.py file in the anonymized code repo is empty. Would appreciate access to understand the method better.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper exploits the invariance of causal mechanisms under distribution changes to improve out-of-distribution generalisation.\\nIt proposes the Causal-Invariant Bayesian (CIB) neural network. The CIB architecture combines a variational encoder, an inference module, and a Bayesian neural network,\\naiming to learn domain-invariant representations. Theoretically, the model is shown to approximate reasoning under causal interventions.\\nExperimentally, the model improves out-of-distribution generalisation in image recognition tasks.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1. It is interesting to exploit the causal mechanism invariance to improve out-of-distribution generalisation.\\n2. Both theoretical and empirical evidence are provided to validate the proposed architecture. \\n3. The formulations are clearly presented and illustrated.\", \"weaknesses\": \"1. It is not a new idea to exploit the causal mechanism invariance to improve out-of-distribution generalisation.\\nMany works have already used this idea to improve out-of-distribution generalization, e.g., [1] [2] [3] [4].\\nThe paper does not discuss these related works sufficiently or compare the proposed method to these approaches.\\n2. 
The experiments only compare a single method for out-of-distribution generalization, which does not even appear to outperform the vanilla baseline (Table 1).\\nThe comparison is insufficient to prove the superiority of the proposed method.\\n3. The proposed architecture seems to be an integration of several existing methods.\\n\\n\\n\\n[1] De Haan, Pim, Dinesh Jayaraman, and Sergey Levine. \\\"Causal confusion in imitation learning.\\\" NeurIPS, 2019.\\n\\n[2] Zhang, Amy, Clare Lyle, Shagun Sodhani, Angelos Filos, Marta Kwiatkowska, Joelle Pineau, Yarin Gal, and Doina Precup. \\\"Invariant causal prediction for block mdps.\\\" ICML, 2020.\\n\\n[3] Bica, Ioana, Daniel Jarrett, and Mihaela van der Schaar. \\\"Invariant causal imitation learning for generalizable policies.\\\" NeurIPS, 2021.\\n\\n[4] Liu, Chang, Xinwei Sun, Jindong Wang, Haoyue Tang, Tao Li, Tao Qin, Wei Chen, and Tie-Yan Liu. \\\"Learning causal semantic representation for out-of-distribution prediction.\\\" NeurIPS, 2021.\", \"questions\": \"1. What are the novelty and the superiority of the proposed method over other methods that enhance out-of-distribution generalization through causal mechanism invariance?\\n2. What are the novel aspects of the proposed architecture beyond integrating existing techniques?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank all the reviewers for their extensive and constructive feedback. The reviewers agree that in its current stage, our paper does not provide sufficient comparison with other domain generalisation baselines. We are currently working on improving this part and we provide an in-depth comparison between our work and related studies on domain generalisation as suggested by the reviewers. 
We will add numerical comparison on standard benchmarks in a second general comment once the additional experiments are finished.\", \"regarding_the_conceptual_differences_between_our_method_and_existing_work\": \"The Causal Semantic Generative (CSG) model [1] is a method for o.o.d generalisation relying on causal invariance. It differs from our work as it uses different assumptions on the causal generative mechanisms, i.e. CSG considers the input $X$ and output $Y$ to be caused by shared latent factors but not to be directly causally related, leading to a different derivation of the quantity to estimate. We aim to compute the interventional quantity P(Y|do(x)) while CSG optimises P(Y|$Z_R$). It means that CSG attempts to use domain-invariant information only while we take advantage of contextual domain-specific information (by learning to separate it from domain-invariant mechanisms). The method also assumes the prior knowledge on the distribution to be a factorisation of the domain-invariant and domain-specific variables, while we do not require this assumption. In addition, we explicitly model the domain shift using transportability theory and, as a result of the different derivation, we include marginalisations on the input domain and the inference weights to estimate interventional queries. \\n\\nThe Causality Inspired Representation Learning (CIRL) [2] is a similar method, taking advantage of a similar causal graph. The method considers that non-causal factors are important for classification and proposes to build contextual images by applying modifications of the input in the frequency domain (using Fourier transform). We similarly use contextual images to learn domain-specific information but we use sampling from the distribution to increase the diversity of the representation and not be conditioned on the instance label. 
CIRL encodes input and contextual images simultaneously, similarly to our work, and uses a factorisation loss to induce the dimensions of the latent representations to represent causal factors and be independent. However, no guidance is provided as to how the causal factors should be disentangled (unsupervised learning setting), thereby providing no guarantees that the factors are disentangled or causal, since it was shown by [3] that unsupervised disentanglement is generally impossible. Our work differs by learning the parameters of latent distributions instead of point estimates to regularise the latent space (both for the image embeddings and for the classifier weights). We also do not attempt to directly disentangle the causal factors (following the results of [3]) but only separate the causal and non-causal factors, relaxing CIRL's assumptions.\\n\\n[4] is a different approach to domain generalisation not relying on causal principles. Similarly to our work, the method learns the parameters of a domain-invariant distribution but does not take advantage of the domain-specific representation. The objective function also differs as the method does not aim to optimise an interventional query. Furthermore, in our work, we justify the use of Bayesian layers as part of the derivation of the quantity P(Y|do(X)) to estimate (to block a backdoor path between the input domain and the output class).\"}", "{\"comment\": \"Thank you for your review; we are sorry that you find that our work lacks novelty and experiments and has unclear motivations. We hope that our explanations below can help clear up any confusion you may have about the paper. Regarding the lack of references, thank you very much for pointing it out; we acknowledge that our work lacks comparisons with other domain generalisation methods. 
We have integrated this feedback in our paper and added comparisons between our method and existing approaches, highlighting the conceptual differences between the methods, and conducted comparative experiments on standard benchmarks.\\n\\n\\nExperiments are currently running but we hope to add the results before the end of the discussion phase. In the meantime, please find below the answers to your other concerns:\\n\\nWe would like to clarify in the strengths section that we compare our model with a base ResNet-18 corresponding to the backbone architecture to which we apply our method and not with a VAE. The second CT baseline contains a VAE as part of its architecture but it is also based on ResNet-18 as its backbone.\", \"regarding_your_questions_on_the_motivation_of_our_work\": \"1. We aim to train a model that computes interventional queries P(y|do(x)). Our work is motivated by the observation that existing studies do not consider spurious correlations arising from the conditioning on the training set (as shown in Figure 1). From this observation, we derive a quantity that allows learning interventional queries while taking this bias into account. This derivation includes a marginalisation over the set of possible weights to block a backdoor path, justifying the use of Bayesian neural networks to perform this marginalisation. The rest of the architecture directly follows from this derivation. The combination of the input and contextual embeddings allows the model to differentiate domain-specific information (about the current distribution, provided by the context) and domain-invariant knowledge (learned from the input).\", \"regarding_your_comments_on_the_experiments\": \"1. We provide ablation studies where we modify the amount of sampling from the BNN and inference components in Figures 5 and 6. These results show that using a BNN has little impact on the final performance but helps the learning to be more sample efficient.\\n\\n2. & 3. 
We have added comparisons with the missing references. Please refer to the general comment above to see the details and additional experiments. We are currently running experiments on the datasets of the DomainBed benchmark.\\n\\n4. & 5. Similarly to the domain generalisation baselines, our model is tailored for image classification tasks with CNNs. While our proposed method can theoretically be transferred to another backbone like the ViT and to other tasks, it would require many adaptations specific to that domain and architecture. Moreover, the domain generalization baselines are all based on ResNet, allowing comparison of the proposed methods for the same backbone model. Transferring our method to other backbones and domains is an interesting direction that we will consider as part of our future work.\", \"regarding_your_additional_questions\": \"1. Our theoretical work shows that a reconstruction loss is not needed for inference. Intuitively, this can be explained by the fact that a reconstruction loss incites the model to learn details that are domain-specific. We aim to learn domain-invariant functions by putting the domain information into the contextual image embeddings (from the marginalisation over the input domain).\\n\\n2. We indeed use multiple KL divergence terms in our objective function, but we only use them to regularise parametric distributions towards the standard normal distribution. So the quantity can be simplified as follows: $KL(\\mathcal{N}(\\nu,\\sigma)||\\mathcal{N}(0,1)) = \\frac{1}{2} \\sum\\limits_{i=1}^D (\\sigma_i^2 + \\nu_i^2 - 1 - \\ln \\sigma_i^2)$, which has linear complexity in the dimension size D ($\\mathcal{O}(D)$). Does this answer your question?\\n\\n3. 
We run hyperparameter grid search on the validation set of CIFAR-10 to find optimal hyperparameter values.\\n\\nPlease, let us know if our response answers your concerns and if you would consider raising your score in the light of additional experiments and baseline comparisons. Otherwise, let us know what else we should improve.\"}", "{\"comment\": \"Thank you for your response. We understand that our initial comment did not provide enough information on the differences between our work and [7,8,9,10,11].\", \"regarding_the_scope_of_our_work\": \"although the theoretical foundations of our work are not specific to image classification, the implementation we propose is tailored for this task. Extending it to other tasks is a challenging open problem due to the specificities of the possible modalities involved. For example, the causal variables $X$ and $Y$ are not elucidated in our theoretical section but their modalities have a high impact on the downstream architecture: the marginalisation over the domain of $X$ can be estimated by sampling because instances of $X$ are images. Other modalities may have better strategies, e.g. inputs of smaller dimensions may not require sampling but could be directly estimated. Similarly, the probability $P(Y|do(X))$ can be directly estimated because $Y$ is a discrete class variable. Other modalities (e.g. continuous values or image segmentations) would require additional research to create different adaptations. Therefore, we follow the approach of previous work on o.o.d generalisation and focus on domain shifts with image classification.\", \"regarding_the_specific_differences_between_our_work_and_existing_approaches\": \"[7] augments input images using randomly initialised convolutional networks and trains a segmentation model to make the same prediction on the augmented version of the input image. 
While our approach differs as we use in-domain sampling, this augmentation approach is similar to that of [2], although this augmentation technique only modifies image intensity and texture and does not consider changes in shape or translations/rotations. This is well suited for medical images and for segmentation tasks, as the augmentation does not alter the ground-truth segmentation mask, but not for the more general case considered in our work.\\n\\n[8] proposes a prompt learning method for image-language models. The aim of the paper is not to train a parametric model to solve a task but to optimise the input prompt given to a language model to boost its performance. To regularise training and reduce overfitting, the authors formulate the problem as estimating the prompt space distribution and use a Bayesian method to solve it. The problem settings are very different as the target distribution is a general prior over the possible prompts while we aim to compute an (interventional) posterior probability $P(Y|do(X))$.\\n\\n[9] tackles the problem of causal misidentification in imitation learning. The proposed method attempts to learn the true causal model behind the policy of experts acting in an environment. This problem differs from ours as it aims to discover the complete causal structure underlying a policy while we aim to elucidate one causal query given a known causal structure (see Figure 1). More importantly, this work assumes access to interventional data from the environment, either by querying an expert or by directly acting in the environment, while we are in a supervised setting and only have access to a fixed set of observations. [10] and [11] are similar reinforcement learning methods and share the same differences. \\n\\nThank you for pointing out our lack of clarity; we hope that this second comment elucidates the differences with our work. 
We added this information to our general comment above.\"}", "{\"metareview\": \"The paper proposes a theoretical framework combining causality principles with Bayesian neural networks. While the theoretical contribution and the integration of recent advances in partially-stochastic Bayesian networks were appreciated, the paper has several critical limitations. The empirical validation is limited to just two datasets with only modest performance improvements, which don't convincingly demonstrate the practical value of the added model complexity. Finally, the paper does not justify its architectural design decisions, addresses limited tasks, and has insufficient comparisons against other recent domain generalization methods.\", \"additional_comments_on_reviewer_discussion\": \"See above for salient concerns. The authors did not provide sufficient motivation and empirical evidence to justify the proposed method (reviewer Ee74) and establish clear differentiation with prior works (reviewer uvMu) in this space. These are the primary grounds for rejection of the work. The former especially is significant for the work to have long-term impact in this area.\"}", "{\"comment\": \"The Information theory iNspired diSentanglement and pURification modEl (INSURE) [5] is another method not relying on causality theory (although information theory is arguably very closely linked [6]). It simultaneously learns a class representation and a domain representation, trained on domain and label classification tasks. A Mutual Information loss is used to incite the representations to be independent and a mixup strategy is used to further disentangle the two concepts by training the model on two images with inverted domain representations. Our work differs by learning the parameters of the latent distributions instead of point estimates to regularise the latent space and by representing domain-specific information implicitly. 
INSURE uses a domain prediction loss to disentangle class information from domain information while we use implicit regularisation to avoid overfitting to this auxiliary task and make sure this representation still contains useful information for class prediction, even out-of-distribution.\\n\\n[7,8,9,10,11] are studies applying causality theory and Bayesian methods to different problems than the one considered in our work: [7] uses causality to improve image segmentation on medical data and [8] uses a Bayesian method to learn model prompts for language models. [9,10,11] are causal reinforcement learning methods.\\n\\nWe will include these explanations in the next version of the paper.\\n\\n\\n[1] Liu, Chang, Xinwei Sun, Jindong Wang, Haoyue Tang, Tao Li, Tao Qin, Wei Chen, and Tie-Yan Liu. \\\"Learning causal semantic representation for out-of-distribution prediction.\\\" NeurIPS, 2021.\\n\\n[2] Lv, Fangrui, et al. \\\"Causality inspired representation learning for domain generalization.\\\" Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022.\\n\\n[3] Locatello, F., Bauer, S., Lucic, M., Raetsch, G., Gelly, S., Sch\\u00f6lkopf, B., & Bachem, O. (2019, May). Challenging common assumptions in the unsupervised learning of disentangled representations. In international conference on machine learning (pp. 4114-4124). PMLR.\\n\\n[4] Xiao, Zehao, et al. \\\"A bit more bayesian: Domain-invariant learning with uncertainty.\\\" International conference on machine learning. PMLR, 2021.\\n\\n[5] Yu, Xi, et al. \\\"INSURE: an Information theory iNspired diSentanglement and pURification modEl for domain generalization.\\\" IEEE Transactions on Image Processing (2024).\\n\\n[6] Arjovsky, M., Bottou, L., Gulrajani, I., & Lopez-Paz, D. (2019). Invariant risk minimization. arXiv preprint arXiv:1907.02893.\\n\\n[7] Ouyang, Cheng, et al. 
\\\"Causality-inspired single-source domain generalization for medical image segmentation.\\\" IEEE Transactions on Medical Imaging 42.4 (2022): 1095-1106.\\n\\n[8] Derakhshani, Mohammad Mahdi, et al. \\\"Bayesian prompt learning for image-language model generalization.\\\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.\\n\\n[9] De Haan, Pim, Dinesh Jayaraman, and Sergey Levine. \\\"Causal confusion in imitation learning.\\\" NeurIPS, 2029.\\n\\n[10] Zhang, Amy, Clare Lyle, Shagun Sodhani, Angelos Filos, Marta Kwiatkowska, Joelle Pineau, Yarin Gal, and Doina Precup. \\\"Invariant causal prediction for block mdps.\\\" ICML, 2020.\\n\\n[11] Bica, Ioana, Daniel Jarrett, and Mihaela van der Schaar. \\\"Invariant causal imitation learning for generalizable policies.\\\" NeurIPS, 2021.\"}", "{\"comment\": \"Thank you for the positive and very detailled review! We are happy that you find our paper of interest. The lack of experiments and comparisons has been pointed out by the other reviewers so we are currently running additional experiments. We hope to add them before the end of the discussion phase.\", \"regarding_your_questions\": \"1. We conduct ablation studies in Figures 5 and 6 that show that the use of BNNs has little impact on the final performance but helps improving sample efficiency during training.\\n\\n2. Thank you for pointing this out! Fortunately, the trainer.py file is an empty legacy file unused in the latest version of the code. It will be removed in the future. The full code of the method is available in the provided repository.\"}", "{\"comment\": \"As requested, we provide additional details on the differences between our work and studies [7,8,9,10,11] below:\\n\\n[7] augments input images using randomly initialised convolutional networks and trains a segmentation model to make the same prediction on the augmented version of the input image. 
While our approach differs as we use in-domain sampling, this augmentation approach is similar to that of [2], although this augmentation technique only modifies image intensity and texture and does not consider changes in shape or translations/rotations. This is well suited for medical images and for segmentation tasks, as the augmentation does not alter the ground-truth segmentation mask, but not for the more general case considered in our work.\\n\\n[8] proposes a prompt learning method for image-language models. The aim of the paper is not to train a parametric model to solve a task but to optimise the input prompt given to a language model to boost its performance. To regularise training and reduce overfitting, the authors formulate the problem as estimating the prompt space distribution and use a Bayesian method to solve it. The problem settings are very different as the target distribution is a general prior over the possible prompts while we aim to compute an (interventional) posterior probability $P(Y|do(X))$.\\n\\n[9] tackles the problem of causal misidentification in imitation learning. The proposed method attempts to learn the true causal model behind the policy of experts acting in an environment. This problem differs from ours as it aims to discover the complete causal structure underlying a policy while we aim to elucidate one causal query given a known causal structure (see Figure 1). More importantly, this work assumes access to interventional data from the environment, either by querying an expert or by directly acting in the environment, while we are in a supervised setting and only have access to a fixed set of observations. [10] and [11] are similar reinforcement learning methods and share the same differences.\"}", "{\"summary\": \"This paper proposes a Bayesian neural architecture that disentangles learning of the data distribution from inference during the learning process. 
It outperforms its counterparts based on point estimates.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"It outperformed the VAE on two datasets of image classification.\", \"weaknesses\": \"Overall, this paper lacks novelty, the motivation is unclear, the experimentation is insufficient, and there is a lack of references concerning domain generalization.\\n\\n### Novelty:\\n\\n1. Using causality or Bayesian methods for domain generalization is not novel and has already been investigated in [1,2,3]. Specifically, [1,2] proposed causality-inspired methods for domain generalization. [3] proposed Bayesian learning to extract domain-invariant features.\\n\\n2. The proposed method only works on the image classification task and only on CNN-based architectures.\\n\\n### Motivation:\\n\\n1. The motivation is unclear. Could you elaborate on your rationale for using Bayesian Neural Networks for disentanglement? What are the benefits of combining domain-invariant and specific parts during inference? \\n\\n### Experiments:\\n\\n1. Adding ablation studies for the KL divergence term of the weights to demonstrate the necessity of the Bayesian network. Or only use the domain-invariant part at inference time.\\n2. Comparison methods are insufficient. Please compare all the methods in the \\\"Missing References\\\", and highlight the differences.\\n3. Only two simple datasets, CIFAR10 and Office Home. Please add more datasets, e.g., the DomainBed benchmark, which includes five datasets (PACS, VLCS, OfficeHome, Terra, DomainNet) and is the most widely used benchmark for domain generalization.\\n\\n4. Only CNN-based architectures. Please use other widely used backbones like Vision Transformers such as ViT-B-16.\\n\\n5. Only tested on the image classification task; please test on a time series task as well.\\n\\n### Missing References:\\n\\n[1] Lv, Fangrui, et al. 
\\\"Causality inspired representation learning for domain generalization.\\\" Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022.\\n\\n[2] Ouyang, Cheng, et al. \\\"Causality-inspired single-source domain generalization for medical image segmentation.\\\" IEEE Transactions on Medical Imaging 42.4 (2022): 1095-1106.\\n\\n[3] Xiao, Zehao, et al. \\\"A bit more bayesian: Domain-invariant learning with uncertainty.\\\" International conference on machine learning. PMLR, 2021.\\n\\n[4] Derakhshani, Mohammad Mahdi, et al. \\\"Bayesian prompt learning for image-language model generalization.\\\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.\\n\\n[5] Yu, Xi, et al. \\\"INSURE: an Information theory iNspired diSentanglement and pURification modEl for domain generalization.\\\" IEEE Transactions on Image Processing (2024).\", \"questions\": \"1. Without a reconstruction loss, how can it ensure that no information is lost (like domain-invariant information) in achieving disentanglement?\\n\\n2. Please add computation complexity in the table, since the objective function contains two KL terms.\\n\\n3. How to choose the hyper-parameter in front of each terms?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your response.\\n\\nI remain unconvinced about the novelty of the work. On the one hand, the general comment states \\\"[7,8,9,10,11] are studies applying causality theory and Bayesian methods to different problems than the one considered in our work\\\"; on the other hand, it looks like the work aims to propose a general learning method rather than focus on a certain problem. 
In this light, the current discussion of related work does not sufficiently establish how the proposed method is novel compared to those cited.\\n\\nAdditionally, the work claims to rely on an interventional quantity rather than domain-invariant information. However, isn't domain invariance the best we can hope for in achieving robust domain generalization?\"}", "{\"comment\": \"Thank you for your review and for pointing out the lack of references on existing domain generalisation studies! We acknowledge that our work lacks comparisons with other methods. We have integrated this feedback into our paper, added comparisons between our method and existing ones highlighting the conceptual differences, and conducted comparative experiments on standard benchmarks.\", \"regarding_your_questions\": \"1. We have added comparisons with the missing references. Please refer to the general comment above for the details. We are currently running the experiments and hope to add them before the end of the discussion phase.\\n\\n2. Our work builds upon causal representation learning techniques for inducing domain-invariant mechanisms in neural networks. As such, our work aims to train the model to compute interventional queries P(y|do(x)). It is motivated by the observation that existing studies do not consider spurious correlations arising from conditioning on the training set (as shown in Figure 1). From this observation, we derive a quantity that allows learning interventional queries while taking this bias into account. This derivation is the main contribution of our work, and our proposed architecture is a direct result of it. Additional differences with existing work are highlighted in the general comment at the top.\\n\\nPlease let us know if our response answers your concerns and if you would consider raising your score in the light of additional experiments and baseline comparisons.
Otherwise, let us know what else we should improve.\"}", "{\"comment\": \"Thank you for your response and for your time. The inclusion of a BNN is motivated by the derivation of the interventional query. This query, by construction, removes the causal factors affecting the input $X$ and potential biases arising from spurious correlations or unbalanced distributions, thus representing a domain-invariant quantity. However, this quantity cannot be trivially estimated: our derivation shows that a frontdoor and a backdoor path must be blocked to compute it from observations (by removing the do operation). The frontdoor path is blocked by the double marginalisation over $X$ and $R$, and the backdoor path is blocked by the third marginalisation over $W$ (using the frontdoor and backdoor criteria). A BNN represents a distribution over weights instead of point estimates and can be used to perform this marginalisation. This finding is supported by previous work that showed the effectiveness of BNNs in differentiating aleatoric and epistemic uncertainty [1,2]. We want the inference component to handle aleatoric/model uncertainty only and not be biased by the input distribution, so we model it with a BNN. Following previous work on partially-stochastic BNNs [3], we only model part of the inference module stochastically to preserve efficiency.\\n\\nRegarding the experiments, we agree with your comment. We have duly noted the missing experiments and are currently working on them. We will include results on more benchmarks and models in the next version of the paper.\\n\\nRegarding the number of hyperparameters in the loss function, we conducted hyperparameter tuning experiments to evaluate the impact of the hyperparameter choice. We found that the three KL divergence terms are not correlated with performance for values between $10^{-5}$ and $10^{-7}$ and that the choice of the final hyperparameter has limited impact if set below $0.01$.
these results show that these hyperparameters do not affect generalisation to different settings. We will include this analysis in the next version of the paper.\\n\\n\\n[1] Kendall, A., & Gal, Y. (2017). What uncertainties do we need in bayesian deep learning for computer vision?. Advances in neural information processing systems, 30.\\n\\n[2] Magris, M., & Iosifidis, A. (2023). Bayesian learning for neural networks: an algorithmic survey. Artificial Intelligence Review, 56(10), 11773-11823.\\n\\n[3] Sharma, M., Farquhar, S., Nalisnick, E., & Rainforth, T. (2023, April). Do bayesian neural networks need to be fully stochastic?. In International Conference on Artificial Intelligence and Statistics (pp. 7694-7722). PMLR.\"}" ] }
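As a side note, the frontdoor/backdoor argument sketched in the rebuttal above can be checked numerically on a toy structural causal model. The model below (binary variables, a hidden confounder `U`, a mediator `R`, and all the probability values) is our own illustrative assumption, not the paper's actual setup; it only demonstrates that the frontdoor adjustment recovers P(Y|do(X)) from observational quantities, while the plain conditional P(Y|X) is biased by the confounder.

```python
# Toy SCM: U ~ Bern(0.5) (hidden confounder), X <- U, R <- X (mediator), Y <- R, U.
pX1_U = {0: 0.2, 1: 0.8}                      # P(X=1 | U)
pR1_X = {0: 0.3, 1: 0.9}                      # P(R=1 | X)
pY1_RU = {(0, 0): 0.1, (0, 1): 0.5,
          (1, 0): 0.6, (1, 1): 0.9}           # P(Y=1 | R, U)
B = (0, 1)

def bern(p1, v):                               # P(V=v) for binary V with P(V=1)=p1
    return p1 if v == 1 else 1.0 - p1

def joint(u, x, r, y):                         # P(U,X,R,Y) via the SCM's chain rule
    return (bern(0.5, u) * bern(pX1_U[u], x)
            * bern(pR1_X[x], r) * bern(pY1_RU[(r, u)], y))

def p_y_given_x(y, x):                         # observational P(Y=y | X=x), confounded by U
    num = sum(joint(u, x, r, y) for u in B for r in B)
    den = sum(joint(u, x, r, yy) for u in B for r in B for yy in B)
    return num / den

def p_y_do_x_frontdoor(y, x):                  # sum_r P(r|x) * sum_x' P(x') P(y|r,x')
    px = {xx: sum(joint(u, xx, r, yy) for u in B for r in B for yy in B) for xx in B}
    def p_y_given_rx(yy, r, xx):
        num = sum(joint(u, xx, r, yy) for u in B)
        return num / sum(joint(u, xx, r, y2) for u in B for y2 in B)
    return sum(bern(pR1_X[x], r)
               * sum(px[xx] * p_y_given_rx(y, r, xx) for xx in B) for r in B)

def p_y_do_x_truth(y, x):                      # ground truth: intervene on X, cutting U -> X
    return sum(bern(0.5, u) * bern(pR1_X[x], r) * bern(pY1_RU[(r, u)], y)
               for u in B for r in B)
```

With these illustrative numbers, P(Y=1|X=1) ≈ 0.798 while P(Y=1|do(X=1)) = 0.705, and the frontdoor estimate matches the interventional ground truth exactly, since R intercepts all directed paths from X to Y and is itself unconfounded given X.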
7eoN0PpKtc
FreeMorph: Tuning-Free Generalized Image Morphing with Diffusion Model
[ "Yukang Cao", "Chenyang Si", "Jinghao Wang", "Ziwei Liu" ]
We present **FreeMorph**, the first tuning-free method for image morphing that accommodates inputs with varying semantics or layouts. Unlike existing methods, which rely on fine-tuning pre-trained diffusion models and are limited by time constraints and semantic/layout discrepancies, FreeMorph delivers high-fidelity image morphing without extensive training. Despite its efficiency and potential, tuning-free methods still face challenges in maintaining high-quality image morphing due to the non-linear nature of the multi-step denoising process and bias inherited from the pre-trained diffusion model. In this paper, we introduce FreeMorph to address this challenge by integrating two key innovations. **1)** We first propose a **guidance-aware spherical interpolation** design that incorporates the explicit guidance from the input images by modifying the self-attention modules, addressing identity loss, and ensuring consistent transitions throughout the generated sequences. **2)** We further introduce a **step-oriented motion flow** that blends self-attention modules derived from each input image to achieve controlled and directional transitions that respect both input images. Our extensive evaluations demonstrate that FreeMorph outperforms existing methods with training that is 10X - 50X faster, establishing a new state-of-the-art for image morphing. The code will be released.
[ "image morphing", "diffusion model", "tuning-free method" ]
Reject
https://openreview.net/pdf?id=7eoN0PpKtc
https://openreview.net/forum?id=7eoN0PpKtc
ICLR.cc/2025/Conference
2025
{ "note_id": [ "u5mjgHNSaS", "sDZkcZMA5P", "s7EkXjI5Vd", "ofzwHQkua7", "ndZKXnMoAs", "lbj6cBVsOV", "f3eSwE04mf", "dr9CBPIvCA", "ZJ3ocorCOq", "YUdEjuTsDb", "WqSKSqMUhe", "PZYXcU5G0Q", "OWs6SeZI7T", "Cfnf9UW3Yk", "9SFFNqU1xj" ], "note_type": [ "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "decision", "meta_review", "official_comment", "official_comment", "official_review" ], "note_created": [ 1732795103897, 1733106908304, 1730667296478, 1733203517683, 1732794998782, 1730669389606, 1733069195392, 1730298914138, 1732795135185, 1732795036915, 1737523437342, 1734936721674, 1733195648156, 1733087024176, 1730297098734 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission1127/Authors" ], [ "ICLR.cc/2025/Conference/Submission1127/Reviewer_NcSG" ], [ "ICLR.cc/2025/Conference/Submission1127/Reviewer_JPFT" ], [ "ICLR.cc/2025/Conference/Submission1127/Authors" ], [ "ICLR.cc/2025/Conference/Submission1127/Authors" ], [ "ICLR.cc/2025/Conference/Submission1127/Reviewer_XAhp" ], [ "ICLR.cc/2025/Conference/Submission1127/Authors" ], [ "ICLR.cc/2025/Conference/Submission1127/Reviewer_NcSG" ], [ "ICLR.cc/2025/Conference/Submission1127/Authors" ], [ "ICLR.cc/2025/Conference/Submission1127/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission1127/Area_Chair_oWy7" ], [ "ICLR.cc/2025/Conference/Submission1127/Authors" ], [ "ICLR.cc/2025/Conference/Submission1127/Reviewer_XAhp" ], [ "ICLR.cc/2025/Conference/Submission1127/Reviewer_qmRm" ] ], "structured_content_str": [ "{\"comment\": \"# Response to reviewer `#NcSG`\\n\\n> This paper is apparently not well prepared for publication.\\n\\nThanks for the notification from the reviewer. 
We have rectified the identified typos and meticulously proofread the paper to ensure it is more refined and polished.\\n\\n> In equation (7), since m has the same size as z, what does it mean by \\u201cm=1\\u201d?\\n\\nIt means that every element of m is equal to 1, i.e., m is an all-ones mask with the same size as z.\\n\\n> The image captions of Fig.2 and Fig.3 are too simple. Readers have to move to the main text to try to understand the figures.\\n\\nThanks for the suggestion. We have carefully added explanations for Fig.2 and Fig.3 in their captions.\\n\\n> In the introduction, it mentions a comprehensive evaluation system, including a new dataset and specialized metrics. But in Sec 4.1, it says \\u201cfollowing IMPUS and DiffMorpher, we ... using the following metrics\\u201d. Where are the proposed specialized metrics?\\n\\nWe have rectified the claim in the introduction. In this paper, we present an evaluation benchmark featuring a novel dataset comprising four unique sets of image pairs classified based on their semantic and layout similarities.\\n\\n\\n> Without a human perceptual study, it\\u2019s difficult to get a conclusion when comparing the two methods.\\n\\nWe have carried out user studies involving 30 volunteers, encompassing a diverse group of participants such as animators, AI researchers, and gaming enthusiasts, within the age range of 20 to 35.
The results are provided below, showcasing the effectiveness of our proposed method through a subjective evaluation.\\n\\n| |IMPUS|DiffMorpher|Slerp|Ours|\\n|---|---|---|---|---|\\n|Preference|17.16%|14.89%|7.82%|**60.13%**|\\n\\n> In my understanding, the diffusion process of DDIM is to directly add noise without using the trained network.\\n\\nIndeed, during training, the forward diffusion process directly adds noise to the image without using the network, and the model is trained to iteratively denoise noisy images back to clean ones.\\n\\nHowever, during the reverse diffusion steps, we operate conversely: starting from the image itself, we use the pre-trained UNet's noise prediction at each step to progressively recover the noise latent corresponding to that image (i.e., DDIM inversion).\\n\\n\\n> For the experiment \\u201cOurs(Var-A)\\u201d in line 469, what does it mean to \\u201comit the original attention mechanism\\u201d? Completely remove the attention?\\n\\nIn line 469, the term \\\"Ours (Var-A)\\\" denotes the approach that solely employs spherical interpolation (as in Eq. (3)) in both the reverse diffusion and forward denoising steps. To prevent any potential confusion, we will provide further clarification on this in the revised version.\"}", "{\"comment\": \"Some of my concerns remain unaddressed after reading the authors' responses.\\n\\nFirst, there should be a convincing explanation of the inconsistency between the qualitative and quantitative comparisons of \\\"DiffMorpher\\\" and \\\"spherical interpolation\\\".\\n>This paper presents both the qualitative and quantitative evaluations. In the figures, the \\u201cspherical interpolation\\u201d looks better than the existing method DiffMorpher. But based on the quantitative evaluation in Table 1, \\u201cspherical interpolation\\u201d performs worse than DiffMorpher. Therefore, either the visual results are cherry-picked, or the evaluation metrics cannot adequately measure the results.
Without a human perceptual study, it\\u2019s difficult to get a conclusion when comparing the two methods.\\n\\nSecond, there should be more information about the user study, i.e. how many results are shown to the volunteers?\\n\\nThird, I still don't understand the reverse diffusion steps. How does the pre-trained UNet obtain noise pattern based on the original natural image itself? The UNet is trained to predict noise from noisy images.\"}", "{\"summary\": \"This paper presents FreeMorph, a tuning-free method for image morphing that accommodates inputs with varying semantics and layouts. Several modules are proposed to enhance the quality of image interpolants, ensuring a smooth transition that remains consistent with the given images at both ends.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"1. This method does not require training, making it more accessible to regular users.\\n1. The authors have conducted extensive experiments to identify the best combination of modules and hyperparameters.\\n1. Although this is a task that is difficult to evaluate, I can appreciate the authors\\u2019 efforts to provide both quantitative and qualitative results.\", \"weaknesses\": \"Related to the task\\n1. There are some unverified claims at the core of this task. For example, see lines 254-255.\\n1. What constitutes good image morphing is somewhat underdefined. The intended effect that this paper seeks to achieve feels too vague and abstract, making it challenging to identify a clear definition of effective image morphing.\\n1. Even with quantitative metrics like PPL, high scores do not necessarily indicate results that align with human preferences, which can be highly subjective.\\n\\nRelated to the proposed method\\n1. While the authors have introduced several modules, their use appears to have been determined empirically. 
The selection of hyperparameters and the final module composition could benefit from clearer justification, as they seem somewhat arbitrary at times.\\n1. Given that the method involves multiple components, the presentation could be more streamlined. It can be difficult to understand the specific problem each module aims to address. (Please see questions below for further clarification.)\\n1. The generated samples have pretty visible ghost artifacts. To me, that would not be considered good image morphing.\", \"questions\": \"1. What's the intuition behind Equation 5? How does it relate to identity preservation?\\n1. How does Equation 6 connect to motion flow? What do you mean by \\\"step-oriented\\\"?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"# Response to reviewer `#XAhp`\\n\\n> Attention Interpolation for Text-to-Image Diffusion (He et al., 2024) is cited in the related work section but never compared to, despite the methods seeming quite similar.\\n\\n> Similarly, Smooth Diffusion (Guo et al., CVPR 2024) is quite closely related and could also be compared to, although it requires additional tuning and thus seems a lot less relevant as a baseline.\\n\\nThank you for the reviewer's suggestions. We have conducted experiments to compare our approach with the two methods mentioned and find that these methods can only handle images with similar layout and semantics.
Instead, our method can efficiently handle this case. Additionally, it's important to note that \\u201cAttention Interpolation for Text-to-Image Diffusion\\u201d relies on the IP-Adapter for image morphing, which compromises training efficiency. Furthermore, \\u201cSmooth Diffusion\\u201d necessitates tuning, making it slower and less efficient than our method. We will add more discussion and comparisons with these methods in the revised edition.\\n\\n> Eq. 3 looks wrong to me. Defining it with j as an integer in the range [1, 5] does not result in a slerp, as far as I know.\\n\\nIn our experiments, we configured the pipeline to generate five transition images. That is why we compute five sampling points (with j as an integer in the range [1, 5]) in Eq. (3), each corresponding to a transition image. Moreover, our approach allows setting j to any value; as j increases, Eq. (3) increasingly approximates a smooth slerp operation.\\n\\n> I think using a DCT instead of an FFT could help improve quality by getting rid of the correlation between the pixels on the left and right (and top and bottom respectively) edge of the image.\\n\\nThank you for the suggestion. We have carried out experiments to contrast the DCT with the FFT and observe that the DCT performs on par with the FFT. This may be because our Gaussian noise injection contributes only marginally to the final result.\\n\\n> The number of decimal digits in tables is excessive. Please reduce it to a reasonable number (more than 3 or 4 digits should not be necessary in most cases) to aid readability.\\n\\nThanks for the suggestion from the reviewer. We have made refinements to the decimal digit precision in Tab. 1 and Tab. 2.\\n\\n\\n> There are a bunch of small mistakes. I'd recommend going over the paper and fixing them for a potential camera-ready version, as the current state looks a bit sloppy. Some examples: l. 122: \\\"ur\\\" -> \\\"Our\\\", l. 130: \\\"Recently, Recently,\\\"; l.
132: \\\"wang2023interpolating\\\"; l. 194: wrong citation for LDM, l. 211: I think DiffMorph should be attributed to different people.\\n\\nThanks for the suggestion from the reviewer. We have diligently proofread the paper and rectified those minor errors in the revised edition.\"}", "{\"summary\": \"The paper proposes a method for image interpolation with off-the-shelf T2I diffusion models. By modifying the attention modules using two proposed methods, the continuousness of the interpolations is improved, without having to fine-tune the diffusion model. The authors also introduce auxiliary tricks to further improve interpolation performance. The contribution of the various proposed components to improved interpolation performance is validated using an extensive ablation study, and the overall performance is compared qualitatively and quantitatively with some alternative approaches.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The quantiative and qualitative experiments, especially the ablation study seem well-executed and thorough, with the exception of one important method missing (see weaknesses). I really appreciate the authors thoroughly evaluating across different levels of challenging interpolations and proving a large number of qualitative comparisons in the appendix.\\n\\nThe proposed method seems reasonable in construction and effective in practice.\\n\\nIn the comparisons performed by the authors, their proposed method seems to provide a clear improvement over the methods they compare against.\\n\\nBesides small mistakes (see questions), the paper is generally well-structured, reasonably well-written and easy to follow.\", \"weaknesses\": \"Attention Interpolation for Text-to-Image Diffusion (He et al., 2024) is cited in the related work section but never compared to, despite the methods seeming quite similar. 
Given that the authors were aware of this work at the time of submission, I would expect the authors to discuss how their and this method relate and incorporate it into quantitative and qualitative comparisons.\\n\\nSimilarly, Smooth Diffusion (Guo et al., CVPR 2024) is quite closely related and could also be compared to, although it requires additional tuning and thus seems a lot less relevant as a baseline. Demonstrating that the proposed method performs similarly to one requiring extra fine-tuning would strengthen the paper substantially.\", \"questions\": \"Eq. 3 looks wrong to me. Defining it with j as an integer in the range [1, 5] does not result in a slerp, as far as I know.\\n\\n3.4: I think using a DCT instead of an FFT could help improve quality by getting rid of the correlation between the pixels on the left and right (and top and bottom respectively) edge of the image.\\n\\nThe number of decimal digits in tables is excessive. Please reduce it to a reasonable number (more than 3 or 4 digits should not be necessary in most cases) to aid readability.\\n\\nThere are a bunch of small mistakes. I'd recommend going over the paper and fixing them for a potential camera-ready version, as the current state looks a bit sloppy. Some examples: l. 122: \\\"ur\\\" -> \\\"Our\\\", l. 130: \\\"Recently, Recently,\\\"; l. 132: \\\"wang2023interpolating\\\"; l. 194: wrong citation for LDM, l. 211: I think DiffMorph should be attributed to different people.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank all reviewers for their time and effort in reviewing our paper!\\n\\nBelow, we summarize the changes made according to the reviews:\\n\\n1. We compare our method with \\\"Attention Interpolation for Text-to-Image Diffusion\\\" and \\\"Smooth Diffusion\\\" to demonstrate our superiority (`#XAhp`).\\n\\n2. We discuss more about Eq. 
(6) for slerp (`#XAhp`).\\n\\n3. We show the difference between DCT and FFT and illustrate their influence on the generated image transitions (`#XAhp`).\\n\\n4. We update the decimal digits in the tables and rectify the writing typos (`#XAhp`, `#NcSG`).\\n\\n5. We discuss the nonlinearity of the multi-step denoising process (`#JPFT`).\\n\\n6. We clarify what is a good image morphing (`#JPFT`).\\n\\n7. We conduct user studies to evaluate different methods in a subjective manner (`#JPFT`, `#NcSG`, `#qmRm`).\\n\\n8. We provide further explanation and rationale behind the design of our proposed modules, offering a detailed justification (`#JPFT`).\\n\\n9. We explain the positive influence of Eq. (5) on identity preservation (`#JPFT`).\\n\\n10. We clarify the reason for the name of our modules. We also modify their names for more accuracy (`#JPFT`).\\n\\n11. We improve the caption for Fig.2 and Fig.3 (`#NcSG`).\\n\\n12. We clarify the components of our introduced benchmark and improve the writing (`#NcSG`).\\n\\n13. We clarify the process of reverse diffusion steps and forward denoising steps (`#NcSG`).\\n\\n14. We clarify the design of our experiments \\\"Ours (Var-A)\\\" (`#NcSG`).\\n\\n15. We illustrate the importance of smooth transition and identity preservation for image morphing (`#qmRm`).\\n\\n16. We clarify the difference between smooth transition and directional transition (`#qmRm`).\\n\\nWe sincerely thank all reviewers and the AC(s) again for their valuable suggestions, which have greatly helped strengthen our paper.\\n\\nIf you have any further questions, we would be happy to discuss them!\"}", "{\"summary\": \"This paper presents a novel tuning-free image morphing approach, which utilizes the pre-trained stable diffusion and manipulates its self-attention to achieve high-fidelity morphing. 
Specifically, it proposes a guidance-aware spherical interpolation and a step-oriented motion flow, applying them respectively in different steps of the reverse diffusion/forward denoising processes. Based on the qualitative and quantitative results, the proposed approach outperforms the existing methods.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"The proposed tuning-free morphing approach is straightforward and effective. The modified interpolation of the self-attention is able to produce more natural transitions between morphing sequences.\", \"A new dataset, Morph4Data, is presented and used in the evaluation. This dataset contains four categories, which helps in a detailed analysis of the image morphing methods.\", \"This paper presents and analyzes many baseline settings in the ablation study. This helps in understanding the specific role of each key design.\"], \"weaknesses\": [\"This paper is apparently not well prepared for publication. For example:\", \"There are many errors in spelling (\\u201cur\\u201d in line 122, \\u201cowever\\u201d in line 133, etc.) and formulations (missing \\u201c{\\u201c and \\u201c}\\u201d in line 179, etc.).\", \"In equation (7), since m has the same size as z, what does it mean by \\u201cm=1\\u201d?\", \"Both \\u201cDeepMorpher\\u201d and \\u201cdeepmorpher\\u201d appear in the paper.\", \"The image captions of Fig.2 and Fig.3 are too simple. Readers have to move to the main text to try to understand the figures.\", \"In the introduction, it mentions a comprehensive evaluation system, including a new dataset and specialized metrics. But in Sec 4.1, it says \\u201cfollowing IMPUS and DiffMorpher, we ... using the following metrics\\u201d. Where are the proposed specialized metrics?\", \"This paper presents both the qualitative and quantitative evaluations. In the figures, the \\u201cspherical interpolation\\u201d looks better than the existing method DiffMorpher.
But based on the quantitative evaluation in Table 1, \\u201cspherical interpolation\\u201d performs worse than DiffMorpher. Therefore, either the visual results are cherry-picked, or the evaluation metrics cannot adequately measure the results. Without a human perceptual study, it\\u2019s difficult to draw a conclusion when comparing the two methods.\", \"The proposed algorithm contains a reverse diffusion process and a forward denoising process. In my understanding, the diffusion process of DDIM is to directly add noise without using the trained network. But in Algorithm 1, the modified attention mechanism is applied in the diffusion process. There should be more details to explain how to implement this diffusion process. In addition, why do the forward denoising steps also have \\u201cfor t=1 to T\\u201d?\", \"For the experiment \\u201cOurs(Var-A)\\u201d in line 469, what does it mean to \\u201comit the original attention mechanism\\u201d? Completely remove the attention?\"], \"questions\": \"My main concerns are the evaluation metrics and the technical details about the reverse diffusion process. I'd like to increase my rating if there are convincing explanations or evidence in the rebuttal.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"# Response to reviewer `#qmRm`\\n\\n\\n> This paper targets two issues related to image morphing: identity loss and directional transition. However, I do not see the importance of these two issues. Why are they important and worth the effort for this task?\\n\\nFor instance, (1) For the second case in Fig. 5, IMPUS exhibits identity loss, where the third generated image deviates from the original identity; (2) In Fig.
16, IMPUS shows a lack of similarity between the second and third generated images, leading to unsmooth transitions.\\n\\n> I understand that the smooth transition is important, but is it the same as the meaning of directional transition?\\n\\nWe believe that these two terms are different. A smooth transition implies a gradual shift from the left image to the right image, maintaining continuity throughout. On the other hand, a directional transition signifies a progression in which there is no loss of identity or regression to a previous state.\\n\\n> What is the success rate? Are the results cherry-picked?\\n\\nOur success rate is very high and the results are not cherry-picked. To reinforce this, we conducted user studies in which 30 volunteers, comprising animators, AI researchers, and gaming enthusiasts aged between 20 and 35, participated. The outcomes of the user studies can be found below, showcasing the effectiveness of our proposed method through a subjective evaluation.\\n\\n| |IMPUS|DiffMorpher|Slerp|Ours|\\n|---|---|---|---|---|\\n|Preference|17.16%|14.89%|7.82%|**60.13%**|\"}", "{\"comment\": \"# Response to reviewer `#JPFT`\\n\\n> There are some unverified claims at the core of this task. For example, see lines 254-255.\\n\\nIn lines 254-255, we aim to emphasize that the multi-step denoising procedure is notably nonlinear, potentially leading to inconsistencies in the generated image morphing.\\n\\nTo demonstrate the nonlinear nature of the multi-step denoising process, we would like to refer the reviewer to Eq. (9) in the DDIM paper [1].\\n\\n> What constitutes good image morphing is somewhat underdefined. The intended effect that this paper seeks to achieve feels too vague and abstract, making it challenging to identify a clear definition of effective image morphing.\\n\\nAn effective image morphing outcome should exhibit gradual transitions from the initial (left) image to the final (right) image while preserving the original identities.
For instance, (1) In the second example depicted in Fig. 5, IMPUS displays an identity loss where the third generated image deviates from the original identity; (2) Within Fig. 16, the lack of similarity between the second and third generated images results in abrupt transitions, detracting from smoothness.\\n\\n> Even with quantitative metrics like PPL, high scores do not necessarily indicate results that align with human preferences, which can be highly subjective.\\n\\nWe concur with the reviewer's observation that existing quantitative metrics face challenges in accurately assessing result quality. In our research, we follow DiffMorpher and IMPUS to conduct PPL, LPIPS, and FID evaluations.\\n\\nTo enhance our comparative analysis and include human preferences, we further conducted user studies. Specifically, we invited 30 volunteers encompassing animators, AI experts, and gaming enthusiasts aged between 20 and 35 to choose their most favorable results. The results are presented below, demonstrating the effectiveness of our proposed approach from a subjective standpoint.\\n\\n| |IMPUS|DiffMorpher|Slerp|Ours|\\n|---|---|---|---|---|\\n|Preference|17.16%|14.89%|7.82%|**60.13%**|\\n\\n\\n> While the authors have introduced several modules, their use appears to have been determined empirically. The selection of hyperparameters and the final module composition could benefit from clearer justification, as they seem somewhat arbitrary at times.\\n\\nIn Fig. 2, our experiments involve substituting the key and value features within the attention mechanism. Through this analysis, we notice the pivotal role played by these features in facilitating seamless transitions and maintaining the inherent identity within the images. Consequently, the core enhancement in our design stems from the modifications made to the key and value features.\\n\\nThe design outlined in Sections 3.2 and 3.3 is crafted to handle their respective issues, aiming to achieve optimal performance.
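As a concrete aside on the spherical interpolation ("Slerp") referenced throughout these comparisons: below is a minimal, self-contained sketch of slerp between two flattened feature vectors, sampled at five intermediate points (j = 1..5). The vector contents and the t = j/6 schedule are illustrative assumptions on our part, not the paper's exact Eq. (3).

```python
import math

def slerp(v0, v1, t):
    """Spherical interpolation between two feature vectors (lists of floats)."""
    n0 = math.sqrt(sum(a * a for a in v0))
    n1 = math.sqrt(sum(b * b for b in v1))
    dot = sum(a * b for a, b in zip(v0, v1)) / (n0 * n1)
    theta = math.acos(max(-1.0, min(1.0, dot)))   # angle between the two vectors
    if theta < 1e-6:                              # nearly parallel: fall back to lerp
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    w0 = math.sin((1 - t) * theta) / math.sin(theta)
    w1 = math.sin(t * theta) / math.sin(theta)
    return [w0 * a + w1 * b for a, b in zip(v0, v1)]

# Five interpolated feature vectors (j = 1..5) between the two inputs' features:
feat_left, feat_right = [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]
frames = [slerp(feat_left, feat_right, j / 6.0) for j in range(1, 6)]
```

Unlike linear interpolation, slerp follows the great-circle arc between the two features, so intermediate frames keep roughly constant norm instead of shrinking toward the origin midway.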
Indeed, our pipeline's final iteration is a product of iterative experimentation. Despite its simplicity, our strategy proves effective and logical. Moreover, our approach surpasses existing techniques without necessitating extensive training.\\n\\n> What's the intuition behind Equation 5. How does it relate to identity preservation?\\n\\nIn Figure 6, we observe that the absence of Eq. (5) can lead to challenges in generating smooth transitions in extreme scenarios. Instead, relying on the key and value features generated through spherical interpolation, as described in Eq. (3), leads to gradual changes (see Fig. 2). Therefore, the key lies in determining the optimal method for this replacement of key and value features.\\n\\nThrough our experiments, we find that by averaging the results of spherical interpolations, we are able to achieve the best performance.\\n\\n> How does Equation 6 connect to motion flow? What do you mean by \\\"step-oriented\\\"?\\n\\nIn our \\\"step-oriented motion flow\\\" process, we compute the gradual transition from the initial (left) image to the final (right) image. Initially, we opted to term it \\\"motion flow\\\" to signify this progression. However, upon reflection, we recognize that \\\"motion flow\\\" typically refers to changes in motion within 2D or 3D spaces, potentially causing confusion. As a result, we will adjust this terminology in the updated version.\\n\\nThe alterations in attention features evolve progressively based on the step j. This is precisely why we coined it as \\\"step-oriented.\\\"\\n\\n[1] Denoising Diffusion Implicit Models. ICLR 2021.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"metareview\": \"This paper proposes a tuning-free method for image morphing that accommodates significantly different semantics and image layouts. 
The authors introduce two key techniques\\u2014guidance-aware spherical interpolation and step-oriented motion flow\\u2014that yield visually pleasing transitions between source and target images. However, in addition to multiple spelling and typesetting issues, the reviewers identified several technical concerns, particularly regarding comparisons with closely related research (e.g., T2I Diffusion and Smooth Diffusion), subjective evaluations, and the rationale behind key components. The authors provided feedback, yet three of them remained somewhat hesitant to recommend acceptance. The Area Chair also notes that external links were provided for some experimental results, but the risk of revealing reviewers' identities may exist. Although the fourth reviewer\\u2019s assessment (Reviewer qmRm) was more positive, the Area Chair observed discrepancies between the review\\u2019s tone and the final rating, reducing its overall weight in the decision. Taking all feedback into account, especially the reservations of the first three reviewers, the Area Chair has decided to reject the manuscript. The authors are encouraged to resubmit to a future venue, including all necessary supportive materials within the official review package rather than relying on external websites.\", \"additional_comments_on_reviewer_discussion\": \"During the review process, Reviewer XAhp raised concerns about the potential risk of identity disclosure when accessing third-party links. This concern could not be independently verified by the reviewers or ACs. Subsequently, the authors provided an alternative link with additional experimental results. However, the perceived risk may have discouraged reviewers from exploring these materials, which limited the authors' opportunity to address critical concerns and potentially sway reviewer opinions.\\n\\nThe AC strongly advises the authors to include all supportive materials directly within the review package in their next submission. 
Furthermore, Reviewer qmRm was unable to respond to the AC's request to double-check the review report and score. Consequently, the AC relied more heavily on the assessments of the first three reviewers when making the final recommendation.\"}", "{\"title\": \"Response to Reviewer NcSG\", \"comment\": \"Thank you again for your constructive feedback.\\n\\n***Q1: The explanation about the inconsistency between the qualitative and quantitative comparison of \\\"DiffMorpher\\\" and \\\"spherical interpolation\\\".***\\n\\na) In Table 1, LPIPS, FID, and PPL are computed by extracting features of adjacent image pairs through deep neural networks. While the results from \\\"spherical interpolation\\\" appear visually clearer than those from \\\"DiffMorpher,\\\" \\\"spherical interpolation\\\" often generates semantically irrelevant content. For instance, as illustrated in Fig. 5, the images generated by \\\"spherical interpolation\\\" lose the semantic essence of the motorcycle. Additionally, there are significant changes between adjacent images, including both foreground and background elements, which result in larger feature variations between adjacent image pairs. In contrast, despite introducing noticeable image artifacts, \\\"DiffMorpher\\\" effectively preserves the semantic consistency between the source and target images. Furthermore, the transitions between adjacent images exhibit smoother background changes and maintain similar semantic content. These factors contribute to \\\"DiffMorpher\\\" outperforming \\\"spherical interpolation\\\" in Table 1.\\n\\nb) In the Supplementary Material, we provide additional generated results. These results show that for cases with similar semantics and layouts, \\\"DiffMorpher\\\" produces superior outputs compared to \\\"spherical interpolation\\\". 
MorphBench, an evaluation dataset constructed by \\\"DiffMorpher,\\\" is composed of images with similar semantics and layouts, which aligns well with the strengths of \\\"DiffMorpher\\\". Consequently, \\\"DiffMorpher\\\" achieves better performance than \\\"spherical interpolation\\\" on MorphBench. On the other hand, Morph4Data, our newly proposed evaluation dataset, includes cases with differing semantics and layouts. On Morph4Data, \\\"spherical interpolation\\\" achieves a better FID score than \\\"DiffMorpher\\\". This also demonstrates that our proposed evaluation datasets are more comprehensive.\\n\\n***Q2: more information about the user study.***\\n\\nWe provided a total of 50 examples to 30 volunteers for evaluation. Each example included the results of our method alongside the results generated by three other comparison methods.\\n\\n***Q3: The reverse diffusion steps.***\\n\\nDiffusion inversion is a technique used in image editing and image inpainting. Its goal is to start from a given image $x_0$ and progressively \\\"add noise\\\" to revert it back to the initial noise state $x_T$.\\n\\nHere, we use DDIM as the example for illustration. Instead of generating $x_{t-1}$ from $x_t$ via the DDIM inference formulation: \\n\\n$$\\nx_{t-1} = \\\\sqrt{\\\\bar\\\\alpha_{t-1}}(\\\\frac{x_t-\\\\sqrt{1-\\\\bar\\\\alpha_{t}}\\\\epsilon_\\\\theta(x_t)}{\\\\sqrt{\\\\bar\\\\alpha_{t}}})+ \\\\sqrt{1-\\\\bar\\\\alpha_{t-1}-\\\\sigma_t^2}\\\\epsilon_{\\\\theta}(x_t) + \\\\sigma_t\\\\epsilon_t,\\n$$\\n\\nwe generate $x_t$ from $x_{t-1}$ using the reversed version of the above formula: \\n\\n$$\\nx_t = \\\\sqrt{\\\\frac{\\\\bar\\\\alpha_t}{\\\\bar\\\\alpha_{t-1}}}x_{t-1}+\\\\sqrt{\\\\bar\\\\alpha_t} (\\\\sqrt{\\\\frac{1}{\\\\bar\\\\alpha_t}-1} - \\\\sqrt{\\\\frac{1}{\\\\bar\\\\alpha_{t-1}}-1})\\\\epsilon_{\\\\theta}(x_{t-1}).\\n$$\\n\\nHence, by inputting $x_0$ into the pre-trained U-Net, we can obtain the predicted noise $\\\\epsilon_{\\\\theta}(x_{0})$. 
Using the above formula, we can then compute $x_1$. By iteratively applying this process, we can derive the initial noise state $x_T$ from a given image using the pre-trained U-Net. For more detailed information, please refer to [1, 2].\\n\\n[1] EDICT: Exact diffusion inversion via coupled transformations. CVPR 2023.\\n\\n[2] Prompt-to-prompt image editing with cross attention control. ICLR 2023.\"}", "{\"comment\": \"Thank you for the response.\\n\\nRegarding the main concerns I mentioned, you mention that you performed additional experiments, but I cannot locate them either in the revised main PDF or in the supplementary material. Could you please point me to where you show the quantitative/qualitative results from these experiments?\"}", "{\"summary\": \"This paper proposes FreeMorph, a tuning-free pipeline that can generate transitions between two images within 30 seconds. It proposes two components, guidance-aware spherical interpolation and step-oriented motion flow, which yield smooth transitions between source and target images, even for examples with differing semantics. Both visual and quantitative evaluations are provided.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"I think this paper is above the bar, with good results and reasonable techniques. Experiments are well designed, and limitations are discussed.\", \"weaknesses\": \"Some motivation parts are not that clear to me; see below.\", \"questions\": \"This paper targets two issues related to image morphing: identity loss and directional transition. However, I do not see the importance of these two issues. Why are they important and worth the effort for this task?\\n\\nFor example, when identity is not preserved, what would the results look like? Similarly, what is the issue when directional transition is not preserved? 
I understand that the smooth transition is important; is it the same as the meaning of directional transition?\\n\\nThese are important questions to me when reading the introduction. I can see cool examples in the intro, but cannot clearly figure out the core issues that this paper handles. I tried to resolve this by looking at examples from other compared methods, and some of them still look OK during the transition.\\n\\nWhat is the success rate? Are the results cherry-picked?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
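For readers cross-checking the DDIM inversion formulas quoted in the response to Reviewer NcSG above, here is a minimal NumPy sketch of the invert-then-denoise round trip. Everything here is illustrative, not the paper's implementation: the linear schedule `abar`, the step count `T`, and especially the constant noise predictor `eps_theta`, a hypothetical stand-in for the pre-trained U-Net chosen so that the round trip is exact (with a real network, inversion is only approximate because the noise is predicted at $x_{t-1}$ rather than $x_t$).

```python
import numpy as np

# Toy cumulative-product schedule bar_alpha_0..bar_alpha_T, decreasing from 1.
T = 10
abar = np.linspace(1.0, 0.1, T + 1)

def eps_theta(x):
    # Hypothetical stand-in for the U-Net's noise prediction: a constant
    # field, which makes the invert/denoise round trip below exact.
    return np.full_like(x, 0.5)

def ddim_denoise_step(x_t, t):
    """One deterministic DDIM step x_t -> x_{t-1} (sigma_t = 0)."""
    e = eps_theta(x_t)
    x0_pred = (x_t - np.sqrt(1.0 - abar[t]) * e) / np.sqrt(abar[t])
    return np.sqrt(abar[t - 1]) * x0_pred + np.sqrt(1.0 - abar[t - 1]) * e

def ddim_invert_step(x_prev, t):
    """One inversion step x_{t-1} -> x_t (the reversed formula)."""
    e = eps_theta(x_prev)
    scale = np.sqrt(abar[t] / abar[t - 1])
    drift = np.sqrt(abar[t]) * (np.sqrt(1.0 / abar[t] - 1.0)
                                - np.sqrt(1.0 / abar[t - 1] - 1.0))
    return scale * x_prev + drift * e

x0 = np.array([0.2, -0.7, 1.3])
x = x0.copy()
for t in range(1, T + 1):      # invert:  x_0 -> x_T (progressively add noise)
    x = ddim_invert_step(x, t)
for t in range(T, 0, -1):      # denoise: x_T -> x_0
    x = ddim_denoise_step(x, t)
print(np.allclose(x, x0))      # round trip recovers the input
```

With the constant predictor the two steps are exact algebraic inverses of each other, which is a quick sanity check that the reversed formula is consistent with the forward one.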