Dataset columns (name: type, length or cardinality):

id                      string, fixed length 10
title                   string, length 3-179
track                   string, 1 distinct value
status                  string, 3 distinct values
keywords                string, length 2-2.39k
primary_area            string, 21 distinct values
author                  string, 501 distinct values
authorids               string, 501 distinct values
aff                     string, 1 distinct value
aff_domain              string, 1 distinct value
position                string, 1 distinct value
rating                  string, 355 distinct values
confidence              string, length 0-19
soundness               string, 642 distinct values
contribution            string, 596 distinct values
presentation            string, 782 distinct values
rating_avg              float64, range 0-9
confidence_avg          float64, range 0-5
soundness_avg           float64, range 0-4
contribution_avg        float64, range 0-4
presentation_avg        float64, range 0-4
corr_rating_confidence  float64, range -1 to 1
project                 string, 1 distinct value
github                  string, 1 distinct value
Review                  list, length 2-10
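The schema above is a standard Hugging Face `datasets` feature summary, so the records can be loaded and inspected with that library. Below is a minimal sketch; the repository id "org/iclr2025-openreview" and the "train" split name are placeholders, not the dataset's actual identifiers.

```python
# Minimal sketch of loading and inspecting this dataset with the Hugging Face
# `datasets` library. The repository id and split name below are placeholders.
from datasets import load_dataset

ds = load_dataset("org/iclr2025-openreview", split="train")  # placeholder id/split

record = ds[0]
print(record["id"], "-", record["title"])
print("per-review ratings:", record["rating"])       # e.g. "5;6;6;8"
print("rating_avg:", record["rating_avg"])           # e.g. 6.25
print("number of reviews:", len(record["Review"]))   # `Review` is a list of review dicts
```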
id: 00SnKBGTsz
title: DataEnvGym: Data Generation Agents in Teacher Environments with Student Feedback
track: main
status: Active
keywords: iterative data generation;llm agent;lifelong learning
primary_area: foundation or frontier models, including LLMs
rating: 5;6;6;8
confidence: 4;3;4;4
soundness: 2;2;4;3
contribution: 3;3;4;3
presentation: 3;3;2;2
rating_avg: 6.25
confidence_avg: 3.75
soundness_avg: 2.75
contribution_avg: 3.25
presentation_avg: 2.5
corr_rating_confidence: 0.132453
Review:
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- Is it possible to implement a random-policy baseline where you randomly chose a set of (naturally collected) datapoints from a data pool? The no-state baseline has flavor of this baseline but LLM-informed decisions could be biased. \n- Is it possible to compare this approach with active learning, in which instead of doing data generation, you do data *selection* and ask models to generate only synthetic labels, but not synthetic inputs?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "- Tackle a timely and interesting problem. \n- Provide the necessary infrastructure for the community to study the problem, opening up opportunities for future contributions. \n- Consider various data generation strategies,\n- Well-desgined experiments which demonstrate the effectiveness of the proposed approaches and conduct insightful analyses." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces Gym environments for data synthesis, framing the problem as sequential decision-making. In these environments, actions correspond to data-generation plans, and states represent the performance summary of a student model. The paper implements environments for three tasks: visual question answering (VQA), math, and code generation. Each environment offers three state representations: open-ended, skill-list, and skill-tree. Additionally, it proposes an LLM-based policy for data generation. Experimental results demonstrate that the LLM can make strategically effective choices based on environment-state information." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The paper is currently dense and difficult to follow. The introduction includes excessive implementation details, which detract from providing a simple, high-level intuition. Using a specific task example to guide readers through the core concepts would make the paper more accessible.\n\n* The paper focuses solely on the data generation plan rather than a full, end-to-end data generation process. It relies on a fixed, off-the-shelf data-generation engine that cannot be modified. The authors should admit this limitation and discuss potential strategies for overcoming it.\n\n* The quality of the data-generation engine can impact both student performance and the data-generation plan itself. Current approaches do not take into account the data-generation engine capabilities in the design of the policy or the evaluation of the student. For instance, poor student performance might result from the engine producing low-quality data on a specific skill, which could prompt the policy to avoid querying the engine for that skill.\n\n* The learning procedure can be resource-intensive. 
The authors should report the time, cost, and computing resources used for the experiments." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "N/A" }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "In the Experiments section, the authors mention that the baseline student model should not have been heavily post-trained so that there are rooms for further improvements. However, it would be beneficial to provide additional evidence and details to support the claim that the student's performance is improved due to the added data points rather than insufficient training. For instance, the training protocol involved a fixed 10-epoch training period; it remains unclear whether the model had reached convergence within this timeframe or if the introduction of new data points accelerated convergence. Further clarification on this aspect would enhance the overall validity of the results.\n\nAlso the result would be more sound if more quantitative and qualitative results for skill discovery is reported in this paper." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "This paper presents a novel and insightful perspective on the autonomous data generation problem, leveraging principles from reinforcement learning to conceptualize it as a sequential decision-making process. The authors provide a thorough explanation of this approach, the motivations behind and the underlying mechanics.\n\nThis paper proposed a modular framework/testbed that can be easily adapted to various tasks, showcasing its versatility and potential for widespread applicability. The authors demonstrate the effectiveness of their approach through experiments on 3 tasks of multiple modalities, including text, image, and code generation, yielding promising early results." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper presents a modular system for automated data generation, designed to minimize the need for human annotations. The proposed approach employs a reinforcement learning-inspired methodology, decomposing the process into a sequence of action predictions (data generation policy) based on state information (feedback from model errors) in an iterative manner. The effectiveness of this approach is demonstrated through three diverse tasks, encompassing text, image, and code generation across different modalities." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The experiment part should be conducted more thoroughly: specifically, creating a test set that incorporates newly generated data points from the data generation agent and reporting evaluation results for each retrained model over successive iterations would provide more comprehensive insights into the system's performance." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- How does the performance of the data generation agents change over longer iterations? The paper truncates experiments when performance increases, but it would be insightful to explore whether performance plateaus or continuously increase over extended training.\n- Is the total training data allocation fixed in each environment, or does it vary dynamically? The methodology mentions rebalancing but lacks clarity on how these allocations adjust adaptively based on feedback." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Good contribution to automated data generation for model improvement.\n- Clearly written with structured sections explaining each environment type and experimental results." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents DataEnvGym, a framework designed to simulate environments for data generation agents. These agents iteratively generate synthetic data to address weaknesses in student models, aiming to improve model performance across tasks like mathematics, programming, and visual question answering. DataEnvGym provides various structured environments (Open-Ended, Skill-List, and Skill-Tree) where data generation agents create targeted training examples based on feedback from the student model, offering a dynamic approach to automated model improvement." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The paper should clarify early on that the focus is on synthetic data generation for training purposes, as this underpins the motivation for the approach.\n- Important related works on algorithms using feedback from training to generate the next training environments are missing [1, 2, 3, 4].\n- Lines 460 - 465, I believe there is a typo whereby it says that “each experiment is truncated once the performance consistently decreases for multiple iterations”. Should it be “increases”?\n- Repeated runs of experiments without confidence intervals will be valuable, especially since the variance of performance seems to be very high.\n\n[1] Sudhakaran, S., González-Duque, M., Freiberger, M., Glanois, C., Najarro, E., & Risi, S. (2024). Mariogpt: Open-ended text2level generation through large language models. Advances in Neural Information Processing Systems, 36.\n[2] Todd, G., Earle, S., Nasir, M. U., Green, M. C., & Togelius, J. (2023, April). Level generation through large language models. In Proceedings of the 18th International Conference on the Foundations of Digital Games (pp. 1-8).\n[3] Zhang, J., Lehman, J., Stanley, K., & Clune, J. (2023). Omni: Open-endedness via models of human notions of interestingness. 
arXiv preprint arXiv:2306.01711.\n[4] Faldor, M., Zhang, J., Cully, A., & Clune, J. (2024). OMNI-EPIC: Open-endedness via Models of human Notions of Interestingness with Environments Programmed in Code. arXiv preprint arXiv:2405.15568." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "Yes, Discrimination / bias / fairness concerns", "Yes, Potentially harmful insights, methodologies and applications" ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "Limited Evaluation of Agent Architectures: The paper primarily focuses on introducing the DataEnvGym environment, but the evaluation of data generation agents is limited to relatively simple baseline policies. Exploring more sophisticated agent architectures, such as reinforcement learning agents (e.g., using policy gradient methods, Q-learning) or agents incorporating larger language models for planning and decision-making (similar to the approaches used in Shimabucoro et al. (2024), would substantially strengthen the paper. A systematic comparison of different agent architectures in terms of their effectiveness in improving student models, their sample efficiency, and their computational cost would provide valuable insights and contribute to a better understanding of the challenges and opportunities in automated data generation.\n\nLimited Analysis of Skill Discovery Quality: The paper briefly discusses the impact of oracle skills on student performance but doesn't delve deeply into the quality of the skills discovered by the proposed LLM-based method. A more thorough analysis is needed to understand the strengths and limitations of the skill discovery module. This could involve quantitative measures of skill quality, such as measuring their coherence, coverage, and relevance to the target task, or qualitative analysis by human experts. Investigating how the quality of the discovered skills affects the performance of the data generation agents and the resulting student models would strengthen the paper's contribution. Exploring alternative skill discovery methods (e.g., clustering-based approaches, topic modeling) and comparing their effectiveness with the proposed method would further enhance the analysis.\n\nLack of Comparison with Existing Methods: The paper positions DataEnvGym as a novel approach for model improvement, but it lacks a direct comparison with existing methods like curriculum learning (Bengio et al., 2009) or active learning (Settles, 2009). Evaluating how DataEnvGym compares to these established techniques in terms of student model performance, data efficiency, and computational cost would provide valuable context and highlight the advantages of the proposed framework. This would also clarify the specific niche and contribution of DataEnvGym within the broader landscape of model improvement techniques.\n\nLimited Discussion of Scalability: The experiments in the paper are conducted with relatively small datasets and models. 
It's essential to address the scalability of DataEnvGym to more realistic scenarios involving larger datasets, more complex models, and a broader range of skills. Discussing the computational challenges and potential optimizations for scaling the framework to more demanding settings would strengthen the paper's practical relevance. For instance, how can the computational cost of LLM-based data generation be reduced while maintaining data quality? How can the skill discovery and agent training processes be optimized for larger datasets? Addressing these questions would provide valuable insights for future research and practical applications." }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "Novel Problem: Automating data generation to improve models is a significant challenge with practical applications. This work directly addresses this problem with a novel approach.\n\nWell-Defined Framework: DataEnvGym is presented as a well-defined framework with clear components (trainer, evaluator, data generation policy, data generation engine) and different levels of structure (open-ended, skill-list, skill-tree). This structure makes the problem tractable and facilitates modular development and testing.\n\nMultiple Tasks and Domains: The inclusion of experiments across diverse tasks (mathematics, programming, visual question answering) and with different student models demonstrates the generalizability of the framework.\n\nPromising Results: The initial results showing improved student model performance across tasks and environments are encouraging and suggest the potential of this approach. The analysis of difficulty/rarity and training dynamics adds value.\n\nOpen-Source Release: The commitment to publicly releasing the code and leaderboard promotes reproducibility and encourages further research in this area." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces DataEnvGym, a novel testbed of teacher environments for developing data generation agents that iteratively improve student models by generating targeted training data. DataEnvGym frames data generation as a sequential decision-making task where an agent, comprising a data generation policy and engine, interacts with an environment that provides feedback from a student model. The agent's goal is to improve student model performance by generating training data based on student feedback (errors or weak skills). DataEnvGym offers multiple instantiations of teacher environments across three levels of structure: open-ended, skill-list, and skill-tree, each with varying levels of scaffolding support. Experiments across text and image-based tasks (mathematics, programming, and visual question answering) demonstrate that example agents within DataEnvGym can iteratively improve student model performance. Furthermore, the authors analyze the impact of state information, environment structure, and skill discovery quality on agent performance and student learning. The paper concludes that DataEnvGym, with its modular design and support for diverse tasks and student models, provides a valuable platform for developing and evaluating data generation agents, engines, and feedback mechanisms for automated model improvement. The code and leaderboard will be publicly released." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Limited Evaluation of Agent Architectures: The focus is primarily on the environment itself, with less emphasis on the architecture and training of the data generation agents. While baseline agents are provided, more sophisticated agent designs (e.g., reinforcement learning agents, agents leveraging larger language models) and their systematic evaluation would significantly strengthen the paper. How do different agent architectures compare in terms of effectiveness and efficiency? Are there specific architectural choices that are particularly well-suited for this task?\n\nOver-Reliance on LLMs for Data Generation: While using LLMs for data generation is a reasonable starting point, it raises concerns about the quality and diversity of the generated data. Exploring alternative data generation methods (e.g., data augmentation techniques, programmatic data generation) and comparing their effectiveness with LLM-based generation would be valuable. How robust is the framework to the quality of the generated data?\n\nLimited Analysis of Skill Discovery Quality: While the paper briefly touches upon the impact of skill discovery quality, a more thorough investigation is needed. How does the quality of the discovered skills affect the performance of the data generation agents and the student models? What are the limitations of the current skill discovery method, and how can it be improved? Quantitative analysis of skill quality (e.g., measuring coherence, coverage, and relevance) would strengthen the paper.\n\nLack of Comparison with Existing Methods: While related work on knowledge distillation and model weakness discovery is discussed, there's no direct comparison with existing methods for model improvement. How does DataEnvGym compare to techniques like curriculum learning or active learning in terms of effectiveness and efficiency? Including such comparisons would better contextualize the contributions and highlight the advantages of the proposed approach.\n\nLimited Discussion of Scalability: The experiments are conducted with relatively small datasets and models. How does DataEnvGym scale to larger datasets and more complex models? What are the computational challenges associated with training data generation agents in more realistic settings? Addressing these scalability concerns is crucial for practical applications." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024dataenvgym,\ntitle={DataEnvGym: Data Generation Agents in Teacher Environments with Student Feedback},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=00SnKBGTsz},\nnote={under review}\n}" }, "abstract": { "value": "The process of creating training data to teach models is currently driven by humans, who manually analyze model weaknesses and plan how to create data that improves a student model. Recent approaches using large language models (LLMs) as annotators reduce human annotation effort, but still require humans to interpret feedback from evaluations and control the LLM to produce data the student needs. Automating this labor-intensive process by creating autonomous data generation agents – or teachers – is desirable, but requires environments that can simulate the feedback-driven, iterative, closed loop of data creation. 
To enable rapid and scalable testing for such agents and their modules, we introduce DataEnvGym, a testbed of teacher environments for data generation agents. DataEnvGym frames data generation as a sequential decision-making task, involving an agent consisting of a data generation policy (which generates a plan for creating training data) and a data generation engine (which transforms the plan into data), inside an environment that provides feedback from a student. The agent’s end goal is to improve student model performance. Students are iteratively trained and evaluated on generated data, with their feedback (in the form of errors or weak skills) being reported to the agent after each iteration. As a general-purpose testbed, DataEnvGym includes multiple instantiations of teacher environments across three levels of structure in the state representation and action space, with varying levels of scaffolding support. More structured environments are based on automatically-inferred skills and offer a higher degree of interpretability and control over the curriculum. We support developing and testing data generation agents in three diverse tasks covering both text and images (mathematics, programming, and visual question answering) and test multiple student models. We find that example agents in our teaching environments can iteratively improve students across diverse tasks and settings. Moreover, we show that environments can teach different skill levels and can be used to test variants of key modules, pointing to directions of future work in improving data generation agents, engines, and feedback mechanisms. We will publicly release our code and leaderboard." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "iterative data generation", "llm agent", "lifelong learning" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/2fabe224ce80b58518b3e21579a58af4d807e6d7.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." 
}, "summary": null, "supplementary_material": null, "title": { "value": "DataEnvGym: Data Generation Agents in Teacher Environments with Student Feedback" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
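The aggregate columns can be recomputed from the per-review score strings in each record. The sketch below reproduces rating_avg and corr_rating_confidence (the Pearson correlation between a paper's review ratings and confidences) for the record above; the semicolon-delimited score format is taken directly from the data, and the rest is plain arithmetic.

```python
# Recompute rating_avg and corr_rating_confidence for the record above from
# its semicolon-delimited per-review score strings.
import math

def parse_scores(field: str) -> list[float]:
    return [float(x) for x in field.split(";")]

ratings = parse_scores("5;6;6;8")        # the record's `rating` field
confidences = parse_scores("4;3;4;4")    # the record's `confidence` field

rating_avg = sum(ratings) / len(ratings)              # 6.25
confidence_avg = sum(confidences) / len(confidences)  # 3.75

# Pearson correlation between rating and confidence across this paper's reviews.
dr = [r - rating_avg for r in ratings]
dc = [c - confidence_avg for c in confidences]
corr = sum(a * b for a, b in zip(dr, dc)) / math.sqrt(
    sum(a * a for a in dr) * sum(b * b for b in dc)
)
print(round(corr, 6))  # 0.132453, matching corr_rating_confidence
```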
id: 00ezkB2iZf
title: MedFuzz: Exploring the Robustness of Large Language Models in Medical Question Answering
track: main
status: Active
keywords: large language model;adversarial machine learning;automatic red teaming
primary_area: foundation or frontier models, including LLMs
rating: 3;3;5;6
confidence: 4;4;5;3
soundness: 3;2;3;3
contribution: 2;2;4;3
presentation: 3;2;3;3
rating_avg: 4.25
confidence_avg: 4
soundness_avg: 2.75
contribution_avg: 2.75
presentation_avg: 2.75
corr_rating_confidence: -0.272166
Review:
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. The authors need to provide further experiments and analyses to demonstrate the reliability of the questions generated by this method, such as incorporating the performance of human experts or introducing relevant methods for quality control of the questions in the methods section.\n\n2. Also, more analysis of the evaluation results should be included. For example, what are the main types of errors introduced by attacks across different turns? Which specific diseases or problem types is the target LLM less robust against? By supplementing these analyses, further insights can be provided for the development of medical LLMs." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "+ This paper examines the robustness of LLMs in the clinical decision-making process, a critical aspect of their application in the medical domain.\n\n+ The evaluation results demonstrate that current LLMs lack robustness in the clinical decision-making process, offering valuable insights for the development of medical LLMs." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper investigates the robustness of large language models in handling medical QA tasks by introducing a new evaluation method, MedFuzz. For each multiple-choice question in the original benchmarks, MedFuzz uses an LLM (referred to as the attacker LLM) to reformulate questions by adding patient characteristics that may introduce social bias without affecting the clinical decision-making process. If the target LLM answers correctly, the attacker LLM is prompted to generate additional distracting questions based on the target LLM’s feedback. Additionally, a non-parametric statistical significance test was developed by prompting the attacker LLM to create questions with patient characteristics that avoid social bias. Using this evaluation method, the authors tested seven LLMs and found a significant performance drop across all models. Moreover, they observed that when current LLMs answer incorrectly, they tend not to reference the added biased information, indicating inconsistency in faithfully adhering to the clinical decision-making process." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "+ A major weakness of this paper is the faithfulness of the reformulated questions. The proposed MedFuzz method relies solely on prompt engineering with the attacker LLM (GPT-4) to modify original MedQA questions, making the attack process difficult to control. The attacker LLM may potentially alter critical information in the original questions, resulting in less reliable reformulated questions. 
The example in Section 3.1 also demonstrates that the attacker LLM added extensive information about the patient’s family medical history, consultation history, and medication history. These details are highly relevant in real clinical diagnosis and can significantly influence a doctor’s assessment of the patient’s condition.\n\n+ Moreover, although the authors propose a non-parametric statistical significance test, they do not provide the full distribution of p-values across the MedQA benchmark. In line 485, they note that for the successful attacks they selected, the p-values are <1/30, 0.1, 0.16, 0.5, and 0.63. Here, the p-value represents the probability that a control fuzz is more challenging than the original fuzz. Therefore, cases with p-values of 0.5 and 0.63 suggest that the performance decline in the target LLM is due to the perturbations themselves, rather than social bias.\n\n+ For the study of target LLM's faithfulness, it is important to also study the proportion of CoT that mentions the critical information in the original MedQA benchmark for comparison with the results provided in Figure 2B. Additionally, the authors should provide more information to help readers understand the specific process of this study. For example, how many cases were analyzed? Was the determination of whether fuzzed information was included made manually, or was an automated algorithm used?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "NA. Authors have provided an ethics statement in the draft as well." }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "•\tThe authors can clarify how their approach to adversarial attacks differs from the misinformation approach in [1].\n\n•\tThe authors can clarify why unfaithfulness of generated responses is a crucial dimension to consider.\n\n•\tSection 2.2 Lines 104: The authors mention “two ways” in which MedFuzz differs from other adversarial ML approaches, though only one distinction is clear in the draft. I’m assuming the second way is the use of semantically coherent changes to the text. These few lines can probably be rephrased to add clarity.\n\n•\tThe authors have conducted their experiments on the MedQA dataset and taken advantage of a constraint imposed in the curation of this dataset. The authors could potentially add broad guidelines to expand on the fuzzing idea for other medical datasets. \n\n•\tHow can the authors ensure that the GPT-4 generated attack retains the same answer as the original QA pair being perturbed? Is there a possibility to evaluate this with the help of domain experts?\n\n•\tHow is the value of K set in Algorithm 1? This can be elaborated on in the Appendix section.\n\n•\tDoes the finding that LLM CoT does not mention the fuzzed information provide a way forward to identify adversarial inputs?\n\n•\tAnother interesting avenue would be to examine how different kinds of LLMs perform when used as the attacking/ target LLM. 
For example, can a smaller model generate adversarial inputs faster than a larger model like GPT-4?\n\n•\tMinor Comment: Is line 10 a duplicate of line 11 in Algorithm 1?\n\n[1] Han T, Nebelung S, Khader F, Wang T, Müller-Franzes G, Kuhl C, Försch S, Kleesiek J, Haarburger C, Bressem KK, Kather JN. Medical large language models are susceptible to targeted misinformation attacks. npj Digital Medicine. 2024 Oct 23;7(1):288." }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "•\tClarity: The paper is well written and easy to follow along. The authors have given adequate and clear examples at appropriate locations in the draft to aid readability. Good use of illustrations after consultation with domain experts (clinical collaborators in this case). The authors have also acknowledged the limitation of using contaminated training data.\n\n•\tOriginality: The idea to use social biases a clever way to incorporate real life information into the MedQA dataset.\n\n•\tQuality: The evaluation involves the use of proprietary vs open source and general purpose vs domain specific models. The experiment settings for reproducibility like temperature have been provided. The approach should be easy enough to reproduce. \n\n•\tSignificance: The authors have tackled a relevant problem that needs to be addressed, given the rapid pace of the domain." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes an automated red teaming approach to attack LLMs. They attempt this in the medical context by modifying medical Q&A datasets (specifically on MedQA), by violating assumptions that do not hold good in real life settings. The goal of MedFuzz is to make LLMs provide the wrong answer while ensuring that clinicians can still provide the right answer. The authors have identified a crucial problem with the evaluations of LLMs in the medical domain and provided a way to generate a more realistic dataset to aid subsequent LLM evaluation. The novelty lies in the proposed dataset from MedFuzz and the statistical evaluation used to check if the attack was successful." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "•\tIn the case of MedQA dataset, the authors have identified a social bias which may be present in real life situations, which are removed in the original benchmark. It is unclear how easy it is to identify and exploit such peculiarities in other medical benchmarking datasets like MedMCQA[1], PubMedQA[2] etc.\n\n•\tThe authors create the adversarial questions by an iterative multi-turn approach. Although the authors allude to the target LLM forgetting about previous Q&A attempts, would the approach be better validated if the evaluation is done in a single-turn manner?\n\n•\tThe authors, in step 4, only validate the statistical significance of 4 individual interesting cases. How would this change if considered for all successful cases?\n\n[1] Pal A, Umapathi LK, Sankarasubbu M. Medmcqa: A large-scale multi-subject multi-choice dataset for medical domain question answering. InConference on health, inference, and learning 2022 Apr 6 (pp. 248-260). PMLR.\n\n[2] Jin Q, Dhingra B, Liu Z, Cohen WW, Lu X. Pubmedqa: A dataset for biomedical research question answering. arXiv preprint arXiv:1909.06146. 2019 Sep 13." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "* Why was MedQA the only dataset used? There are a few other multiple choice medical QA ones liked MedMCQA, PubMedQA, and MMLU Clinical topics. Why MedQA?\n* Why was only GPT-4 used as the attacker LLM? Seemingly there are other open source ones that have just as much medical knowledge especially looking at the fine-tuned example. \n* The workflow for the Step 2 is quite a few iterative turns. Are they all necessary to generate grounded ones? Is this workflow generalizable to other LLMs? Or is it GPT-4 specific?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* The idea of the paper is interesting -- existing medical QA datasets are fairly simplified and may not appropriately represent real-world clinical settings. Thus, there is a need to understand how safe LLM usage is for the medical domain via robustness analysis.\n* The intuition for the adversarial biasing comes from medical domain understanding of the benchmark constructions.\n* Authors benchmark 3 closed LLMS and 4 open-source, medically fine-tuned LLMs." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes an adversarial method for evaluating LLM performance on medical question-answering benchmarks to assess their robustness in real-world clinical settings. The idea is to automatically generate new question-answer pairs from the existing benchmark such that they still represent realistic scenarios (e.g., including additional patient information) but the answers remain the same. The experiment results demonstrate that various baseline LLMs can be tricked into providing incorrect answers." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* One of the major claims of the method is that it will generate new questions that are semantically coherent and will not fool clinicians. However, there is no empirical proof that this is the case other than the analysis of a handful of case studies (one is presented in the main text). The prompt contains instructions for the attacker LLM it should not change the default answer but GPT-4 is not always guarenteed to follow the instructions or have all the correct medical knowledge appropriate.\n* Is there a reason why general domain adversarial prompting wasn't shown to be sufficient? A few studies are listed in 2.2 (first sentence) but no preliminary studies or experimental studies are shown to support this.\n* GPT-4 is chosen as the attacker LLM, but the question is why aren't other open-source models explored? 
In looking at OpenBIOLLM-70B performance, this also looks like a reasonable comparison to try and might even generate harder cases with less of the computation cost.\n* One of the comments in the introduction was the that existing benchmarks are not challenging enough including reducing real-life clinical situations to canonical multiple choice questions. Is there a reason why only one dataset was included and it was a multiple-choice one?\n* The statistical test is proposed to identify the significance of a successful attack using control fuzzes and to select the case studies, but what about the general distribution for the MedQA dataset? How stable is it broadly in identifying how significant a successful attack is? I understand this can be computationally intensive and costly but that also raises a bit of questions regarding the applicability of the method if it can't be done at scale. \n* The presentation could have been improved to provide some intuition at the beginning with potentially a simpler case study where less was added to make the LLM response change. Similarly, some of the text is written in a less digestible format. For example, the introduction of the test statistic could be improved by introducing notation first and then how you might compute it to understand what the statistic is looking to capture.\n* The citation format is incorrect, please use \\citep instead of \\cite as it detracts from readability." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "In the MedFuzz study, patient characteristics (PC) such as age, gender, race, and socioeconomic factors are added as perturbations to induce confusion in LLMs. One specific example presented by the authors is the use of “excessive hospital service usage by low-income patients.” This type of information could inadvertently reinforce social biases or perpetuate negative perceptions about certain demographic groups, rather than reflect clinical validity or fairness.\n\nWhen such characteristics are introduced as confusion-inducing factors, there is a risk that essential background information—critical for accurate diagnosis and treatment—could lead to biased outcomes. Therefore, further clarification and evaluation are needed to ensure that MedFuzz’s inclusion of such data as perturbations aligns with clinical relevance and fairness, and to mitigate any potential reinforcement of harmful social biases in the model." }, "flag_for_ethics_review": { "value": [ "Yes, Discrimination / bias / fairness concerns" ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. It would be helpful to have specific examples illustrating the risks posed by the simplified assumptions in traditional benchmarks within clinical settings. For instance, if omitting certain patient characteristics or clinical contexts could lead to diagnostic errors, providing these examples would clarify the importance of this study for readers and highlight its relevance.\n\n2. 
I am curious whether the patient characteristics (e.g., age, gender) and social bias information added as perturbations in MedFuzz genuinely act as confusion factors within actual clinical environments. These details often serve as crucial data points in clinical decision-making, so further explanation on how these elements were deemed appropriate as confusion-inducing factors would enhance the clinical validity of this study.\n\n3. A clear explanation regarding the rationale for setting the perturbation iteration count to K=5 would be beneficial. For instance, do you have experimental results comparing the initial attack (K=1) with subsequent attacks (K=5) to illustrate how the LLM maintains performance with increasing perturbation levels? Such a comparison could provide a more reliable basis for evaluating the impact of iteration count on robustness in this study." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. This paper introduces MedFuzz, a novel approach for testing the robustness of large language models (LLMs) in clinical contexts, which addresses the simplifications found in traditional benchmarks. MedFuzz is distinct in its approach by adding specific patient characteristics and social bias information to simulate the complexity of real-world clinical scenarios. This innovative framework offers a new direction for assessing LLM robustness by examining potential vulnerabilities in medical question-answering settings.\n\n2. The paper clearly explains the concept of MedFuzz and its application, particularly in using patient characteristics and bias elements to test model robustness. The experimental procedures and components are consistently described, making the study's objectives and methodology easy for readers to follow.\n\n3. MedFuzz presents a significant contribution as it provides a framework to evaluate how LLMs may perform in real clinical settings, beyond simplified benchmarks. This work has high practical relevance for the safe implementation of LLMs in healthcare by strengthening robustness assessment and reducing potential errors. It contributes an essential tool for enhancing LLM applicability in clinical practice, highlighting the importance of robustness in medical AI." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper proposes MedFuzz, a novel approach designed to evaluate the robustness of large language models (LLMs) in medical question-answering contexts. MedFuzz introduces controlled perturbations in input text by adding patient characteristics (PC) and social bias information to simulate real-world variability and challenges encountered in clinical settings.\n\nThe authors highlight the limitations of traditional medical benchmarks that often simplify clinical scenarios and position MedFuzz as an advancement towards “beyond-the-benchmark” evaluations. Specifically, the paper presents experiments assessing LLMs' responses to MedFuzz perturbations and evaluates the consistency of chain-of-thought (CoT) explanations under these conditions. The study offers a new perspective on testing LLM robustness by addressing potential risks in clinical decision-making when assumptions of canonical benchmarks do not hold." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. 
The paper defines robustness as the model’s ability to maintain performance in varied scenarios, which may lead to confusion with the concept of “generalization.” Typically, robustness refers to a model's resilience to perturbations or intentional adversarial attacks. To clarify the core aim of this study, a more explicit definition of robustness in the context of MedFuzz is recommended, particularly regarding how MedFuzz is designed to evaluate LLM robustness beyond generalization. Explaining how robustness is measured and differentiated from generalization could provide readers with a clearer understanding of the intended contribution.\n2. MedFuzz incorporates specific patient characteristics (e.g., age, gender, race, family history, background) as perturbations to assess LLM robustness; however, this approach may not accurately reflect clinical settings. Patient background information typically aids diagnostic decisions rather than introducing confusion. For instance, a patient’s age or medical history often plays a crucial role in diagnosis and would rarely be considered extraneous. Thus, further justification on why these characteristics are appropriate for simulating robustness under MedFuzz is recommended. Clarifying which patient data might clinically support decisions versus truly confuse the model would strengthen the study’s validity.\n3. The scale of text modification applied in MedFuzz risks excessive deviation from the original context, potentially impacting the robustness assessment. In section 3.1, for instance, added text can exceed 40% of the original passage, potentially leading to unintentional confusion beyond MedFuzz’s intended perturbation. A more focused perturbation approach—such as limiting changes to key sentences or reducing the proportion of added text—could provide a more accurate robustness assessment. This adjustment would align MedFuzz’s modifications closer to realistic conditions while still effectively evaluating LLM robustness.\n4. After applying MedFuzz, the Chain-of-Thought (CoT) explanations produced by the LLM were noted to omit important information, suggesting reduced fidelity. However, it is unclear whether this reduction in fidelity is due to MedFuzz’s perturbations or the LLM’s inherent limitations. It is recommended to first assess the fidelity and consistency of CoT explanations on the original benchmark without MedFuzz to identify the root cause of CoT discrepancies. Such an analysis would clarify whether the fidelity issues stem from MedFuzz or from the model itself, providing clearer insights into the reliability of the CoT explanations in real-world scenarios." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "An automatic redteaming method for testing the robustness of LLMs in medical question answering" }, "_bibtex": { "value": "@inproceedings{\nanonymous2024medfuzz,\ntitle={MedFuzz: Exploring the Robustness of Large Language Models in Medical Question Answering},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=00ezkB2iZf},\nnote={under review}\n}" }, "abstract": { "value": "Large language models (LLM) have achieved impressive performance on medical question-answering benchmarks. However, high benchmark accuracy does not imply robust performance in real-world clinical settings. 
Medical question-answering benchmarks rely on assumptions consistent with quantifying LLM performance but that may not hold in the open world of the clinic. Yet LLMs learn broad knowledge that could help the LLM perform in practical conditions regardless of unrealistic assumptions in celebrated benchmarks. We seek to quantify how robust LLM medical question-answering benchmark performance is to violations of unrealistic benchmark assumptions. Specifically, we present an adversarial method that we call MedFuzz (for medical fuzzing). MedFuzz attempts to modify benchmark questions in ways aimed at confounding the LLM. We demonstrate the approach by targeting unrealistic assumptions about patient characteristics presented in the MedQA benchmark. Successful \"attacks\" modify a benchmark item in ways that would be unlikely to fool a medical expert but nonetheless \"trick\" the LLM into changing from a correct to an incorrect answer. Further, we present a non-parametric test for calculating the statistic significance of a successful attack. We show how to use calculate \"MedFuzzed\" performance on a medical QA benchmark, as well to find individual cases of statistically significant successful attacks. The methods show promise at providing insights into the ability of an LLM to operate robustly in more realistic settings." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "large language model", "adversarial machine learning", "automatic red teaming" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/9086aab30bbc4180cbbf3c113e82c12eecdff119.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "MedFuzz: Exploring the Robustness of Large Language Models in Medical Question Answering" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
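Each entry of the Review list is a sparse dictionary: most keys are null, and populated review fields wrap their payload as {"value": ...}, as in the records above. A small helper for extracting only the populated fields of one review is sketched below; the trimmed example entry is illustrative, not a full record.

```python
# Extract only the populated fields from one entry of the `Review` list.
# Populated review fields are wrapped as {"value": ...}; most other keys are None.
def populated_fields(review: dict) -> dict:
    out = {}
    for key, val in review.items():
        if isinstance(val, dict) and "value" in val:
            out[key] = val["value"]   # unwrap {"value": ...}
        elif val is not None:
            out[key] = val
    return out

# Trimmed, illustrative review entry shaped like the data above.
review = {
    "TLDR": None,
    "authors": None,
    "rating": {"value": 6},
    "confidence": {"value": 3},
    "soundness": {"value": 3},
    "summary": {"value": "The paper proposes an automated red teaming approach ..."},
}
print(populated_fields(review))
# {'rating': 6, 'confidence': 3, 'soundness': 3, 'summary': 'The paper proposes ...'}
```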
id: 01wMplF8TL
title: INSTRUCTION-FOLLOWING LLMS FOR TIME SERIES PREDICTION: A TWO-STAGE MULTIMODAL APPROACH
track: main
status: Active
keywords: Large Language Models;Time-series Prediction;Multi-modal;Instruction-following
primary_area: learning on time series and dynamical systems
rating: 3;5;5;5
confidence: 3;4;3;3
soundness: 2;2;2;3
contribution: 2;2;3;3
presentation: 1;3;3;2
rating_avg: 4.5
confidence_avg: 3.25
soundness_avg: 2.25
contribution_avg: 2.5
presentation_avg: 2.25
corr_rating_confidence: 0.333333
Review:
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. For table 4, can you provide the same results, but for your model instead of only for TimeLLM? It would make it more obvious whether your model succeed on those tasks with incorrect textual information.\n2. For real world dataset, was the textual information always constant (as shown in section B.3) for each dataset? This would allow a finetuned model to fully ignore it, since it could bake said information in its weights anyway." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. It is good that zero shot examples of descriptions which have not been provided in the training set have been tested with. Without those, the narrow set of possible descriptions could have made it impossible to check whether the result quality came from the model overfitting on these descriptions or not.\n2. Training the model using generated data and computing how well the model follows the instructions is a relatively clean way to do a proof of concept of the idea, which is appropriate currently, as the field of using LLM and timeseries models together is still in its infancy." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The article describe a new model to incorporate textual information with a more traditional timeseries forecasting model. It does so by combining an embedding computed from the historical numerical data with an embedding computing from the textual information. The combined embedding is then used to generate the forecast.\n\nThe model is tested both on real-world data, where it shows competitive results, and on generated data, where it is shown to follow the instructions included in the textual information." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. There seems to be a mismatch between the described technique used to apply the modification (equation 3), and the examples shown (figure 3). According to the equation, the data in the forecast window should be a pure affine function, without any of the noise shown in figure 3.\n2. While the model is tested against other multimodal text+timeseries models, it should also be tested against pure LLM approaches: just plugging the text and the history in a prompt for GPT 4 or LLama 3, and looking at the generated output. While such an approach won't scale to long series, recent work have shown it to be surprisingly decent at forecasting under textual instructions. See: LLM Processes by Requiema 2024 for a slightly more complex approach, but there may be more appropriate references for the more direct one.\n3. Hyperparameters and training curiculum for the timeseries portion of the model are missing." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 1 }, "primary_area": null, "questions": { "value": "- **How would the proposed model perform without access to textual inputs or under noisy conditions?** If textual instructions are incomplete, inconsistent, or contain noise, how would the model's performance be affected? This scenario is particularly relevant in high-stakes areas like finance, where decision-making often involves dealing with imperfect information. What measures have been taken to ensure robustness against these issues, which are common in real-world data?\n- **How does the proposed framework address interpretability in practice?** The paper claims that incorporating textual instructions enhances interpretability, but there are no concrete demonstrations of how this contributes to meaningful insights for domain experts. Could you provide explicit examples or user studies that validate this claim? Without such evidence, how can the claim of improved interpretability be substantiated?" }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- A novel two-stage framework for integrating temporal and textual data.\n- A data generation workflow for instruction-based forecasting, compatible with LLMs.\n- Comprehensive ablation studies and comparative evaluations demonstrating the effectiveness of TITSP." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents Text-Informed Time Series Prediction (TITSP), a multimodal framework that integrates textual context with time series data using Large Language Models (LLMs). The approach involves two stages: AutoPrompter, which aligns time series data with text embeddings, and a refinement stage that incorporates task-specific textual instructions to enhance prediction accuracy and interpretability. While TITSP proves particularly effective for context-rich forecasting tasks, by demonstrating improved performance under specific settings against some other methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- **Technical Contributions are Incremental** The proposed approach lacks significant technical innovation. Integrating LLMs with time series is an incremental step rather than a groundbreaking contribution. The use of cross-attention and VQ-VAE offers no substantial improvement beyond established techniques.\n- **Poor Structure and Clarity** The paper is poorly organized, with unclear explanations and an incoherent flow. 
The motivation and rationale for the proposed method are inadequately communicated, and critical components like AutoPrompter are explained in a convoluted manner, hindering comprehension.\n- **Inadequate Experiments** Experimental validation is weak, relying heavily on synthetic datasets that limit the assessment of practical applicability. Comparisons to related state-of-the-art methods are lacking, and statistical significance testing is absent, making it difficult to validate the performance claims.\n- **Superficial Related Work** The related work section lacks depth and fails to properly differentiate the contribution from prior research. Key works are missing or insufficiently discussed, weakening the justification for originality.\n- **Numerous Typos and Lack of Polish** Frequent typos (e.g., citation mistakes in lines 54-55), poorly formatted figures (Fig. 6), and poorly constructed tables suggest a lack of careful proofreading, which detracts from the overall quality and credibility of the paper.\n- **Insufficient Practical Insights** The claimed interpretability through textual integration lacks demonstration. There are no real-world examples showing how domain experts would benefit from these insights, making the practical value of TITSP unclear." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "The paper does not raise any significant ethical concerns." }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see the weaknesses." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The paper presents a novel approach to time series forecasting by integrating textual instructions, which is a creative extension of existing multimodal time series models. The introduction of a two-stage framework and the focus on instruction-based forecasting address a significant gap in the field.\n2. The paper is well-written and logically organized. The figures and tables are clear and effectively support the text. The problem formulation and the description of the methodology are easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces Text-Informed Time Series Prediction (TITSP), a novel two-stage framework that enhances time series forecasting by integrating domain-specific textual information. The paper demonstrates that TITSP significantly outperforms traditional and existing multimodal approaches, improving both predictive accuracy and interpretability." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Given the synthetic data generation process, how can the authors ensure that there is no data leakage between the text data and forecasting targets? 
Could the authors provide a detailed explanation of the data generation process to address this concern?\n2. How practical is the proposed approach in real-world scenarios where textual instructions may not always be available or may be ambiguous? Could the authors discuss the potential limitations and challenges in deploying TITSP in practical applications?\n3. Has the model been tested on any other multimodal time series analysis tasks beyond forecasting? If not, what are the potential challenges in applying TITSP to other tasks?" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Questions:\n1. The choice of order compliance rate as an evaluation metric is intriguing. This metric appears specifically tailored to the instructions outlined in the paper, which may limit its applicability to real-world scenarios. Could you clarify the advantages this metric offers over existing metrics for evaluating forecasting performance?\n\nSuggestions:\n\n- Benchmark results against a broader selection of existing multimodal forecasting models to enhance comparative insights.\n- Include a detailed discussion of the dataset, covering aspects such as sample size, history length, and forecasting horizon.\n- If feasible, incorporate more complex textual cues in the experiments to better reflect real-world forecasting challenges." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The strengths include the relevance of the problem of text-aided forecasting and the novelty of the prompting method. The methodology section is comprehensive and well-described, and the techniques and experiments have been explained in detail and are easy to follow. The figures convey the overall idea and highlight the improvements over the no-instruction setup." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a novel two-stage framework for multimodal forecasting through historical data and textual cues that are useful for LLM-based forecasters. The multimodal framework is evaluated on numerous multimodal forecasting tasks. The paper provides a setup to include expert opinions for a forecasting problem." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The primary weaknesses of the paper are as follows:\n\n1. **Incomplete Literature Coverage**: Section 2.2 does not fully address relevant multimodal forecasting models, omitting key references such as UniTime ([https://dl.acm.org/doi/10.1145/3589334.3645434](https://dl.acm.org/doi/10.1145/3589334.3645434)).\n\n2. **Limited Comparative Analysis**: The results lack sufficient comparison with other multimodal forecasting models, reducing insight into how the proposed method performs relative to similar approaches.\n\n3. 
**Insufficient Dataset Description**: Essential dataset details, including sample counts, history length, and forecasting horizon, are not provided. Additionally, the impact of the forecasting horizon on prediction quality remains underexplored.\n\n4. **Simplistic Experimental Instructions**: The experimental instructions are overly simplistic, failing to reflect realistic scenarios. The limited set of training instructions may also suggest that simpler alternatives for instruction embedding could have been more effective.\n\n5. **Circular Evaluation**: The evaluation datasets have been tailored from existing datasets based on the training instructions intended for evaluation, which creates a circular reasoning issue that undermines the reliability of the evaluation setup. A similar statement about the order compliance rate metric can also be made.\n\n**Minor Issues:**\n\n1. The paper inconsistently uses closing quotes (\") instead of opening quotes (``) in multiple locations, including but not limited to lines 197, 203, and 213.\n\n2. Textual citations, rather than parenthetical citations, would be more suitable for the references in lines 117 to 128, enhancing the readability and flow of the text.\n\n3. Appropriate citations are not provided for the original dataset sources." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We propose TITSP, a multimodal framework that integrates textual knowledge with time series data using LLMs, significantly enhancing prediction accuracy and interpretability." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024instructionfollowing,\ntitle={{INSTRUCTION}-{FOLLOWING} {LLMS} {FOR} {TIME} {SERIES} {PREDICTION}: A {TWO}-{STAGE} {MULTIMODAL} {APPROACH}},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=01wMplF8TL},\nnote={under review}\n}" }, "abstract": { "value": "We introduce Text-Informed Time Series Prediction (TITSP), an innovative multimodal framework that integrates textual knowledge with temporal dynamics using Large Language Models (LLMs). TITSP employs a two-stage process that bridges numerical data with rich contextual information for enhanced forecasting accuracy and interpretability.In the first stage, we present AutoPrompter, which captures temporal dependencies from time series data and aligns them with semantically meaningful text embeddings.In the second stage, these aligned embeddings are refined by incorporating task-specific textual instructions through LLM. We evaluate TITSP on several multimodal time series prediction tasks, demonstrating substantial improvements over state-of-the-art baselines. Quantitative results reveal significant gains in predictive performance, while qualitative analyses show that textual context enhances interpretability and actionable insights. Our findings indicate that integrating multimodal inputs not only improves prediction accuracy but also fosters more intuitive, user-centered forecasting" }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Large Language Models", "Time-series Prediction", "Multi-modal", "Instruction-following" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/448e8a13abf683caa4fdc433d298a04dcb59bbe8.pdf" }, "presentation": null, "primary_area": { "value": "learning on time series and dynamical systems" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/4e0c464af7a349b9a73543bcd65624333bc923af.zip" }, "title": { "value": "INSTRUCTION-FOLLOWING LLMS FOR TIME SERIES PREDICTION: A TWO-STAGE MULTIMODAL APPROACH" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
029hDSVoXK
Dynamic Neural Fortresses: An Adaptive Shield for Model Extraction Defense
main
Active
Model Extraction Defense
alignment, fairness, safety, privacy, and societal considerations
1;5;5;6;8
4;3;3;3;4
2;3;3;3;2
2;3;2;3;4
2;2;2;3;3
5
3.4
2.6
2.8
2.4
-0.179029
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "- The authors claim their approach falls under the model extraction prevention defense category. Still, it works like a detection approach where the OOD detector is built into the model itself and thus relies heavily on the OOD data used for classification. The results shared by authors, to argue otherwise, are insufficient. I would ask the authors to include more experiments for this argument. \n- If the model is trained to early exit in the case of OOD samples, but the labels used are from the original neural network (essentially the last possible exit), what is the accuracy of the model on OOD data used for training the model? I suspect that the early exit model misclassifies OOD data with high confidence. If it were learning the original network’s output labels for OOD data, then the defense would not work for the hard-label setting as the attacker would still receive a large portion of the original network’s labels as output with some erroneous ones.\n- Regarding the exit point evaluation ablation study, I would like to know the accuracy at each exit and the exact number of ID and OOD samples passing through each exit instead of terms such as “over half,” etc." }, "rating": { "value": 1 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The proposed idea of implementing early exits as a defense against model extraction is novel and sound.\n- The method is easily adaptable to different architectures like ResNets and ViTs. \n- The use of entropy and information bottleneck theory is sound and well-suited to the goal of reducing extractable information for the attacker.\n- The experiments conducted cover various scenarios, models and datasets validating its generalizability. The performance comparisons with state-of-the-art defenses further strengthen its credibility. \n- The ablation study is thorough and captures various scenarios that highlight the effectiveness of the proposed method and its components." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents “Dynamic Neural Fortress” or DNF framework as a defense against Model Extraction Attacks. These attacks allow an adversary to create a copy of a pre-trained model accessible via black-box APIs, posing risks to proprietary models. The authors identify two main challenges in current defenses: (1) Neural Network architecture protection, a thing that is taken for granted in previously proposed attacks by using the same model architecture for victim and clone models, and (2) optimizing computational resources by avoiding allocation of equal resources to both benign and attack queries. \n\nThe authors implement an Early-Exit neural network wrapper (EENN) on top of a trained model. 
This wrapper facilitates random exits at earlier layers for attack queries while preserving model utility by making benign queries exit at later layers. The authors assume the usage of out-of-distribution (OOD) data by attackers in most cases, but there are some experiments conducted for in-distribution (ID) data as well. Using concepts from deep information bottleneck theory, the authors optimize mutual information between input data, latent features, and output labels for training the EENN model. \n\nThe proposed method has been evaluated via testing on various architectures and datasets, and compared against other state of the art defenses." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "The paper presents a technically sound idea, but the presentation is poor and needs major revisions. I am listing the weaknesses sectionwise. \n### Related work:\n- The related work is not organized properly, and some works are not cited in their appropriate sections, although they are cited later in the paper. For example, ActiveThief by Pal et al. (2020) [1] should be present under functionality stealing. \n- When a model extraction attack is data-based, the data might be natural or synthetic. For E.g., I can generate a dataset of 10,000 images from a pretrained generative network and use that for model extraction. This would still fall under the category of data-based model extraction. Data-free model extraction means that the data used for stealing is generated based on some information received from the victim. \n- Therefore, restructuring the related work section is necessary here. \n\n### Methodology:\n- The steps followed to convert a pre-trained victim model into an EENN are not easily followed. A network is trained on the ID data first. Then exit classifiers are added on top of it. Then, an OOD generator is used to generate OOD data, which is then passed through the original network without the exit networks for inference. The steps followed after this are not written in a coherent manner. One has to go through Algorithm 1 to get a clear picture of the training process.\n- Overuse of the term specific to start two consecutive paragraphs (224-235 and 236-241) and even inside the paragraphs when the sentences contained in both paragraphs are not specific at all. \n\n### Experimentation:\n- The authors should differentiate between the DFME and DBME settings in more detail. In line 387, it is assumed that the reader will know that they are talking about the DFME setting instead of the soft-label setting. This also invites confusion regarding the budget difference between the soft and hard label settings, where the budget should be the same for valid comparison. \n- For the DFME setting, one clone model architecture should be the same as the victim model for valid comparison (Resnet-34 in this case). Previous methods, like the prediction poisoning [2] method used by authors for comparison, have conducted experiments that keep the victim architecture for the stolen model. Moreover, the proposed method is not better than MeCo for the CIFAR-10 dataset. This should be analyzed and discussed.\n- For the DBME setting, using the random strategy for sampling images is not ideal. It has been shown in the ActiveThief [1] paper that using an uncertainty-based sampling method is more effective. 
\n- To showcase the effectiveness of the in-distribution defense, using JBDA as the attack strategy is fairly obsolete, and the paper cited needs to be corrected. The paper that proposed the attack is [3]. The authors should use either ActiveThief or Knockoff nets attack for evaluation as they are more recent and utilize intelligent sampling-based strategies for attack. If an actual attacker has access to in-distribution data, they will try to use the best strategy possible. \n- To demonstrate the defense’s effectiveness against model architecture stealing, the authors pick the latest attack by Carlini et al. but fail to show effectiveness against previously cited work, specifically “Towards reverse-engineering black-box neural networks. In International Conference on Learning Representations, 2018.” that perform attack on imagenet models. Considering that this was one of the major claims made by the authors, they should evaluate this aspect thoroughly. \n\n\n### Grammar:\nThe paper has incoherent paragraphs, spelling mistakes, and redundant sentences. Some of them are listed below:\n- Line 225, it should be “convert” instead of “covert.”\n- In Table 1 and Table 2, the spelling of label is incorrect. \n- Appendix D, Lines 778-779, same line repeated twice. \n\nCitations:\n- [1] Pal, Soham, et al. “Activethief: Model extraction using active learning and unannotated public data.” Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 34. No. 01. 2020.\n- [2] Orekondy, Tribhuvanesh, Bernt Schiele, and Mario Fritz. “Prediction poisoning: Towards defenses against dnn model stealing attacks.” arXiv preprint arXiv:1906.10908 (2019).\n- [3] Papernot, Nicolas, et al. “Practical black-box attacks against machine learning.” Proceedings of the 2017 ACM on Asia conference on computer and communications security. 2017." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Can you provide a formal definition or description of in-distribution and out-distribution data in this paper's setting? How to distinguish the normal user data (OOD) and attack data (OOD)?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "+ Good motivation. The authors adopt multi-exit architecture to defend architecture extraction attack, which is a well motivated and interesting idea.\n+ Extensive evaluation. The authors not only evaluate the defense effectiveness but also adaptive attacks." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces a new defense against model extraction attack for model architecture and model utility. The key idea is to use multi-exit neural network architecture and its random exit mechanism to protect the network's architecture while ensuring the efficiency. 
For benign queries, the authors train the early-exit model to distinguish OOD data (attack queries) from in-distribution data to ensure model utility.\nFinally, the authors show that DNF outperforms previous defenses and evaluate the adaptive attacks." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The assumption that attack data are OOD data, although widely adopted in prior work, should be more carefully justified. Meanwhile, as the model's training data are unknown to the user, benign queries may also be OOD data. DNF might decrease the model utility in this case.\n- The main part of the paper (Section 4) is somewhat hard to follow. I would suggest the authors simplify the notations or subscripts. Moreover, I also suggest the authors provide an overview figure to replace some descriptions.\n- Although the authors investigate the adaptive attacks, the adversary can still design more powerful attacks by exploiting the multi-exit model. Please discuss the potential vulnerability of the multi-exit architecture in more detail and compare with prior attacks on multi-exit networks.\n\n[1] Auditing Membership Leakages of Multi-Exit Networks. ACM CCS 2022.\n\n[2] Model Stealing Attack against Multi-Exit Networks. arXiv:2305.13584.\n\n[3] Mind your heart: Stealthy backdoor attack on dynamic deep neural network in edge computing. IEEE INFOCOM 2023.\n\n[4] Aegis: Mitigating Targeted Bit-flip Attacks against Deep Neural Networks. Usenix Security 2023.\n\n[5] Prediction Privacy in Distributed Multi-Exit Neural Networks: Vulnerabilities and Solutions. ACM CCS 2023." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "* Can the proposed defense be easily extended to other tasks and domains, such as object detection and NLP applications?\n\n* Does the number of exit points impact the performance of the proposed defense?\n\n* According to the design, earlier blocks are intended to reduce the model's predictive capability. However, it is notable that the ID dataset maintains high accuracy even after exiting at Exit2. This raises questions about the effectiveness of the defense mechanism. Moreover, the OOD dataset still retains 35% of its data after passing through the last two blocks. What is the observed defense effect in this case?" 
}, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* The first defense framework simultaneously offers three key protective benefits: protecting the functionality, and model architecture, while improving the efficiency of the inference.\n\n* An innovative design of the loss function is achieved by incorporating the Information Bottleneck (IB) theory.\n\n* The experimental design is well-structured and covers various scenarios, effectively validating the method's effectiveness." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The dynamic neural fortress (DNF) defense method introduced in this paper employs a dynamic early exit neural network to defend model extraction attacks. This approach effectively provides simultaneous protection for model functionality, network architecture, and enhanced defense efficiency against these threats. Extensive experiments demonstrate that the proposed defense method outperforms SOTA model extraction defenses in terms of both effectiveness and efficiency." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The claims regarding the protection of model architecture are overstated. Early Exit (EE) mechanisms indeed prevent attackers from executing the entire pipeline of DNN, therefore protecting the entire model architecture information from being leaked. However, the authors fail to provide how attackers might exploit this vulnerability to steal the model architecture when executing the entire network. Furthermore, EE mechanisms typically occur in the last few layers of DNNs; therefore, while the proposed approach may protect certain layers, it only works those that are unexecuted, leaving the majority of the neural network vulnerable (if there are effective attacks that can steal the model architecture). The authors should consider discussing these limitations in a dedicated section titled \"Limitations.\"\n\n* The definitions of out-of-distribution (OOD) and in-distribution (ID) data lack clarity. It is unclear why the authors consider OOD data to be \"illegal\" while ID data is deemed \"legal,\" and the rationale behind the corresponding loss term needs further explanation. Additionally, the authors aim to minimize the mutual information between $X_{id}$ and $Z_{id}$ in Eq. (3). However, this approach could potentially compromise the overall performance of deep neural networks (DNNs). The authors should provide additional clarification on why a reduced mutual information between $X_{id}$ and $Z_{id}$ does not impact the prediction accuracy.\n\n* Table 12 indicates that queries drawn from ID dataset exit at Exit2 over 90%, while the OOD queries only exit at about 75% at the same stage. This discrepancy seems inconsistent with the motivation behind two loss terms in Eq. (3) and Eq. (4). The authors should explain this discrepancy and discuss how it impacts the effectiveness of the proposed defense mechanism. 
I would like to suggest the authors provide a more detailed analysis of the exit patterns for ID vs OOD data.\n\n* The choice of a specific mutual information optimization method to achieve the defense objectives lacks a deeper theoretical explanation and intuitive justification, making it challenging to fully follow the principles behind the proposed method.\n\n* The experiments conducted to protect the model architecture appear limited, which does not sufficiently demonstrate the contribution related to model architecture protection mentioned in the paper. Consider adding additional experiments and evaluation metrics specifically designed to assess the robustness of the model architecture against potential theft. \n\n* It would be advantageous to include experiments that investigate the correlation between accuracy and exit points, providing a clearer visualization of the early exit mechanism's impact. I would like to suggest a graph showing accuracy vs. exit points for both ID and OOD data, or a statistical analysis of this relationship.\n\n* It seems that all datasets utilized are classification datasets, which makes it difficult to validate the effectiveness of the proposed method in other tasks and domains.\n\n* The notations in this article have been used repetitively, e.g., $r$." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. Concepts related to entropy and IB regularization are presented with some mathematical rigor, and learning objectives for both ID and OOD data are presented with entropy and IB regularization constraints; however, some additional insights into potential limitations are necessary – How would the strategy perform under adaptive attacks with a more varied and increasingly sophisticated OOD spectrum? And how would it impact models that aim for domain generalizability and to incorporate that OOD spectrum into their model's capabilities?\n2. How does this defensive method translate to multi-modal architectures like VLMs? Or multi-pipeline methods where each branch operates on different modalities? Or ML methods where different models are trained for different modalities and their outputs are combined (via some aggregation)?" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper presents an interesting defensive method to counter model extraction attacks. The paper's novelty lies in the core idea of using a dynamic exit strategy based on the input query. While early exit strategies have been explored in the context of neural networks, their application to defensive methods is novel.\n2. The paper is well written, and the core idea is simple to understand. The language is lucid, but see weaknesses 2 and 3.\n3. The paper is well organized with a clear progression between sections. 
Figure 1 greatly aids clarity in trying to understand the pipeline; however, see weakness 2.\n4. Experimental evaluation is robust and does seem to support the authors' claims that DNF achieves a substantial reduction in successful model cloning.\n5. This paper addresses a growing concern in the space of AI/ML model deployment – protecting against model cloning and safeguarding privacy and intellectual rights. This work does have the potential to help drive forward work in the defense space for these attack types." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Model extraction is a type of attack where an attacker tries to replicate a victim model to either:\n1. Estimate the model's parameters to emulate the model's performance.\n2. Copy the model's architecture, to recreate the model as-is.\n3. Get protected knowledge of the training data of the victim model, to better understand the data distribution it was trained on, so that other types of adversarial attacks can be done.\n\nExisting defense strategies are costly – they do not differentiate between benign and malicious queries from an attacker, and this form of defense allocates the same computational power to both. This paper provides a novel way to tackle model extraction attacks – Dynamic Neural Fortresses. \n\nThey propose an early-exit strategy wherein the victim model has built-in early-exit routes that the model can take and provide outputs that are OOD from its expected input-output combination. If an input query matches an early exit's threshold, the model inference exits with the output at that stage." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Despite strength 5, this method can be adapted widely only after these weaknesses are addressed and questions explored.\n2. The paper should make better use of visual elements – probably, at least in the appendix, add an example of what an attack query would look like, why the victim system would classify the query as an attack, what the victim model's behaviour would be on it, and how early it would exit.\n3. Math is useful and helps to aid the reader's understanding but at times also hampers readability. Especially in textual sections it breaks the flow of readers. Something that may help is to condense the math and limit it to equations that can be repeatedly referenced, or to have a table of symbol notations that readers can refer to.\n4. Some sections could use clearer explanations - the OOD Data Learning Objective, the underlying theory for Entropy and IB regularization. Maybe providing examples around mutual information or ER could help.\n5. The paper does provide some explanation about Entropy and IB regularization but could expand a little more on how mutual information reduction leads to lower predictability and can be leveraged for distinguishing between benign and malicious queries.\n6. Maybe a comparison with other information-theory-based approaches such as standard adversarial training would help drive home the advantages of DNF. Another set of comparisons that could strengthen the paper's results is against other dynamic architectures (e.g., 'BranchyNet').\n7. The paper uses ER to determine optimal exits from the model's inference. However, the choice of thresholds is only briefly discussed. Maybe an ablation study of various hyperparameters, exit thresholds, and entropy weights could help explain the choice of a certain threshold or explain the assumptions that the authors may have made." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Could you please estimate the impact of early exiting for IID samples? For instance, you might compute the misalignment in model outputs for IID samples when they exit early with respect to being forwarded into the entire network.\n- Could you please evaluate the defense against a worst-case attacker, enhancing the already implemented adaptive attacks with (partial) knowledge of the training data distribution?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- The paper presents a clearly novel idea to address a very relevant issue. Indeed, to the best of my knowledge, this is the first application of a multi-exit neural network to defend against model extraction attacks.\n- The proposed network architecture can also reduce the inference time during deployment.\n- The approach is very intuitive and well-justified.\n- The reported results are promising." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "In this paper, a defense against model stealing attacks (targeting either the model architecture or its functionality) based on a multi-exit neural network is proposed. The main idea is to output accurate prediction scores for ID data from the later network exits, as well as uninformative scores for OOD data from the earlier exits. To do so, for each network exit, a thresholded classifier is trained on the respective intermediate layer representation with a specifically designed loss, which maximizes the aforementioned objective using concepts from information theory. During the deployment, an exit is chosen for a sample when the maximum score of an exit classifier exceeds the respective threshold." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- 90% of IID samples exit in the first 3 exits. Although this can be viewed as a benefit (it reduces the inference time), on the other side, the defense mechanism will produce less informative outputs for those samples. The impacts of these effects should be clearly understood.\n- I appreciate the fact that the authors consider different types of attacks and try to implement adaptive ones. However, a best practice when dealing with security is to simulate a worst-case scenario against the strongest attack. This helps understand the limitations of the defense and estimate lower bounds of robustness in these settings - even if, in practice, they are unlikely to occur. In this case, the adaptive attacks should be implemented using model extraction techniques that rely on some knowledge about the training data distribution. 
This assumption is not too unrealistic, as it might happen that the attacker (who knows the domain on which the model is applied) is able to gather in-distribution data from public domains - for instance, if the model is a malware detector, it should be very easy to collect samples and also very likely to have some overlap between them and the training data used by the victim. In other cases, the attacker might possess a subset of or all the training data, and she could easily train its own model, but she is rather interested in reproducing the exact model functionality and reproducing its decision boundaries to build a surrogate model and use it for other attacks (like evasion ones, aka adversarial examples)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024dynamic,\ntitle={Dynamic Neural Fortresses: An Adaptive Shield for Model Extraction Defense},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=029hDSVoXK},\nnote={under review}\n}" }, "abstract": { "value": "Model extraction aims to acquire a pre-trained black-box model concealed behind a black-box API. \nExisting defense strategies against model extraction primarily concentrate on preventing the unauthorized extraction of API functionality. However, two significant challenges still need to be solved: (i) Neural network architecture of the API constitutes a form of intellectual property that also requires protection; (ii) The current practice of allocating the same network architecture to both attack and benign queries results in substantial resource wastage. To address these challenges, we propose a novel \\textit{Dynamic Neural Fortresses} (DNF) defense method, employing a dynamic Early-Exit neural network, deviating from the conventional fixed architecture. Firstly, we facilitate the random exit of attack queries from the network at earlier layers. This strategic exit point selection significantly reduces the computational cost for attack queries. Furthermore, the random exit of attack queries from earlier layers introduces increased uncertainty for attackers attempting to discern the exact architecture, thereby enhancing architectural protection. On the contrary, we aim to facilitate benign queries to exit at later layers, preserving model utility, as these layers typically yield meaningful information. \nExtensive experiments on defending against various model extraction scenarios and datasets demonstrate the effectiveness of DNF, achieving a notable 2$\\times$ improvement in efficiency and an impressive reduction of up to 12\\% in clone model accuracy compared to SOTA defense methods. Additionally, DNF provides strong protection against neural architecture theft, effectively safeguarding network architecture from being stolen." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." 
}, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Model Extraction Defense" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/bf422ad6c14f7dc2ca3d5a9bb6f184542a4a40f2.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "Dynamic Neural Fortresses: An Adaptive Shield for Model Extraction Defense" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
02DCEU6vSU
Gen-LRA: Towards a Principled Membership Inference Attack for Generative Models
main
Active
Privacy;Membership Inference Attacks;Generative Models
alignment, fairness, safety, privacy, and societal considerations
3;3;5;5;8
4;4;4;3;4
2;2;2;3;3
2;2;3;2;3
3;3;3;3;3
4.8
3.8
2.4
2.4
3
-0.054554
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Although I could follow the gist of the idea, some of the notation is not precisely defined. $p_{\\mathbb{P} \\cup x*}$. It might be clearer to skip Eq.s 3/4 and jump to Eq 5.\n1. Do you have any ideas for how to generalize this to forms of data that are not amenable to KDEs (even after applying PCA)?\n1. Section 5.3 is not clear to me. What exactly is the experiment here, and what is it supposed to demonstrate?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "The idea of performing MIA on a generative model by using likelihood ratio of generated data between models with and without the targeted example is very natural and efficient. I'm not surprised that it is very effective, as demonstrated in the experiments. The paper is mostly well-written and well-motivated, and to my knowledge original." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper describes a membership inference attack on generative models. It requires a set of examples generated by the model, S, and a set of reference examples, R, presumably from the same distribution as the data the model was trained on. Then to guess whether some new point x* was part of the training data, it estimates the likelihood ratio of S between a model trained on R vs. a model trained on $R \\cup \\{x*\\}$ using two kernel density estimators. It then thresholds on the likelihood ratio. Experimental results demonstrate impressive improvements compared to baseline models, particularly when evaluated with the critical \"true positive rate at low false positive rate\" metric." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I'm afraid the specific approach of using kernel density estimators will limit the method's applicability to low-dimensional tabular datasets. I would love to see this idea generalized to higher-dimensional data, probably using something that will scale better than KDEs." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1.The manuscript lacks a clear explanation of the practical utility of applying MIA to synthetic data. 
It remains unclear why synthetic data was chosen as the focus, rather than real-world or other benchmark datasets. The authors are encouraged to provide references in the Related Work section to strengthen the justification for studying synthetic data specifically. Expounding on the unique relevance of synthetic data to MIA would better demonstrate the necessity and contributions of this study.\n2.Several typographical errors and repeated references are present in the reference section, such as on Line 527 and Line 729. A thorough review of the references is recommended to ensure accuracy and consistency across all citations." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "This paper introduces the Generative Likelihood Ratio Attack (Gen-LRA), a novel membership inference attack specifically aimed at detecting privacy leakage due to overfitting in generative models. Unlike prior methods, Gen-LRA employs a likelihood ratio-based hypothesis testing approach to infer membership without requiring extensive knowledge of the model structure or parameters. By leveraging density estimation techniques, the authors assess whether synthetic data generated by a model is overfitting to specific training data points, particularly in regions with outliers. The authors demonstrate that Gen-LRA significantly outperforms existing MIA methods across various generative architectures and datasets, with particular success in scenarios with low false positive rates, highlighting the nuanced privacy risks associated with generative models." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces the Generative Likelihood Ratio Attack (Gen-LRA), a novel membership inference attack specifically aimed at detecting privacy leakage due to overfitting in generative models. Unlike prior methods, Gen-LRA employs a likelihood ratio-based hypothesis testing approach to infer membership without requiring extensive knowledge of the model structure or parameters. By leveraging density estimation techniques, the authors assess whether synthetic data generated by a model is overfitting to specific training data points, particularly in regions with outliers. The authors demonstrate that Gen-LRA significantly outperforms existing MIA methods across various generative architectures and datasets, with particular success in scenarios with low false positive rates, highlighting the nuanced privacy risks associated with generative models." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. The effectiveness of Gen-LRA depends heavily on accurate density estimation, which can be challenging in high-dimensional data settings. The use of kernel density estimation (KDE) or principal component analysis (PCA) for dimensionality reduction may limit applicability and accuracy. This limitation is critical because the success of the Gen-LRA method hinges on reliable density estimation, which becomes less accurate in high-dimensional spaces without significant computational expense. Inaccuracies here can undermine the method's robustness, making this the most pressing limitation.\n2. 
Although Gen-LRA performs well at low false positive rates, its reliance on outlier detection may lead to elevated false positives in datasets with inherently high variability or complex distributions. False positives can impair the practical applicability of Gen-LRA in privacy-sensitive contexts, as overly cautious results may lead to unnecessary restrictions on data release. \n3. Gen-LRA presumes that privacy leakage primarily stems from overfitting, potentially overlooking other forms of leakage that may not manifest as local overfitting. This could lead to incomplete privacy assessments, as the Gen-LRA approach might miss privacy vulnerabilities that do not align with the overfitting model. Expanding Gen-LRA’s scope to address other leakage types could enhance its overall utility." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "The paper focuses on membership inference attacks, which could be leveraged by adversaries to launch privacy attacks." }, "flag_for_ethics_review": { "value": [ "Yes, Privacy, security and safety", "Yes, Potentially harmful insights, methodologies and applications" ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "First, I would like to point out that I am not fully up-to-date on the literature regarding membership inference attacks, especially those involving tabular data. As a result, I may be unable to assess the novelty of this work and might not be familiar with the common settings examined in recent literature.\n\n1. The paper assumes the reference data is available to the attacker. This does not seem to be very realistic to me. Section 1 discusses that a common scenario for synthetic data release is that the data owner wants to release data for open research. This implies that such data is not available to the public before that (if such data is already available, then there is no motivation or value for the data owner to release an additional dataset). That means that the attacker does not have access to the reference data either. The prior work I knew often considered attacks that do not make such assumptions (e.g., https://arxiv.org/pdf/1705.07663 and https://arxiv.org/pdf/1909.03935).\n\n The paper claims that this setting is realistic in Section 2: \"We assume this in practice because this represents a plausible scenario for the owner of S as an attacker may be able to find comparable data in the real world...\" Unfortunately, I do not fully understand this example. It would be great if the authors could explain it in more detail in the rebuttal.\n\n2. Continuing on the above point, the paper needs to make it clearer what assumptions each of the baseline methods in Section 5 make. Which of them also makes the assumption that reference data is available to the attacker? This would clarify whether the claimed improvement comes from the relaxation of the assumptions or the fundamental advances of the algorithm itself.\n\n3. The paper only evaluates the proposed algorithm on tabular data. But this is not reflected in the title and abstract. 
By reading only the title and the abstract, the readers might be misled to think that the paper proposes and evaluates the attack on diverse data types.\n\n I think it is important to clarify that, as the proposed approach relies on kernel density estimation, which (as discussed in the paper) does not scale well with the data dimension. (The proposed approach relies on dimension-reduction techniques to tackle the issue.) Therefore, it is unclear if such a pipeline can work well on other more high-dimensional and complicated data such as images and text. \n\n4. How do you determine the kernel size and the type of the kernel in the experiments? Is the algorithm sensitive to that?\n\n5. Section 5 mentioned that \"For Gen-LRA, we found that the choice of k can have a small impact on the performance of the attack (See Appendix A.3), we therefore use the results of the best k choice for each run as the goal for an MIA is to characterize the maximal empirical privacy risk.\" I understand that choosing the best k could help \"characterize the maximal empirical privacy risk\". However, this table is mainly for comparing between different baselines. The comparison would be unfair if you chose the best hyper-parameter for your own approach while not doing that for the baseline methods.\n\n7. The discussion in Section 6.2 is nice, but it would be more self-contained if the paper could describe how DCR works in the main text.\n\n\nOther minor questions:\n\n1. Section 1: \"We demonstrate that Gen-LRA identifies a different source of privacy leakage relative to other commonly used MIAs.\" It would be better to clarify what \"the different source\" means here. I could only understand it after reading Section 5.\n\n2. Line 116 and 117: what are M and D? These notations do not seem consistent with what was used before.\n\n3. Line 127: typo on the left quotation mark\n\n4. Line 266: missing a )" }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* The proposed method is simple and effective.\n\n* In general, the writing of the paper is clear.\n\n* The paper has demonstrated results on many datasets and models." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a new approach to do membership inference attacks for tabular data generative models. The approach first estimates the distributions of (1) the reference samples plus the target sample and (2) the reference samples with kernel density estimation, and then computes the density ratio of synthetic samples over these two distributions. The intuition is that, if the target sample were used in training, the density of synthetic samples on distribution (1) would be higher. Results across various datasets and models show that the proposed approach yields better AUC-ROC and TPR at low FPRs." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* The assumption that the reference data is available to the attacker is too strong.\n\n* The title and the abstract do not reflect the scope and constraint of the method sufficiently." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Can you expand the related work to also include the shadow-modeling based MIAs? \n\n- To truly understand the contribution, could you implement the shadow-modeling based MIAs [1,2,3] as well and report their results? Right now, the Gen-LRA method seems to be better than the prior work you consider, and does so with limited assumptions for the attacker and with limited computational cost. How does this change when the attacker now (i) has knowledge of the training algorithm and (ii) has the computational resources to train shadow models? Could authors implement these shadow-model MIAs and report the results alongside Gen-LRA? This would help to position the method and its results in the literature, giving a clear understanding of the impact of certain assumptions and computational cost on the MIA results. \n\n- Similarly, the work on shadow modeling MIAs also discusses disparate vulnerability of outliers [1,2,3]. Stadler et al [1] finds outliers to be more vulnerable than randomly selected records, while Meeus et al [3] proposes a method to identify more vulnerable records. Could authors have more elaborate results for the outlier discussion (e.g. show MIA results for outliers vs random points across datasets) and relate these findings to prior work? While the fact that Gen-LRA focuses on outliers is distinct from distance-based methods, these findings might not be very different than the ones in shadow-modeling based MIAs." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "- Technically novel, and interesting, way to compute the membership inference inference signal from synthetic data. The method is theoretically grounded, computationally efficient and relies on limited assumptions for the attacker. \n- They show the method to outperform a range of MIAs from the literature\n- Comprehensive evaluation of the attack across 15 datasets\n- Authors include intuitive examples (eg Fig 1 and Sec 6.2) that are well explained and help the understanding of the paper." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper introduces Gen-LRA, a novel membership inference attack (MIA) methodology for evaluating privacy risks in synthetic tabular data. The authors propose a hypothesis testing framework that computes a likelihood ratio specifically targeted at identifying any local overfitting of the target record. The method requires minimal assumptions, just access to the released synthetic dataset and a reference dataset. They find their method to outperform baselines from the literature across 15 datasets. They further find their method to be particularly successful against outliers, in contrast with other MIAs from the literature." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "(More details see questions)\n\n- My main concern comes down to a lack of related work being discussed. A range of important works have studied MIAs against synthetic tabular data using shadow modeling [1,2,3]. While I understand that these works are computationally more expensive and additionally rely on the attacker's knowledge of the training algorithm, I find these works to be very relevant to position this paper and its findings. \n- Limited secondary insights with experimental depth. For instance, to make the claim that the method works better for outliers (especially compared to other methods), section 5.3 is mostly anecdotal. \n\n[1] Stadler, T., Oprisanu, B., & Troncoso, C. (2022). Synthetic data–anonymisation groundhog day. In 31st USENIX Security Symposium (USENIX Security 22) (pp. 1451-1468).\n\n[2] Houssiau, F., Jordon, J., Cohen, S. N., Daniel, O., Elliott, A., Geddes, J., ... & Szpruch, L. TAPAS: a Toolbox for Adversarial Privacy Auditing of Synthetic Data. In NeurIPS 2022 Workshop on Synthetic Data for Empowering ML Research.\n\n[3] Meeus, M., Guepin, F., Creţu, A. M., & de Montjoye, Y. A. (2023, September). Achilles’ heels: vulnerable record identification in synthetic data publishing. In European Symposium on Research in Computer Security (pp. 380-399). Cham: Springer Nature Switzerland." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "No further questions." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "The likelihood ratio that Gen-LRA estimates is novel to my knowledge, and seems to be closer to the likelihood ratio that would be theoretically optimal than what previous work has looked at. The paper is easy to understand, and the writing is generally polished.\n\nLooking at TPR @ low FPR is good practice, and too often neglected in the MIA literature. The paper could even highlight these results further: most of the AUC-ROC scores for all methods are close to random guessing, but Gen-LRA is much more accurate than random guessing at FPR = 0.001." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper proposes a novel membership inference attack on synthetic data generators called Gen-LRA, based on estimating a likelihood ratio between the synthetic data coming from a reference distribution vs. it coming from the reference distribution with a target point included. Gen-LRA is benchmarked againt several competing attacks on a variety of datasets, where Gen-LRA generally outperforms the competition." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Using the PCA+KDE density estimator for DOMIAS is not fully fair, since the DOMIAS paper used a more sophisticated density estimator which was found to perform better than the KDE. Of course, the same estimator could also improve the results of Gen-LRA, and PCA+KDE could be computationally cheaper, but these should be checked empirically.\n\nUsing PCA may limit the applicability of outlier overfitting detection for outliers with rare categorical values. For example, consider the detection of overfitting on datapoints of French people on the Adult dataset. PCA weights the input dimensions based on how much variance they have, so the indicator for being French would have a very low weight (<1% of the data is French). As a result, the PCA outputs would be very similar between French and non-French people, and Gen-LRA would not be able to detect overfitting affecting French people. Unless I'm completely mistaken about this phenomenon, this should be mentioned as a limitation.\n\nFor a similar reason, you should check if datapoints with high DCR score have similarities. It could be that they do, but UMAP is not considering these important. This could change the interpretation of Figure 2 that DCR does not target specific outlier regions. \n\nYou should also discuss the fact that Ward et al. (2024) report a very similar finding to your Figure 2 with their MIA. As a part of this, it would be interesting to see analogues of Figure 2 for the other MIAs used as baselines.\n\nPlease include separate results from each dataset in addition to the mean results across datasets. The datasets could have significant performance differences that the aggregation hides. I'm also not sure if the standard deviations of performance across different datasets are meaningful in any way.\n\nMinor points:\n- The paper should make the differences between DOMIAS and Gen-LRA clearer, since the methods are fairly similar.\n- It not clear what $\\mathbb{P}\\cup \\{x^*\\}$ precisely is, which makes the motivation leading to Equation 4 seem a bit handwavy.\n- Contribution 1: this sentence is a bit unclear, making it seem like the null and alternative hypotheses are the same.\n- Line 172: capitalise \"equation 4\".\n- Line 266: missing parenthesis.\n- Line 346: \"scale\" is ambiguous, I would suggest \"normalise\" if that is what you are doing.\n- Several references are missing the publication forum, for example Durkan et al. (2019), Ganev and De Cristofaro (2023)." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": { "value": "@inproceedings{\nanonymous2024genlra,\ntitle={Gen-{LRA}: Towards a Principled Membership Inference Attack for Generative Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=02DCEU6vSU},\nnote={under review}\n}" }, "abstract": { "value": "Evaluating the potential privacy leakage of synthetic data is an important but unresolved problem. Most existing adversarial auditing frameworks for synthetic data rely on heuristics and unreasonable assumptions to attack the failure modes of generative models, exhibiting limited capability to describe and detect the privacy exposure of training data. 
In this paper, we study designing Membership Inference Attacks (MIAs) that specifically exploit the observation that generative models tend to memorize certain data points in their training sets, leading to significant local overfitting. Here, we propose Generative Likelihood Ratio Attack (Gen-LRA), a novel, computationally efficient shadow-box MIA that, with no assumption of model knowledge or access, attacks the generated synthetic dataset by conducting a hypothesis test that it is locally overfit to potential training data. Assessed over a comprehensive benchmark spanning diverse datasets, model architectures, and attack parameters, we find that Gen-LRA consistently dominates other MIAs for generative models across multiple performance metrics. These results underscore Gen-LRA's effectiveness as an interpretable and robust privacy auditing tool, highlighting the significant privacy risks posed by generative model overfitting in real-world applications" }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Privacy", "Membership Inference Attacks", "Generative Models" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/bcad18f87958725e9b50970906e168913dcdf521.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/50c96fb68049a4bec3f129b7c7f85b812793218e.pdf" }, "title": { "value": "Gen-LRA: Towards a Principled Membership Inference Attack for Generative Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
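The Gen-LRA record above describes the attack only in prose: fit a density model on the reference data with and without the target record, and test whether the released synthetic data is better explained when the target is assumed to be in the training set, using PCA to cope with dimensionality and a neighbourhood of k synthetic points near the target. The snippet below is a minimal sketch of that density-ratio test, assuming scikit-learn's PCA and Gaussian KDE and numpy arrays of tabular features; the function name, bandwidth, number of components, and the exact way the k nearest synthetic points are chosen are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KernelDensity

def gen_lra_style_score(x_star, reference, synthetic, n_components=8, bandwidth=0.5, k=50):
    """Log-likelihood-ratio score for one candidate record x_star (hypothetical helper)."""
    # Reduce dimensionality first; KDE degrades quickly in high dimensions.
    pca = PCA(n_components=n_components).fit(reference)
    ref = pca.transform(reference)
    syn = pca.transform(synthetic)
    x = pca.transform(x_star.reshape(1, -1))

    # Null model: the synthetic data is explained by the reference distribution alone.
    kde_null = KernelDensity(bandwidth=bandwidth).fit(ref)
    # Alternative model: the reference distribution with the target record added.
    kde_alt = KernelDensity(bandwidth=bandwidth).fit(np.vstack([ref, x]))

    # Evaluate only the k synthetic points nearest the target, where local
    # overfitting to x_star would show up.
    nearest = syn[np.argsort(np.linalg.norm(syn - x, axis=1))[:k]]

    # Positive scores: the synthetic data is better explained when x_star is
    # assumed to have been part of the training set.
    return kde_alt.score(nearest) - kde_null.score(nearest)
```

Thresholding this score gives a membership decision per candidate record; sweeping the threshold over many member and non-member candidates produces the AUC-ROC and TPR-at-low-FPR numbers the reviews discuss.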
02Od16GFRW
Ensembles provably learn equivariance through data augmentation
main
Active
equivariance;invariance;ensemble models;data augmentation;SGD
unsupervised, self-supervised, semi-supervised, and supervised representation learning
3;6;6
4;3;3
3;3;3
2;3;2
3;3;2
5
3.333333
3
2.333333
2.666667
-1
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please see the weaknesses." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- The work show the emergence of equivariant in ensemble models\n- The work generalizes previous works where the proof relied on NTKs\n- Experiments with large ensemble of models show the emergence of equivariance" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "This paper shows that an ensemble of models when trained with data augmentation leads to emergence of equivariance properties naturally. The results generalize over past known results based on NTKs. The theory assumes some basic assumptions on the architecture and shows that, when the initialization of the weights in an architecture has some symmetry, then, the expected architecture of the ensemble is equivariant. Experimental results with various ensembles validates the results for the C4 group of symmetries." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I have several concerns over the usefulness of the theory and the experimental results.\n\nUsefulness of theory:\n- What is the use of the theory in model design or practical use cases? Since equivariant models seems to give perfect equivariance and data augmentation techniques give approximate equivariance. So, I am wondering what is the use of ensemble technique for symmetries, especially, given that we need over 1000 models to get good equivariant results.\n- What are the advantages of the proposed technique compared to existing symmetrization and canonicalization methods [1-4] that can convert non-equivariant models into equivariant ones using techniques somewhat similar to ensemble methods but with additional transformations that looks similar to augmentation.\n\nExperimental Results:\n- Although the experimental does show that the architecture with symmetric support does give invariant output, but even the asymmetric architecture seems to be giving invariant output, questioning the usefulness of the theory. It is also discussed in the paper about the symmetric states being attractors potentially, but, it still makes the current theory not very useful.\n- Experiments are only shown for C4 symmetries\n\n[1] Basu, Sourya, et al. \"Equi-tuning: Group equivariant fine-tuning of pretrained models.\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 37. No. 6. 2023.\n\n[2] Mondal, Arnab Kumar, et al. \"Equivariant adaptation of large pretrained models.\" Advances in Neural Information Processing Systems 36 (2023): 50293-50309.\n\n[3] Basu, Sourya, et al. \"Efficient equivariant transfer learning from pretrained models.\" Advances in Neural Information Processing Systems 36 (2024).\n\n[4] Kaba, Sékou-Oumar, et al. 
\"Equivariance with learned canonicalization functions.\" International Conference on Machine Learning. PMLR, 2023." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "The results in Table 1 aren't that clear to me. In the asymmetric case where you have a symmetric initialization, shouldn't you get results that are similar to the symmetric case? Yet there is a large gap" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "- It generalizes the results in Gerken & Kessel \n- The topic of invariance/equivariance is important so these results would be of interest to people in that community" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper expands the results of Gerken & Kessel that show that data augmentation produces equivariant ensembles of models using NTK, by looking at finite network sizes. They then show empirically that their theoretical results indeed hold in practice (up to sampling errors)." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "My main issue is with the writing: \n- The results presented in the main text are quite trivial, that if you start with an invariant distribution and use an invariant flow you end up with an invariant distribution. The more interesting results are in the appendix (appendix B and C)\n- You writing $\\mathcal{L} = A_\\mathcal{L} + T\\mathcal{L}$ with $T\\mathcal{L}$ the tangent space is very confusing, as tangent space is defined for a manifold and we are talking about a linear space. It needlessly complicates things as there is no need to involve differential geometry when we are working on linear spaces." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "1. Why does the OSP not increase at initialization when ensemble size increases?\n1. From the figures, it seems like the results could improve with more epochs (also for baselines). Could you please provide results with a larger number of epochs?" 
}, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. The paper is well-structured and easy to follow.\n1. The paper extends previous results to more reasonable and applicable settings. This is a significant extension." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents a theoretical analysis showing that data augmentation can lead to equivariance in deep ensembles. The paper's main result is that under several assumptions (e.g. on initialization, architecture, etc.), deep ensembles trained with data augmentation are equivariant in mean, even when individual models are generally not. A similar result was previously presented, but the paper extends these previous results, which were primarily focused on infinitely wide NNs trained with gradient descent under full augmentation, to ensembles of finite-width trained with SGD and random augmentation.\nThe paper is mainly theoretical and validates the theoretical results through limited and small-scale empirical experiments." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "I like the paper and believe it has a sufficient contribution and interesting results. However, there are several limitations stated below:\n\n1. While the assumptions for the theoretical analysis are more applicable compared to previous works, they still hold only for infinite-size ensembles. Any analysis (including empirical) on the error bounds for finite ensembles would be beneficial.\n1. While the results are important, the novelty is somewhat moderate in the sense that the emergent equivariance property of ensembles was previously proposed and the fact that the theoretical analysis heavily relies on previous works [1].\n1. From the empirical evidence, it is unclear if some of the assumptions (like symmetric initialization) are indeed necessary. The authors discuss this, but I believe it can be extended further.\n1. Empirical evaluation is limited. It would be beneficial to extend it to more settings, even by small modifications like considering cyclic groups C_k of different orders (k), different architectures, model sizes, etc.\n1. It would be beneficial to see the impact of ensemble size on the metrics in Table 1, like adding a line plot for ensemble size vs. OSP. The authors show results for different sizes, but summarizing them in one clear view would make it easier to follow.\n1. The paper could benefit from a clearer and more explicit discussion of the limitations of the results.\n1. Minor:\n - Line 37: “... a definitive question to the question…”.\n\nReference\n\n[1] Flinth & Ohlsson, Optimization Dynamics of Equivariant and Augmented Neural Networks, 2023." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We prove that ensemble models learn equivariance through data augmentation." 
}, "_bibtex": { "value": "@inproceedings{\nanonymous2024ensembles,\ntitle={Ensembles provably learn equivariance through data augmentation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=02Od16GFRW},\nnote={under review}\n}" }, "abstract": { "value": "Recently, it was proved that group equivariance emerges in ensembles of neural networks as the result of full augmentation in the limit of infinitely wide neural networks (neural tangent kernel limit). In this paper, we extend this result significantly. We provide a proof that this emergence does not depend on the neural tangent kernel limit at all. We also consider stochastic settings, and furthermore general architectures. For the latter, we provide a simple sufficient condition on the relation between the architecture and the action of the group for our results to hold. We validate our findings through simple numeric experiments." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "equivariance", "invariance", "ensemble models", "data augmentation", "SGD" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/d6a8bd193bcc928733dcbba2b6319d8fcb54d671.pdf" }, "presentation": null, "primary_area": { "value": "unsupervised, self-supervised, semi-supervised, and supervised representation learning" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/b8e3c4c2c81cde74095c52c2c359a5d2af6cf52f.zip" }, "title": { "value": "Ensembles provably learn equivariance through data augmentation" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
02haSpO453
VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation
main
Active
Unified Visual Language Model;Autoregressive Model
foundation or frontier models, including LLMs
3;5;5;6
4;5;3;4
3;2;2;4
2;2;3;3
3;2;3;3
4.75
4
2.75
2.5
2.75
0
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "- Please share missing details as mentioned in the weaknesses\n- What are the number of image and video tokens going into the LLM? How many tokens are processed by the RQ-transformer and what is its size (the RQ-VAE paper has multiple different settings)?\n- It would be interesting to see if the vision tower training results hold for a general VAE setup instead of an RQ-VAE since that would make the results even more broadly applicable" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "- The paper's most interesting contribution is the unified vision tower exploration to unify generation and understanding and the appropriate ways to train such an encoder\n- The approach is quite straightforward and the application of RQ-VAE allows for token efficiency while preserving more information\n- VILA-U is close to SOTA on visual understanding tasks (image and video) with comparable models\n- The model also fares well on image generation tasks and comes close to diffusion models" }, "student_author": null, "submission_guidelines": null, "summary": { "value": "- The paper presents VILA-U, a unified model for language, image and video understanding + generation\n- The model is trained with an autoregressive next token prediction loss for all tasks\n- The paper explores vision encoder choices to ensure understanding and generation performance" }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "- The method chooses RQ-VAE for efficiency, but there isn't a discussion / results around this. How would the results look if the vision tower didn't use RQ-VAE? How important is the RQ-VAE?\n- The generated images are relatively low-resolution (256 or 384px), especially since the RQ-VAE allows for increased efficiency in tokens\n- The paper doesn't really discuss video implementation details. Video understanding and generation have a mismatch in FPS / durations they usually support, what does VILA-U support? There isn't a discussion around this.\n- The paper claims to support video generation, but there are no quantitative results around this. The two qualitative examples are also very simplistic in Figure 7." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." 
] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "Please refer to the weaknesses section." }, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. VILA-U introduces a unified framework that handles both visual understanding and generation in a single autoregressive next-token prediction model. \n\n2. The model leverages a unified vision tower that uses contrastive learning to align discrete visual tokens with textual inputs, which enhances the model's visual perception and text-visual alignment capabilities.\n\n3. The experiments indicate the state-of-the-art performance of VILA-U in both image generation and understanding." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper presents VILA-U, a unified foundation model for visual understanding and generation that integrates image and language processing into a single autoregressive next-token prediction framework. Unlike traditional visual language models that rely on separate modules or diffusion models for generation, VILA-U employs a unified vision tower to discretize visual inputs, aligning them with textual tokens through contrastive learning. From the experiments, the authors show that VILA-U can achieve state-of-the-art performance in both image generation and comprehension." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. Missing the clarification between VILA-U and other tokenization-based multimodal models, like AnyGPT [1] and SEED-LLaMa [2]. Those models also used visual tokenizers to discrete the images and trained with causal language modeling loss. I noticed the authors cite the SEED-LLaMa in the line 102, but the claim of “In this work, we design our framework based on the autoregressive next-token prediction method for visual generation and make our VLM learn to generate visual content effectively.” does not the main difference between VILA-U and SEED-LLaMa.\n\n2. One of the claimed contributions of this paper is about proposing the training strategy for the unified foundation vision tower. However, the training strategy seems similar to SEED [3], which also used contrastive loss between image embeddings and text embeddings. Can authors clarify the difference between the unified foundation vision tower and SEED?\n\n3. Comparisons with other tokenization-based multimodal models [1,2] and Emu2 [4] are missing.\n\n4. The limitation section, which is required, is missing.\n\n[1] Zhan, Jun, et al. \"Anygpt: Unified multimodal llm with discrete sequence modeling.\" arXiv preprint arXiv:2402.12226 (2024).\n\n[2] Ge, Yuying, et al. \"Making llama see and draw with seed tokenizer.\" arXiv preprint arXiv:2310.01218 (2023).\n\n[3] Ge, Yuying, et al. \"Planting a seed of vision in large language model.\" arXiv preprint arXiv:2307.08041 (2023).\n\n[4] Sun, Quan, et al. \"Generative multimodal models are in-context learners.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 3 }, "primary_area": null, "questions": { "value": "My biggest suggestion/question is related to the number 1 weakness described above. If the author could highlight the main contribution of the work that would make its positioning much easier. One positioning that was left out in the weakness section above is to position the work as the \"first\" in some regards. However, while autoregressive modeling of text + language is a burgeoning field, VILA-U is not the first model that performs autoregressive modeling of multiple modalities." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The unification of multiple modalities in the same architecture (with the same training objective) is a very important topic. The paper is a valuable contribution to this overall research program. In the current work, the choice of quantized image tokens for image representation makes the autoregressive modeling task more natural as the image modality is tokenized into discrete tokens much like language. This helps minimizes the amount of code development required for adapting existing LLM code bases to their multimodal counterparts.\n2. The paper performed fairly complete evaluations (image-text, video-text, text-image, ) and ablation studies that include model backbone and training objective." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper, VILA-U presents a unified framework of autoregressive multimodal generation and understanding. It achieves this by first training a vision encoder (discretized via RQ codebook) for text-conditioned image tokens (initialized from CLIP) and then training image+text data using autoregressive modeling. It presents a complete training recipe for creating autoregressive multimodal models, and the resulting model is benchmarked against a wide range of existing models across tasks (generation and understanding)" }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. It is not clear to me how to position the work in its novelty or effectiveness and this may be addressable with some rewriting. I see 3 potential angles\n 1. Training effectiveness by leveraging pretrained networks. The authors motivates the work by emphasizing that existing methods that attempt to unify multimodal generation and understanding either require significant architectural modifications to their uni-modal counterparts, or training from scratch. However, this comparison seems not to play a central role in the subsequent discussions. If the effectiveness of the proposed method is reflected in ease of training, then readers would expect to see comparison of training time/compute for comparable performances. \n 2. 
Effective token representation of image modality as discrete tokens: VILA-U differs from prior work in its adoption of RQ-VAE embedding for images. However, if this is the main innovation, the choice of RQ, its superiority over alternative methods, and the importance of discontinuous embedding of images (as compared to, for example, continuous embedding as in LaViT) will need to be elevated.\n 3. State-of-the-art performance: If the main contribution is instead just the sheer effectiveness of the method, then it should demonstrate this quantitatively in the paper. Unfortunately, the comparison tables don’t seem to suggest that the VILA-U model is the state-of-the-art in most benchmarks. Perhaps it achieves a Pareto frontier between understanding and generation tasks? Or outperforms other models for the same training compute/time? Either way, I’m not clear what the main advantage of the current work is over others. \n2. The discussion around the training recipe is very important and useful for practitioners. However, it lacks both quantitative and qualitative (with examples) comparisons of the different training recipes. The conclusion seems to be to use an aligned CLIP model for image encoder initialization, which doesn’t seem to be a novel finding. I would recommend either supporting the discussion with more evaluation (quantitative or qualitative, ideally both) or moving the discussion to the appendix.\n3. The paper suffers from unsubstantiated claims (neither references nor experimental support). I've highlighted a few statements that are very important for the message in the paper below:\n - \"replacing continuous tokens with VQ tokens in VLMs usually results in a severe performance drop\"\n - \"A straightforward combination of contrastive and reconstruction loss cannot converge\"\n - \"both the rFID and Top-1 accuracy of the vision tower only serves as a medium indicator instead of directly linearly correlated to the final performance of our whole multi-modal framework.\"" }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 5 }, "contribution": { "value": 2 }, "desk_reject_comments": null, "details_of_ethics_concerns": { "value": "All datasets used are public, no ethics review needed." }, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "1. The solid experimental results of VILA-U have largely reignited my confidence in the autoregressive image-text unified modeling direction. However, why is there no comparison with other text-image unified modeling models such as \textbf{MM-Interleaved, SEED, and DEEM} on image understanding tasks? Ignoring the contributions of pioneers is not advisable.\n\n2. The video generation experiments are insufficient. Why not compare with methods like \textbf{OpenSora} and \textbf{CogVideoX} on \textbf{VBench}?\n\n3. The article is unclear in its expression; are the visual token features directly discretized by the visual encoder, or are they encoded by a large language model? I suspect it is the former.\n\n4. VILA-U claims to have lower computational complexity and to avoid misalignment. 
While I recognize the importance of addressing misalignment, the claim of lower complexity requires experimental support. Specifically, compared to unified autoregressive image-text modeling models, using separate models like fine-tuning Stable Diffusion can also construct end-to-end autoregressive image-text modeling, which is more efficient in training and performs better. Moreover, utilizing existing mature acceleration schemes offers fast speeds. VILA-U should emphasize more on data cleansing quality and misalignment.\n\n5. Lastly, and most critically, I hypothesize that the structural improvements of the model provide minimal benefits compared to previous autoregressive unified models, with the majority of improvements stemming from the engineered data cleansing. For instance, MMC4-Core contains 22.4M data while MMC4 has 375M, yet some research indicates that training with these two datasets yields similar outcomes. Large-scale datasets like MMC4 are of very low quality. However, using just 6M of data to achieve excellent results suggests that your data is meticulously filtered, yet the paper lacks any detail on the core contributions of data construction. Conducting experiments on the same data with other model structures like \\textbf{DreamLLM} is necessary to demonstrate the efficiency of \\textbf{VILA-U}. \n\nI will improve my rating score if my concerns are addressed." }, "rating": { "value": 5 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 2 }, "strengths": { "value": "1. The idea of VILA-U is very straightforward, and the experiments are solid. It significantly enhances the capabilities of end-to-end autoregressive multimodal models in visual-language tasks, bridging the gap between autoregressive multimodal models and the LLAVA series, while also excelling in image generation.\n\n2. The structure of the VILA-U paper is simple and easy to read, and the model implementation is very easy to follow." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Summary:\n\nVILA-U is a foundation model that unifies video, image, and language understanding and generation. Unlike traditional models that use separate components for different tasks, VILA-U simplifies this by employing a single autoregressive framework. This reduces misalignment and maintains near state-of-the-art performance in both understanding and generating visual language content. Key factors for its success include a unified vision tower that aligns visual and textual inputs, enhancing perception, and the ability to achieve high-quality image generation similar to diffusion models.\n\nContributions:\n\n1. VILA-U strives for an end-to-end autoregressive model that handles both visual and textual inputs through a unified next-token prediction approach. This approach eliminates the need for external components like diffusion models, simplifying the infrastructure.\n2. VILA-U is tested across a range of tasks, including image-language and video-language understanding, as well as image and video generation. It demonstrates notable improvements, particularly in narrowing the gap between autoregressive and continuous-token models in visual understanding, while also offering robust visual generation capabilities." 
}, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1.Regarding the issue of missing in context learning assessments, VILA-U has undergone extensive training on image-text sequences and can accept any interleaved layouts of images and text. Therefore, it should possess excellent contextual learning abilities. This work could be enhanced by conducting tests on its ICT capabilities.\n\n2.The description of the data curation process is not sufficiently clear, making it uncertain whether the data was meticulously selected or randomly chosen. If it is the former, I suspect that most of the improvements stem from high-quality data engineering rather than advancements in model architecture." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "VILA-U is a Unified foundation model that integrates Video, Image, Language understanding and generation. It employs a single autoregressive next-token prediction framework for both tasks." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024vilau,\ntitle={{VILA}-U: a Unified Foundation Model Integrating Visual Understanding and Generation},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=02haSpO453},\nnote={under review}\n}" }, "abstract": { "value": "VILA-U is a Unified foundation model that integrates Video, Image, Language understanding and generation. Traditional visual language models (VLMs) use separate modules for understanding and generating visual content, which can lead to misalignment and increased complexity. In contrast, VILA-U employs a single autoregressive next-token prediction framework for both tasks, eliminating the need for additional components like diffusion models. This approach not only simplifies the model but also achieves near state-of-the-art performance in visual language understanding and generation. The success of VILA-U is attributed to two main factors: the unified vision tower that aligns discrete visual tokens with textual inputs during pretraining, which enhances visual perception, and autoregressive image generation can achieve similar quality as diffusion models with high-quality dataset. This allows VILA-U to perform comparably to more complex models using a fully token-based autoregressive framework." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "Unified Visual Language Model", "Autoregressive Model" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." 
}, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/50e98a8144a1cddb2de5c13e4af3f3a5a157d4f3.pdf" }, "presentation": null, "primary_area": { "value": "foundation or frontier models, including LLMs" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": null, "title": { "value": "VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
02kZwCo0C3
SAIL: Self-improving Efficient Online Alignment of Large Language Models
main
Active
RLHF;Alignment;Online Alignment;Self-Play
alignment, fairness, safety, privacy, and societal considerations
3;6;6;8
4;3;4;4
3;3;3;4
3;3;4;3
2;4;2;4
5.75
3.75
3.25
3.25
3
-0.080845
[ { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 4 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "* Reward margin and offline-reward evaluation is interesting by itself and could provide information of the effectiveness of the method, but I personally think is not as an important measurement as pairwise winrate. Could you elaborate on Section 6.1 why one should consider looking into it?\n\n* Please check the questions in weaknesses as well!" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "* The authors test of two LLM-as-a-Judge benchmarks as well as on a well-established classification benchmark, and their results are consistent.\n* The authors provide a theoretical explanation of why their method works effectively.\n* Showing all possible combinations at Figure 2 helped understanding what kind of online RLHF methods one should consider\n* The results are consistent across smaller models (0.5B) up to widely used scale models (8B)." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "Compared to offline RLHF methods, online RLHF methods empirically show stronger performance, yet is computationally expensive, vulnerable to distribution shifts and lacks a unified framework. The authors ablate different online RLHF methods based on all possible combinations (namely, SAIL-PR, SAIL-PP, SAIL-DP) which could be useful for future work exploring online RLHF methods. Personally, it was surprising that SAIL-PP generally works on par or slightly better than SAIL-PR, which open up further research questions on what would be the optimal way to obtain preference dataset." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "* As a practitioner, at least the presentation/writing wasn't clear enough to agree that SAIL provides a unified framework for those who might want to consider using online RLHF in future works. I would personally suggest adding a section explains about how one could use SAIL instead of iterative DPO methods, as well as a huge emphasis on how the provided code could be used.\n* There is a huge emphasis on trying to improve reward models (on RewardBench) to mitigated reward model overoptimization & train better LMs. I am curious if given a fixed budget/time limit, whether one should try to employ online RLHF methods or try to enhance reward models in general.\n* I would suggest adding an explanation of what is the limitation of online RLHF methods that the paper could not address. For example, it is still unclear on what is the best practice to \"whether to discard instances from a preference dataset that have a subtle difference on the preference strength\" or \"would it be beneficial to employ more models when gathering responses when consisting a preference dataset\"." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 3 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "There is a large amount of blank space below Section 6.1. Is there any missing content in this part of the paper?" }, "rating": { "value": 6 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "1. Introducing Bi-level Preference Optimization: The process of bi-level preference optimization is integrated into the modeling of online RLHF. By leveraging the unique correspondence between the reward function and the LLM policy, this approach innovatively transforms the process into an equivalent single-layer form that is easier to solve.\n\n2. Extensive Experiments on SAIL: Comprehensive and rich experiments were conducted to address the three significant challenges in online RLHF and to demonstrate the relevant applications of SAIL." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The authors identify three significant challenges in online RLHF algorithms: Challenge 1: the interdependence between models and data in implicit reward learning; Challenge 2: the computational complexity of bi-level optimization; and Challenge 3: the reliance on preference oracles. They propose SAIL to address these challenges. \n\nThe main contributions of the paper can be summarized as follows:\n\n1. **Unified LLM Alignment Mathematical Framework**: The authors have designed a principled online RLHF framework that provides concrete guidance for generating new responses, assuming the existence of a preference oracle.\n\n2. **Adaptive Direct Preference Optimization**: By introducing a DPO-style analysis, the authors present an efficient single-layer solution capable of effectively addressing distribution shifts and providing a scalable online preference optimization method.\n\n3. **Introduction of a Self-Improvement Mechanism**: This mechanism reduces the reliance on preference oracles.\n\n4. **Extensive Experimental Evaluation**: The experiments conducted demonstrate that SAIL significantly outperforms baseline methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "Regarding the three variants of the SAIL method, Table 3 shows that in the Eval-Reward and MT-bench columns, the SAIL method performs worse than the baseline DPO. Please clarify whether these experimental results undermine the assertion that the SAIL method is superior to the baseline DPO." 
}, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 4 }, "primary_area": null, "questions": { "value": "1. The paper demonstrates SAIL's efficiency with models up to 8B parameters. Could you share any considerations or expected challenges for scaling SAIL to significantly larger models, such as those with over 100B parameters?\n\n2. SAIL currently relies on the Bradley-Terry preference model. Have you considered experimenting with other preference models, and do you anticipate any impact on alignment performance if different utility functions are used?\n\n3. SAIL-DP seems to show some overfitting on in-distribution responses. Could you discuss any regularization techniques you considered or plans to mitigate this, particularly to enhance generalization to out-of-distribution data?\n\n4. Given the dependence on an initial offline dataset, how does SAIL perform in situations with minimal or noisy initial data? Are there strategies within the current framework to mitigate issues arising from a limited initial dataset?\n\n5. Could you provide more detail on the computational costs of SAIL, particularly in comparison with other RLHF approaches? How does the single-level optimization approach compare in terms of resource requirements, and what practical considerations should be kept in mind when implementing it?" }, "rating": { "value": 8 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 4 }, "strengths": { "value": "1. **Innovative Formulation**: The paper provides a novel formulation of online RLHF through bilevel optimization, enhancing computational efficiency by reducing this problem to a single-level optimization, which is a significant advancement for practical LLM training.\n2. **Effective Self-improvement Mechanism**: SAIL effectively addresses challenges related to reliance on preference oracles, making online alignment more feasible by leveraging the model's self-generated responses for iterative improvement.\n3. **Comprehensive Evaluation**: The paper includes extensive experiments that demonstrate substantial improvements in evaluation reward, win rate, and efficiency over other methods like DPO, supporting SAIL's efficacy and computational advantage.\n4. **Scalability and Adaptability**: SAIL’s approach to handling distribution shifts and reducing oracle reliance presents a promising method for more scalable RLHF applications, especially for emerging large-scale LLMs.\n5. **Detailed Experiment Design and Baselines**: The experiment section is well-structured, covering a range of metrics (reward-margin, eval-reward, win rate) and configurations (SAIL-PR, SAIL-PP, SAIL-DP), providing insights into the trade-offs and performance across different setups." 
}, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper introduces SAIL (Self-improving Efficient Online Alignment), an approach for online reinforcement learning from human feedback (RLHF) that aims to align large language models (LLMs) with human preferences. SAIL addresses limitations in offline RLHF methods by framing online LLM alignment as a bilevel optimization problem, which it reduces to a single-level first-order optimization method to enhance computational efficiency. The approach allows for continuous model improvement by generating samples iteratively, regulating preferences, and exploring online feedback. SAIL's self-improvement mechanism enables it to reduce reliance on preference oracles, thus allowing for more scalable alignment. Empirical evaluations demonstrate significant performance improvements over standard RLHF baselines." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "1. **Limited Exploration of Alternative Utility Functions**: The method relies on the Bradley-Terry preference model, which may not be optimal for all RLHF applications. Future work could benefit from exploring alternative utility models that account for more nuanced preference data.\n2. **Scalability Concerns for Larger Models**: Although the paper demonstrates SAIL’s effectiveness on LLMs with up to 8B parameters, additional scaling experiments would strengthen the paper's claims about computational efficiency for significantly larger models.\n3. **Dependency on Initial Offline Dataset**: While SAIL reduces oracle dependency, it still relies on an initial offline dataset to bootstrap alignment. Further discussion on managing this dependency, especially when starting with limited labeled data, could be beneficial.\n4. **Potential Overfitting in SAIL-DP**: The paper mentions that SAIL-DP shows signs of overfitting on in-distribution responses, suggesting that the method may benefit from more refined regularization techniques to ensure robust generalization to out-of-distribution samples." }, "withdrawal_confirmation": null }, { "TLDR": null, "_bibtex": null, "abstract": null, "anonymous_url": null, "authorids": null, "authors": null, "code_of_conduct": { "value": "Yes" }, "code_of_ethics": null, "comment": null, "confidence": { "value": 4 }, "contribution": { "value": 3 }, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": { "value": [ "No ethics review needed." ] }, "keywords": null, "large_language_models": null, "no_acknowledgement_section": null, "other_comments_on_LLMs": null, "paperhash": null, "pdf": null, "presentation": { "value": 2 }, "primary_area": null, "questions": { "value": "See the weakness section." 
}, "rating": { "value": 3 }, "reciprocal_reviewing": null, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": { "value": 3 }, "strengths": { "value": "(1) The paper introduces a novel unified framework for online RLHF that effectively addresses the challenges of static datasets and distribution shifts.\n(2) By reducing a bilevel optimization problem to a single-level method, SAIL maintains theoretical benefits while significantly lowering computational costs, making it more practical for real-world applications.\n(3) The self-improving aspect of SAIL allows models to iteratively enhance alignment without extensive supervision, addressing the challenge of needing constant access to human preference data.\n(4) Extensive experiments validate the effectiveness of SAIL, showing substantial improvements in performance metrics compared to existing methods, thus showcasing its applicability across various datasets.\n\nI would consider rescoring if the authors can solve my concern." }, "student_author": null, "submission_guidelines": null, "summary": { "value": "The paper addresses the limitations of traditional reinforcement learning from human feedback (RLHF) methods for aligning large language models (LLMs) with human preferences. The authors propose a unified framework for online RLHF formulated as a bilevel optimization problem, which they simplify to a single-level method for efficiency. This approach, called SAIL, allows for continuous model improvement through online exploration and iterative refinement of preference labels, mitigating issues related to distribution shifts and reducing reliance on static preference oracles. Experimental results demonstrate significant performance gains, with SAIL outperforming state-of-the-art RLHF methods." }, "supplementary_material": null, "title": null, "venue": null, "venueid": null, "weaknesses": { "value": "(1) The method does not improve much in the AlpacaEval 2.0 Score. The author should give a detailed explanation. And why not use metrics like length-controlled win rate?\n(2) Authors should compare more advanced preference optimization algorithms like ORPO and SimPO. And current results are not impressive for the alignment community.\n(3) Why did the author just include MMLU as the downstream task metric? They should incorporate more tasks (eg., arc-challenge) like the similar self-improvement work SPIN (ICML24) to better illustrate their contribution.\n(4) In the alignment area, it's better to conduct experiments in the Arena-Hard benchmark since it's a common metric to evaluate the alignment ability." }, "withdrawal_confirmation": null }, { "TLDR": { "value": "We introduce SAIL, an efficient online RLHF approach that addresses distribution shift and reduces reliance on preference oracles for improved LLM alignment." }, "_bibtex": { "value": "@inproceedings{\nanonymous2024sail,\ntitle={{SAIL}: Self-improving Efficient Online Alignment of Large Language Models},\nauthor={Anonymous},\nbooktitle={Submitted to The Thirteenth International Conference on Learning Representations},\nyear={2024},\nurl={https://openreview.net/forum?id=02kZwCo0C3},\nnote={under review}\n}" }, "abstract": { "value": "Reinforcement Learning from Human Feedback (RLHF) is a critical method for aligning large language models (LLMs) with human preferences. 
However, existing offline alignment approaches, such as DPO, IPO, and SLiC, rely heavily on static datasets of human preferences, often leading to suboptimal performance. Recent efforts in the literature have moved towards online RLHF methods, but they lack a unified framework and suffer from distribution shift issues. In this work, we formalize online LLM alignment as a bilevel optimization problem. By reducing this formulation to a more computationally efficient single-level first-order method, utilizing reward-policy equivalence, we propose SAIL (Self-improving Efficient Online Alignment). SAIL generates new samples and iteratively refines model alignment through online exploration and regulation of preference labels. This enables continuous, self-improving alignment and generalizes prior online RLHF methods as special cases. Compared to state-of-the-art RLHF methods, SAIL delivers significant performance gains, with up to 11.6\\% improvement in win rate and a 3.6-point increase in evaluation rewards, while maintaining low computational overhead." }, "anonymous_url": { "value": "I certify that there is no URL (e.g., github page) that could be used to find authors’ identity." }, "authorids": null, "authors": null, "code_of_conduct": null, "code_of_ethics": { "value": "I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics." }, "comment": null, "confidence": null, "contribution": null, "desk_reject_comments": null, "details_of_ethics_concerns": null, "flag_for_ethics_review": null, "keywords": { "value": [ "RLHF", "Alignment", "Online Alignment", "Self-Play" ] }, "large_language_models": null, "no_acknowledgement_section": { "value": "I certify that there is no acknowledgement section in this submission for double blind review." }, "other_comments_on_LLMs": null, "paperhash": null, "pdf": { "value": "/pdf/d5a230b3d82181e94b8d74fb961b8cc3abd38e94.pdf" }, "presentation": null, "primary_area": { "value": "alignment, fairness, safety, privacy, and societal considerations" }, "questions": null, "rating": null, "reciprocal_reviewing": { "value": "I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6." }, "resubmission": null, "revert_desk_rejection_confirmation": null, "revert_withdrawal_confirmation": null, "soundness": null, "strengths": null, "student_author": null, "submission_guidelines": { "value": "I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide." }, "summary": null, "supplementary_material": { "value": "/attachment/0eb1dbe2bdc367b1d3e0552efcb28e056a4766bb.zip" }, "title": { "value": "SAIL: Self-improving Efficient Online Alignment of Large Language Models" }, "venue": { "value": "ICLR 2025 Conference Submission" }, "venueid": { "value": "ICLR.cc/2025/Conference/Submission" }, "weaknesses": null, "withdrawal_confirmation": null } ]
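The reviews and abstract in the record above repeatedly refer to two standard pieces of RLHF background: the Bradley-Terry preference model and the reward-policy equivalence that underlies DPO-style losses. For orientation, a generic sketch of both is given below. This is textbook material under the usual KL-regularized RLHF assumptions, not SAIL's own bilevel formulation or its single-level reduction, neither of which is spelled out in this record.

```latex
% Bradley-Terry model of the preference oracle over a preferred/rejected pair (y_w, y_l):
P(y_w \succ y_l \mid x) = \sigma\big(r(x, y_w) - r(x, y_l)\big), \qquad \sigma(z) = \frac{1}{1 + e^{-z}}

% Reward-policy equivalence: the KL-regularized objective
%   \max_{\pi} \; \mathbb{E}_{y \sim \pi(\cdot \mid x)}[r(x, y)]
%     - \beta\, \mathrm{KL}\big(\pi(\cdot \mid x) \,\|\, \pi_{\mathrm{ref}}(\cdot \mid x)\big)
% has a closed-form optimum \pi_r, so the reward can be written in terms of that policy:
r(x, y) = \beta \log \frac{\pi_r(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)} + \beta \log Z(x)

% Substituting into the Bradley-Terry model cancels the partition function Z(x),
% which yields the (offline) DPO loss over a preference dataset:
\mathcal{L}_{\mathrm{DPO}}(\theta) = -\,\mathbb{E}_{(x,\, y_w,\, y_l)}\!\left[ \log \sigma\!\left(
  \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
  - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)} \right) \right]
```

The online variants the reviewers discuss (SAIL-PR, SAIL-PP, SAIL-DP) appear to differ in where the response pairs and their preference labels come from at each iteration, which is exactly the design space the reviews are probing.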
id: 03EkqSCKuO
title: Port-Hamiltonian Architectural Bias for Long-Range Propagation in Deep Graph Networks
track: main
status: Active
keywords: graph representation learning;long-range propagation;ordinary differential equations
primary_area: learning on graphs and other geometries & topologies
rating: 5;6;8
confidence: 2;3;3
soundness: 2;4;4
contribution: 2;2;3
presentation: 3;3;3
rating_avg: 6.333333
confidence_avg: 2.666667
soundness_avg: 3.333333
contribution_avg: 2.333333
presentation_avg: 3
corr_rating_confidence: 0.755929
[{"TLDR":null,"_bibtex":null,"abstract":null,"anonymous_url":null,"authorids":null,"authors":null,"c(...TRUNCATED)
id: 03OkC0LKDD
title: The Vital Role of Gradient Clipping in Byzantine-Resilient Distributed Learning
track: main
status: Active
keywords: Byzantine resilience;distributed machine learning
primary_area: optimization
rating: 3;5;6;6
confidence: 4;3;5;3
soundness: 1;2;2;3
contribution: 2;3;4;3
presentation: 3;3;3;3
rating_avg: 5
confidence_avg: 3.75
soundness_avg: 2
contribution_avg: 3
presentation_avg: 3
corr_rating_confidence: 0
[{"TLDR":null,"_bibtex":null,"abstract":null,"anonymous_url":null,"authorids":null,"authors":null,"c(...TRUNCATED)