Dataset schema (column name, dtype, and value statistics as reported by the dataset viewer):

| Column | Dtype | Min | Max |
|---|---|---|---|
| bibtex_url | null | – | – |
| proceedings | stringlengths | 42 | 42 |
| bibtext | stringlengths | 197 | 848 |
| abstract | stringlengths | 303 | 3.45k |
| title | stringlengths | 10 | 159 |
| authors | sequencelengths | 1 | 34 |
| id | stringclasses (44 values) | – | – |
| arxiv_id | stringlengths | 0 | 10 |
| GitHub | sequencelengths | 1 | 1 |
| paper_page | stringclasses (899 values) | – | – |
| n_linked_authors | int64 | -1 | 13 |
| upvotes | int64 | -1 | 109 |
| num_comments | int64 | -1 | 13 |
| n_authors | int64 | -1 | 92 |
| Models | sequencelengths | 0 | 100 |
| Datasets | sequencelengths | 0 | 19 |
| Spaces | sequencelengths | 0 | 100 |
| old_Models | sequencelengths | 0 | 100 |
| old_Datasets | sequencelengths | 0 | 19 |
| old_Spaces | sequencelengths | 0 | 100 |
| paper_page_exists_pre_conf | int64 | 0 | 1 |
| type | stringclasses (2 values) | – | – |
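For reference, a minimal sketch of loading and filtering rows with this schema via the `datasets` library; the dataset ID below is a placeholder, not the actual repository name.

```python
from datasets import load_dataset

# Placeholder dataset ID -- substitute the real repository name.
ds = load_dataset("some-user/neurips-2024-papers", split="train")

# Keep rows whose Hugging Face paper page existed before the conference.
linked = ds.filter(lambda row: row["paper_page_exists_pre_conf"] == 1)
print(f"{len(linked)} of {len(ds)} papers had a pre-conference paper page")
```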
null
https://openreview.net/forum?id=recsheQ7e8
@inproceedings{ lee2024aligning, title={Aligning to Thousands of Preferences via System Message Generalization}, author={Seongyun Lee and Sue Hyun Park and Seungone Kim and Minjoon Seo}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=recsheQ7e8} }
Although humans inherently have diverse values, current large language model (LLM) alignment methods often assume that aligning LLMs with the general public’s preferences is optimal. A major challenge in adopting a more individualized approach to LLM alignment is its lack of scalability, as it involves repeatedly acquiring preference data and training new reward models and LLMs for each individual’s preferences. To address these challenges, we propose a new paradigm where users specify what they value most within the system message, steering the LLM’s generation behavior to better align with the user’s intentions. However, a naive application of such an approach is non-trivial since LLMs are typically trained on a uniform system message (e.g., “You are a helpful assistant”), which limits their ability to generalize to diverse, unseen system messages. To improve this generalization, we create Multifaceted Collection, augmenting 66k user instructions into 197k system messages through hierarchical user value combinations. Using this dataset, we train a 7B LLM called Janus and test it on 921 prompts from 5 benchmarks (AlpacaEval 2.0, FLASK, Koala, MT-Bench, and Self-Instruct) by adding system messages that reflect unseen user values. Janus achieves tie+win rates of 75.2%, 72.4%, and 66.4% against Mistral 7B Instruct v0.2, GPT-3.5 Turbo, and GPT-4, respectively. Unexpectedly, on three benchmarks focused on response helpfulness (AlpacaEval 2.0, MT-Bench, Arena Hard Auto v0.1), Janus also outperforms LLaMA 3 8B Instruct by margins of +4.0%p, +0.1%p, and +3.0%p, underscoring that training with a vast array of system messages could enhance alignment with the general public’s preferences as well. Our code, dataset, benchmark, and models are available at https://lklab.kaist.ac.kr/Janus/.
Aligning to Thousands of Preferences via System Message Generalization
[ "Seongyun Lee", "Sue Hyun Park", "Seungone Kim", "Minjoon Seo" ]
NeurIPS.cc/2024/Conference
2405.17977
[ "https://github.com/kaistAI/Janus" ]
https://huggingface.co/papers/2405.17977
3
6
0
4
[ "kaist-ai/janus-7b", "kaist-ai/janus-rm-7b", "kaist-ai/janus-orpo-7b", "kaist-ai/janus-dpo-7b", "RichardErkhov/kaist-ai_-_janus-7b-gguf", "RichardErkhov/kaist-ai_-_janus-orpo-7b-gguf", "RichardErkhov/kaist-ai_-_janus-dpo-7b-gguf" ]
[ "kaist-ai/Multifaceted-Collection", "kaist-ai/Multifaceted-Collection-ORPO", "kaist-ai/Multifaceted-Collection-DPO", "kaist-ai/Multifaceted-Collection-small", "kaist-ai/Multifaceted-Collection-RM", "kaist-ai/Multifaceted-Bench" ]
[]
[ "kaist-ai/janus-7b", "kaist-ai/janus-rm-7b", "kaist-ai/janus-orpo-7b", "kaist-ai/janus-dpo-7b", "RichardErkhov/kaist-ai_-_janus-7b-gguf", "RichardErkhov/kaist-ai_-_janus-orpo-7b-gguf", "RichardErkhov/kaist-ai_-_janus-dpo-7b-gguf" ]
[ "kaist-ai/Multifaceted-Collection", "kaist-ai/Multifaceted-Collection-ORPO", "kaist-ai/Multifaceted-Collection-DPO", "kaist-ai/Multifaceted-Collection-small", "kaist-ai/Multifaceted-Collection-RM", "kaist-ai/Multifaceted-Bench" ]
[]
1
poster
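The system-message steering described above can be tried directly with the released checkpoint. A minimal sketch using `transformers`, assuming the model ships a chat template; the system message and generation settings here are illustrative, not the paper's evaluation setup.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("kaist-ai/janus-7b")
model = AutoModelForCausalLM.from_pretrained("kaist-ai/janus-7b")

# The user's values go into the system message, steering generation.
messages = [
    {"role": "system", "content": "You value concise, evidence-based answers "
                                  "written for a non-expert reader."},
    {"role": "user", "content": "Should I refrigerate tomatoes?"},
]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True,
                                 return_tensors="pt")
out = model.generate(inputs, max_new_tokens=256)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```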
null
https://openreview.net/forum?id=re2jPCnzkA
@inproceedings{ leboutet2024midgard, title={{MIDGA}rD: Modular Interpretable Diffusion over Graphs for Articulated Designs}, author={Quentin Leboutet and Nina Wiedemann and zhipeng cai and Michael Paulitsch and Kai Yuan}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=re2jPCnzkA} }
Providing functionality through articulation and interaction with objects is a key objective in 3D generation. We introduce MIDGArD (Modular Interpretable Diffusion over Graphs for Articulated Designs), a novel diffusion-based framework for articulated 3D asset generation. MIDGArD improves over foundational work in the field by enhancing quality, consistency, and controllability in the generation process. This is achieved through MIDGArD's modular approach that separates the problem into two primary components: structure generation and shape generation. The structure generation module of MIDGArD aims at producing coherent articulation features from noisy or incomplete inputs. It acts on the object's structural and kinematic attributes, represented as features of a graph that are progressively denoised to produce coherent and interpretable articulation solutions. This denoised graph then serves as an advanced conditioning mechanism for the shape generation module, a 3D generative model that populates each link of the articulated structure with consistent 3D meshes. Experiments show the superiority of MIDGArD in terms of the quality, consistency, and interpretability of the generated assets. Importantly, the generated models are fully simulatable, i.e., can be seamlessly integrated into standard physics engines such as MuJoCo, broadening MIDGArD's applicability to fields such as digital content creation, meta realities, and robotics.
MIDGArD: Modular Interpretable Diffusion over Graphs for Articulated Designs
[ "Quentin Leboutet", "Nina Wiedemann", "zhipeng cai", "Michael Paulitsch", "Kai Yuan" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=re0ly2Ylcu
@inproceedings{ jia2024decisionmaking, title={Decision-Making Behavior Evaluation Framework for {LLM}s under Uncertain Context}, author={Jingru Jia and Zehua Yuan and Junhao Pan and Paul E McNamara and Deming Chen}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=re0ly2Ylcu} }
When making decisions under uncertainty, individuals often deviate from rational behavior, which can be evaluated across three dimensions: risk preference, probability weighting, and loss aversion. Given the widespread use of large language models (LLMs) in supporting decision-making processes, it is crucial to assess whether their behavior aligns with human norms and ethical expectations or exhibits potential biases. Although several empirical studies have investigated the rationality and social behavior performance of LLMs, their internal decision-making tendencies and capabilities remain inadequately understood. This paper proposes a framework, grounded in behavioral economics theories, to evaluate the decision-making behaviors of LLMs. With a multiple-choice-list experiment, we initially estimate the degree of risk preference, probability weighting, and loss aversion in a context-free setting for three commercial LLMs: ChatGPT-4.0-Turbo, Claude-3-Opus, and Gemini-1.0-pro. Our results reveal that LLMs generally exhibit patterns similar to humans, such as risk aversion and loss aversion, with a tendency to overweight small probabilities, but there are significant variations in the degree to which these behaviors are expressed across different LLMs. Further, we explore their behavior when embedded with socio-demographic features of human beings, uncovering significant disparities across various demographic characteristics.
Decision-Making Behavior Evaluation Framework for LLMs under Uncertain Context
[ "Jingru Jia", "Zehua Yuan", "Junhao Pan", "Paul E McNamara", "Deming Chen" ]
NeurIPS.cc/2024/Conference
2406.05972
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=rbtnRsiXSN
@inproceedings{ zhang2024demo, title={DeMo: Decoupling Motion Forecasting into Directional Intentions and Dynamic States}, author={Bozhou Zhang and Nan Song and Li Zhang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=rbtnRsiXSN} }
Accurate motion forecasting for traffic agents is crucial for ensuring the safety and efficiency of autonomous driving systems in dynamically changing environments. Mainstream methods adopt a one-query-one-trajectory paradigm, where each query corresponds to a unique trajectory for predicting multi-modal trajectories. While straightforward and effective, the absence of detailed representation of future trajectories may yield suboptimal outcomes, given that the agent states dynamically evolve over time. To address this problem, we introduce DeMo, a framework that decouples multi-modal trajectory queries into two types: mode queries capturing distinct directional intentions and state queries tracking the agent's dynamic states over time. By leveraging this format, we separately optimize the multi-modality and dynamic evolutionary properties of trajectories. Subsequently, the mode and state queries are integrated to obtain a comprehensive and detailed representation of the trajectories. To achieve these operations, we additionally introduce combined Attention and Mamba techniques for global information aggregation and state sequence modeling, leveraging their respective strengths. Extensive experiments on both the Argoverse 2 and nuScenes benchmarks demonstrate that our DeMo achieves state-of-the-art performance in motion forecasting. In addition, we will make our code and models publicly available.
DeMo: Decoupling Motion Forecasting into Directional Intentions and Dynamic States
[ "Bozhou Zhang", "Nan Song", "Li Zhang" ]
NeurIPS.cc/2024/Conference
[ "https://github.com/fudan-zvg/demo" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=rblaF2euXQ
@inproceedings{ kim2024local, title={Local Anti-Concentration Class: Logarithmic Regret for Greedy Linear Contextual Bandit}, author={Seok-Jin Kim and Min-hwan Oh}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=rblaF2euXQ} }
We study the performance guarantees of exploration-free greedy algorithms for the linear contextual bandit problem. We introduce a novel condition, named the \textit{Local Anti-Concentration} (LAC) condition, which enables a greedy bandit algorithm to achieve provable efficiency. We show that the LAC condition is satisfied by a broad class of distributions, including Gaussian, exponential, uniform, Cauchy, and Student's~$t$ distributions, along with other exponential family distributions and their truncated variants. This significantly expands the class of distributions under which greedy algorithms can perform efficiently. Under our proposed LAC condition, we prove that the cumulative expected regret of the greedy algorithm for the linear contextual bandit is bounded by $\mathcal{O}(\operatorname{poly} \log T)$. Our results establish the widest range of distributions known to date that allow a sublinear regret bound for greedy algorithms, further achieving a sharp poly-logarithmic regret.
Local Anti-Concentration Class: Logarithmic Regret for Greedy Linear Contextual Bandit
[ "Seok-Jin Kim", "Min-hwan Oh" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
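For intuition, a toy sketch of the exploration-free greedy algorithm the abstract analyzes: ridge-regression estimates of the reward parameter with purely greedy arm selection, under stochastic (here Gaussian) contexts as covered by the LAC condition. Dimensions and noise levels are arbitrary choices, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T, n_arms = 5, 2000, 10
theta = rng.normal(size=d) / np.sqrt(d)      # unknown reward parameter
A, b = np.eye(d), np.zeros(d)                # ridge-regression statistics

for t in range(T):
    contexts = rng.normal(size=(n_arms, d))  # stochastic (Gaussian) contexts
    theta_hat = np.linalg.solve(A, b)        # current least-squares estimate
    arm = int(np.argmax(contexts @ theta_hat))  # greedy: no exploration bonus
    x = contexts[arm]
    reward = x @ theta + rng.normal(scale=0.1)
    A += np.outer(x, x)                      # update design matrix
    b += reward * x
```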
null
https://openreview.net/forum?id=rajRJ6WKj2
@inproceedings{ maillard2024debara, title={DeBa{RA}: Denoising-Based 3D Room Arrangement Generation}, author={L{\'e}opold Maillard and Nicolas Sereyjol-Garros and Tom Durand and Maks Ovsjanikov}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=rajRJ6WKj2} }
Generating realistic and diverse layouts of furnished indoor 3D scenes unlocks multiple interactive applications impacting a wide range of industries. The inherent complexity of object interactions, the limited amount of available data and the requirement to fulfill spatial constraints all make generative modeling for 3D scene synthesis and arrangement challenging. Current methods address these challenges either autoregressively or with off-the-shelf diffusion objectives that simultaneously predict all attributes without 3D reasoning considerations. In this paper, we introduce DeBaRA, a score-based model specifically tailored for precise, controllable and flexible arrangement generation in a bounded environment. We argue that the most critical component of a scene synthesis system is to accurately establish the size and position of various objects within a restricted area. Based on this insight, we propose a lightweight conditional score-based model designed with 3D spatial awareness at its core. We demonstrate that by focusing on spatial attributes of objects, a single trained DeBaRA model can be leveraged at test time to perform several downstream applications such as scene synthesis, completion and re-arrangement. Further, we introduce a novel Self Score Evaluation procedure so it can be optimally employed alongside external LLMs. We evaluate our approach through extensive experiments and demonstrate significant improvement upon state-of-the-art approaches in a range of scenarios.
DeBaRA: Denoising-Based 3D Room Arrangement Generation
[ "Léopold Maillard", "Nicolas Sereyjol-Garros", "Tom Durand", "Maks Ovsjanikov" ]
NeurIPS.cc/2024/Conference
2409.18336
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=rafVvthuxD
@inproceedings{ xie2024em, title={{EM} Distillation for One-step Diffusion Models}, author={Sirui Xie and Zhisheng Xiao and Diederik P Kingma and Tingbo Hou and Ying Nian Wu and Kevin Patrick Murphy and Tim Salimans and Ben Poole and Ruiqi Gao}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=rafVvthuxD} }
While diffusion models can learn complex distributions, sampling requires a computationally expensive iterative process. Existing distillation methods enable efficient sampling, but have notable limitations, such as performance degradation with very few sampling steps, reliance on training data access, or mode-seeking optimization that may fail to capture the full distribution. We propose EM Distillation (EMD), a maximum likelihood-based approach that distills a diffusion model to a one-step generator model with minimal loss of perceptual quality. Our approach is derived through the lens of Expectation-Maximization (EM), where the generator parameters are updated using samples from the joint distribution of the diffusion teacher prior and inferred generator latents. We develop a reparametrized sampling scheme and a noise cancellation technique that together stabilize the distillation process. We further reveal an interesting connection between our method and existing methods that minimize mode-seeking KL. EMD outperforms existing one-step generative methods in terms of FID scores on ImageNet-64 and ImageNet-128, and compares favorably with prior work on distilling text-to-image diffusion models.
EM Distillation for One-step Diffusion Models
[ "Sirui Xie", "Zhisheng Xiao", "Diederik P Kingma", "Tingbo Hou", "Ying Nian Wu", "Kevin Patrick Murphy", "Tim Salimans", "Ben Poole", "Ruiqi Gao" ]
NeurIPS.cc/2024/Conference
2405.16852
[ "" ]
https://huggingface.co/papers/2405.16852
6
10
1
9
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=raABeiV71j
@inproceedings{ singhania2024loki, title={Loki: Low-rank Keys for Efficient Sparse Attention}, author={Prajwal Singhania and Siddharth Singh and Shwai He and Soheil Feizi and Abhinav Bhatele}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=raABeiV71j} }
Inference on large language models (LLMs) can be expensive in terms of the compute and memory costs involved, especially when long sequence lengths are used. In particular, the self-attention mechanism used in LLM inference contributes significantly to these costs, which has sparked an interest in approximating the self-attention computation to reduce such costs. In this work, we propose to approximate self-attention by focusing on the dimensionality of key vectors computed in the attention block. Our analysis reveals that key vectors lie in a significantly lower-dimensional space, consistently across several datasets and models. Exploiting this observation, we propose Loki, a novel sparse attention method that ranks and selects tokens in the KV-cache based on attention scores computed in low-dimensional space. Our evaluations show that Loki is able to speed up the attention computation due to reduced data movement (load/store) and compute costs while maintaining the efficacy of the models better than other popular approximation methods.
Loki: Low-rank Keys for Efficient Sparse Attention
[ "Prajwal Singhania", "Siddharth Singh", "Shwai He", "Soheil Feizi", "Abhinav Bhatele" ]
NeurIPS.cc/2024/Conference
2406.02542
[ "https://github.com/hpcgroup/loki" ]
https://huggingface.co/papers/2406.02542
2
0
1
5
[]
[]
[]
[]
[]
[]
1
poster
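A rough sketch of the core Loki idea as we read the abstract (not the authors' implementation): score the cached keys in a low-dimensional subspace obtained offline via SVD, then run exact attention only over the top-k selected keys. All dimensions and k are arbitrary here.

```python
import torch

d, r, n, k = 128, 16, 1024, 64        # head dim, low rank, #cached keys, #kept keys
keys = torch.randn(n, d)              # KV-cache keys for one attention head
query = torch.randn(d)

# Offline: estimate a rank-r basis for the key space (e.g., via SVD/PCA).
_, _, Vh = torch.linalg.svd(keys, full_matrices=False)
P = Vh[:r].T                          # (d, r) projection onto top components

# Online: approximate attention logits in the low-dim space, select top-k keys.
approx_scores = (keys @ P) @ (P.T @ query)        # n approximate logits, O(nr) work
topk = approx_scores.topk(k).indices

# Exact attention restricted to the selected keys (reduced load/store traffic).
exact_weights = torch.softmax(keys[topk] @ query / d**0.5, dim=0)
```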
null
https://openreview.net/forum?id=rYs2Dmn9tD
@inproceedings{ cheng2024trace, title={Trace is the Next AutoDiff: Generative Optimization with Rich Feedback, Execution Traces, and {LLM}s}, author={Ching-An Cheng and Allen Nie and Adith Swaminathan}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=rYs2Dmn9tD} }
We study a class of optimization problems motivated by automating the design and update of AI systems like coding assistants, robots, and copilots. AutoDiff frameworks, like PyTorch, enable efficient end-to-end optimization of differentiable systems. However, general computational workflows can be non-differentiable and involve rich feedback (e.g. console output or user’s responses), heterogeneous parameters (e.g. prompts, codes), and intricate objectives (beyond maximizing a score). We investigate end-to-end generative optimization – using generative models such as LLMs within the optimizer for automatic updating of general computational workflows. We discover that workflow execution traces are akin to back-propagated gradients in AutoDiff and can provide key information to interpret feedback for efficient optimization. Formally, we frame a new mathematical setup, Optimization with Trace Oracle (OPTO). In OPTO, an optimizer receives an execution trace along with feedback on the computed output and updates parameters iteratively. We provide a Python library, Trace, that efficiently converts a workflow optimization problem into an OPTO instance using PyTorch-like syntax. Using Trace, we develop a general LLM-based generative optimizer called OptoPrime. In empirical studies, we find that OptoPrime is capable of first-order numerical optimization, prompt optimization, hyper-parameter tuning, robot controller design, code debugging, etc., and is often competitive with specialized optimizers for each domain. We envision Trace as an open research platform for devising novel generative optimizers and developing the next generation of interactive learning agents. Website: https://microsoft.github.io/Trace/.
Trace is the Next AutoDiff: Generative Optimization with Rich Feedback, Execution Traces, and LLMs
[ "Ching-An Cheng", "Allen Nie", "Adith Swaminathan" ]
NeurIPS.cc/2024/Conference
2406.16218
[ "" ]
https://huggingface.co/papers/2406.16218
1
1
1
3
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=rYjYwuM6yH
@inproceedings{ liao2024in, title={3-in-1: 2D Rotary Adaptation for Efficient Finetuning, Efficient Batching and Composability}, author={Baohao Liao and Christof Monz}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=rYjYwuM6yH} }
Parameter-efficient finetuning (PEFT) methods effectively adapt large language models (LLMs) to diverse downstream tasks, reducing storage and GPU memory demands. Despite these advantages, several applications pose new challenges to PEFT beyond mere parameter efficiency. One notable challenge involves the efficient deployment of LLMs equipped with multiple task- or user-specific adapters, particularly when different adapters are needed for distinct requests within the same batch. Another challenge is the interpretability of LLMs, which is crucial for understanding how LLMs function. Previous studies introduced various approaches to address different challenges. In this paper, we introduce a novel method, RoAd, which employs a straightforward 2D rotation to adapt LLMs and addresses all the above challenges: (1) RoAd is remarkably parameter-efficient, delivering optimal performance on GLUE, eight commonsense reasoning tasks and four arithmetic reasoning tasks with <0.1% trainable parameters; (2) RoAd facilitates the efficient serving of requests requiring different adapters within a batch, with an overhead comparable to element-wise multiplication instead of batch matrix multiplication; (3) RoAd enhances LLMs' interpretability through integration within a framework of distributed interchange intervention, demonstrated via composition experiments.
3-in-1: 2D Rotary Adaptation for Efficient Finetuning, Efficient Batching and Composability
[ "Baohao Liao", "Christof Monz" ]
NeurIPS.cc/2024/Conference
2409.00119
[ "https://github.com/baohaoliao/road" ]
https://huggingface.co/papers/2409.00119
0
0
0
2
[]
[]
[]
[]
[]
[]
1
poster
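A minimal sketch of the 2D-rotation adaptation as we understand it from the abstract: hidden features are grouped into pairs and each pair is rotated by a learnable angle, so applying an adapter costs only element-wise multiplications. This is our reading of the idea, not the official implementation.

```python
import torch
import torch.nn as nn

class Rotary2DAdapter(nn.Module):
    """Rotate each (even, odd) feature pair by a learnable angle."""
    def __init__(self, hidden_size: int):
        super().__init__()
        assert hidden_size % 2 == 0
        self.theta = nn.Parameter(torch.zeros(hidden_size // 2))  # one angle per pair

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (..., hidden_size); the rotation is element-wise per pair,
        # so batching requests with different adapters stays cheap.
        x1, x2 = x[..., 0::2], x[..., 1::2]
        cos, sin = self.theta.cos(), self.theta.sin()
        out = torch.empty_like(x)
        out[..., 0::2] = cos * x1 - sin * x2
        out[..., 1::2] = sin * x1 + cos * x2
        return out

adapter = Rotary2DAdapter(hidden_size=768)
h = adapter(torch.randn(4, 16, 768))   # (batch, seq, hidden) passes through unchanged in shape
```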
null
https://openreview.net/forum?id=rXGxbDJadh
@inproceedings{ he2024everyday, title={Everyday Object Meets Vision-and-Language Navigation Agent via Backdoor}, author={Keji He and Kehan Chen and Jiawang Bai and Yan Huang and Qi Wu and Shu-Tao Xia and Liang Wang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=rXGxbDJadh} }
Vision-and-Language Navigation (VLN) requires an agent to dynamically explore environments following natural language. Because VLN agents are closely integrated into daily life, any malicious behavior they exhibit poses a substantial threat to privacy and property. However, this serious issue has long been overlooked. In this paper, we pioneer the exploration of an object-aware backdoored VLN, achieved by implanting object-aware backdoors during the training phase. Tailored to the unique VLN nature of cross-modality and continuous decision-making, we propose a novel backdoored VLN paradigm: IPR Backdoor. This enables the agent to act in abnormal behavior once encountering the object triggers during language-guided navigation in unseen environments, thereby executing an attack on the target scene. Experiments demonstrate the effectiveness of our method in both physical and digital spaces across different VLN agents, as well as its robustness to various visual and textual variations. Additionally, our method preserves navigation performance in normal scenarios while remaining remarkably stealthy.
Everyday Object Meets Vision-and-Language Navigation Agent via Backdoor
[ "Keji He", "Kehan Chen", "Jiawang Bai", "Yan Huang", "Qi Wu", "Shu-Tao Xia", "Liang Wang" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=rVSc3HIZS4
@inproceedings{ shani2024multiturn, title={Multi-turn Reinforcement Learning with Preference Human Feedback}, author={Lior Shani and Aviv Rosenberg and Asaf Cassel and Oran Lang and Daniele Calandriello and Avital Zipori and Hila Noga and Orgad Keller and Bilal Piot and Idan Szpektor and Avinatan Hassidim and Yossi Matias and Remi Munos}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=rVSc3HIZS4} }
Reinforcement Learning from Human Feedback (RLHF) has become the standard approach for aligning Large Language Models (LLMs) with human preferences, allowing LLMs to demonstrate remarkable abilities in various tasks. Existing methods work by emulating the human preference at the single decision (turn) level, limiting their capabilities in settings that require planning or multi-turn interactions to achieve a long-term goal. In this paper, we address this issue by developing novel methods for Reinforcement Learning (RL) from preference feedback between two full multi-turn conversations. In the tabular setting, we present a novel mirror-descent-based policy optimization algorithm for the general multi-turn preference-based RL problem, and prove its convergence to Nash equilibrium. To evaluate performance, we create a new environment, Education Dialogue, where a teacher agent guides a student in learning a random topic, and show that a deep RL variant of our algorithm outperforms RLHF baselines. Finally, we show that in an environment with explicit rewards, our algorithm recovers the same performance as a reward-based RL baseline, despite relying solely on a weaker preference signal.
Multi-turn Reinforcement Learning with Preference Human Feedback
[ "Lior Shani", "Aviv Rosenberg", "Asaf Cassel", "Oran Lang", "Daniele Calandriello", "Avital Zipori", "Hila Noga", "Orgad Keller", "Bilal Piot", "Idan Szpektor", "Avinatan Hassidim", "Yossi Matias", "Remi Munos" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=rTxCIWsfsD
@inproceedings{ yang2024uncertaintybased, title={Uncertainty-based Offline Variational Bayesian Reinforcement Learning for Robustness under Diverse Data Corruptions}, author={Rui Yang and Jie Wang and Guoping Wu and Bin Li}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=rTxCIWsfsD} }
Real-world offline datasets are often subject to data corruptions (such as noise or adversarial attacks) due to sensor failures or malicious attacks. Despite advances in robust offline reinforcement learning (RL), existing methods struggle to learn robust agents under high uncertainty caused by the diverse corrupted data (i.e., corrupted states, actions, rewards, and dynamics), leading to performance degradation in clean environments. To tackle this problem, we propose a novel robust variational Bayesian inference for offline RL (TRACER). It is the first to introduce Bayesian inference into this setting, capturing uncertainty from offline data to achieve robustness against all types of data corruption. Specifically, TRACER first models all corruptions as the uncertainty in the action-value function. Then, to capture such uncertainty, it uses all offline data as the observations to approximate the posterior distribution of the action-value function under a Bayesian inference framework. An appealing feature of TRACER is that it can distinguish corrupted data from clean data using an entropy-based uncertainty measure, since corrupted data often induces higher uncertainty and entropy. Based on the aforementioned measure, TRACER can regulate the loss associated with corrupted data to reduce its influence, thereby enhancing robustness and performance in clean environments. Experiments demonstrate that TRACER significantly outperforms several state-of-the-art approaches across both individual and simultaneous data corruptions.
Uncertainty-based Offline Variational Bayesian Reinforcement Learning for Robustness under Diverse Data Corruptions
[ "Rui Yang", "Jie Wang", "Guoping Wu", "Bin Li" ]
NeurIPS.cc/2024/Conference
2411.00465
[ "https://github.com/MIRALab-USTC/RL-TRACER" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=rTONicCCJm
@inproceedings{ deng2024learning, title={Learning from Highly Sparse Spatio-temporal Data}, author={Leyan Deng and Chenwang Wu and Defu Lian and Enhong Chen}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=rTONicCCJm} }
Incomplete spatio-temporal data in the real world have spawned a great deal of research. However, existing methods often utilize iterative message-passing across temporal and spatial dimensions, resulting in substantial information loss and high computational cost. We provide a theoretical analysis revealing that such iterative models are not only susceptible to data sparsity but also to graph sparsity, causing unstable performance across datasets. To overcome these limitations, we introduce a novel method named One-step Propagation and Confidence-based Refinement (OPCR). In the first stage, OPCR leverages inherent spatial and temporal relationships by employing a sparse attention mechanism. These modules propagate limited observations directly to the global context through one-step imputation, which is theoretically affected only by data sparsity. Following this, we assign confidence levels to the initial imputations by correlating missing data with valid data. This confidence-based propagation refines the separate spatial and temporal imputation results through spatio-temporal dependencies. We evaluate the proposed model across various downstream tasks involving highly sparse spatio-temporal data. Empirical results indicate that our model outperforms state-of-the-art imputation methods, demonstrating its superior effectiveness and robustness.
Learning from Highly Sparse Spatio-temporal Data
[ "Leyan Deng", "Chenwang Wu", "Defu Lian", "Enhong Chen" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=rQYyWGYuzK
@inproceedings{ tran2024monomial, title={Monomial Matrix Group Equivariant Neural Functional Networks}, author={Hoang V. Tran and Thieu Vo and Tho Tran Huu and An Nguyen The and Tan Minh Nguyen}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=rQYyWGYuzK} }
Neural functional networks (NFNs) have recently gained significant attention due to their diverse applications, ranging from predicting network generalization and network editing to classifying implicit neural representations. Previous NFN designs often depend on permutation symmetries in neural networks' weights, which traditionally arise from the unordered arrangement of neurons in hidden layers. However, these designs do not take into account the weight scaling symmetries of $\operatorname{ReLU}$ networks, and the weight sign flipping symmetries of $\operatorname{sin}$ or $\operatorname{Tanh}$ networks. In this paper, we extend the study of the group action on the network weights from the group of permutation matrices to the group of monomial matrices by incorporating scaling/sign-flipping symmetries. Particularly, we encode these scaling/sign-flipping symmetries by designing our corresponding equivariant and invariant layers. We name our new family of NFNs the Monomial Matrix Group Equivariant Neural Functional Networks (Monomial-NFN). Because of the expansion of the symmetries, Monomial-NFN has far fewer independent trainable parameters compared to the baseline NFNs in the literature, thus enhancing the model's efficiency. Moreover, for fully connected and convolutional neural networks, we theoretically prove that all groups that leave these networks invariant while acting on their weight spaces are some subgroups of the monomial matrix group. We provide empirical evidence to demonstrate the advantages of our model over existing baselines, achieving competitive performance and efficiency. The code is publicly available at https://github.com/MathematicalAI-NUS/Monomial-NFN.
Monomial Matrix Group Equivariant Neural Functional Networks
[ "Hoang V. Tran", "Thieu Vo", "Tho Tran Huu", "An Nguyen The", "Tan Minh Nguyen" ]
NeurIPS.cc/2024/Conference
2409.11697
[ "https://github.com/mathematicalai-nus/monomial-nfn" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=rPgc5brxmT
@inproceedings{ gladin2024interactionforce, title={Interaction-Force Transport Gradient Flows}, author={Egor Gladin and Pavel Dvurechensky and Alexander Mielke and Jia-Jie Zhu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=rPgc5brxmT} }
This paper presents a new gradient flow dissipation geometry over non-negative and probability measures. This is motivated by a principled construction that combines the unbalanced optimal transport and interaction forces modeled by reproducing kernels. Using a precise connection between the Hellinger geometry and the maximum mean discrepancy (MMD), we propose the interaction-force transport (IFT) gradient flows and their spherical variant via an infimal convolution of the Wasserstein and spherical MMD tensors. We then develop a particle-based optimization algorithm based on the JKO-splitting scheme of the mass-preserving spherical IFT gradient flows. Finally, we provide both theoretical global exponential convergence guarantees and improved empirical simulation results for applying the IFT gradient flows to the sampling task of MMD-minimization. Furthermore, we prove that the spherical IFT gradient flow enjoys the best of both worlds by providing the global exponential convergence guarantee for both the MMD and KL energy.
Interaction-Force Transport Gradient Flows
[ "Egor Gladin", "Pavel Dvurechensky", "Alexander Mielke", "Jia-Jie Zhu" ]
NeurIPS.cc/2024/Conference
2405.17075
[ "https://github.com/egorgladin/ift_flow" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=rPKCrzdqJx
@inproceedings{ harris2024regret, title={Regret Minimization in Stackelberg Games with Side Information}, author={Keegan Harris and Steven Wu and Maria Florina Balcan}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=rPKCrzdqJx} }
Algorithms for playing in Stackelberg games have been deployed in real-world domains including airport security, anti-poaching efforts, and cyber-crime prevention. However, these algorithms often fail to take into consideration the additional information available to each player (e.g. traffic patterns, weather conditions, network congestion), a salient feature of reality which may significantly affect both players' optimal strategies. We formalize such settings as Stackelberg games with side information, in which both players observe an external context before playing. The leader commits to a (context-dependent) strategy, and the follower best-responds to both the leader's strategy and the context. We focus on the online setting in which a sequence of followers arrive over time, and the context may change from round-to-round. In sharp contrast to the non-contextual version, we show that it is impossible for the leader to achieve good performance (measured by regret) in the full adversarial setting. Motivated by our impossibility result, we show that no-regret learning is possible in two natural relaxations: the setting in which the sequence of followers is chosen stochastically and the sequence of contexts is adversarial, and the setting in which the sequence of contexts is stochastic and the sequence of followers is chosen by an adversary.
Regret Minimization in Stackelberg Games with Side Information
[ "Keegan Harris", "Steven Wu", "Maria Florina Balcan" ]
NeurIPS.cc/2024/Conference
2402.08576
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=rM3FFH1mqk
@inproceedings{ chen2024semidefinite, title={Semidefinite Relaxations of the Gromov-Wasserstein Distance}, author={Junyu Chen and Binh Nguyen and Shang Hui Koh and Yong Sheng Soh}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=rM3FFH1mqk} }
The Gromov-Wasserstein (GW) distance is an extension of the optimal transport problem that allows one to match objects between incomparable spaces. At its core, the GW distance is specified as the solution of a non-convex quadratic program and is not known to be tractable to solve. In particular, existing solvers for the GW distance are only able to find locally optimal solutions. In this work, we propose a semi-definite programming (SDP) relaxation of the GW distance. The relaxation can be viewed as the Lagrangian dual of the GW distance augmented with constraints that relate to the linear and quadratic terms of transportation plans. In particular, our relaxation provides a tractable (polynomial-time) algorithm to compute globally optimal transportation plans (in some instances) together with an accompanying proof of global optimality. Our numerical experiments suggest that the proposed relaxation is strong in that it frequently computes the globally optimal solution. Our Python implementation is available at https://github.com/tbng/gwsdp.
Semidefinite Relaxations of the Gromov-Wasserstein Distance
[ "Junyu Chen", "Binh Nguyen", "Shang Hui Koh", "Yong Sheng Soh" ]
NeurIPS.cc/2024/Conference
2312.14572
[ "https://github.com/tbng/gwsdp" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
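For contrast with the SDP relaxation above, the locally optimal GW solutions the abstract mentions are typically computed with off-the-shelf solvers such as POT (Python Optimal Transport). A small sketch on random point clouds; sizes and metrics are arbitrary choices here.

```python
import numpy as np
import ot  # POT: Python Optimal Transport

rng = np.random.default_rng(0)
X, Y = rng.normal(size=(30, 3)), rng.normal(size=(40, 2))  # incomparable spaces

# Intra-space distance matrices and uniform marginals.
C1, C2 = ot.dist(X, X), ot.dist(Y, Y)
p, q = ot.unif(len(X)), ot.unif(len(Y))

# Non-convex solver: returns a locally (not globally) optimal transport plan.
T = ot.gromov.gromov_wasserstein(C1, C2, p, q, loss_fun="square_loss")
print(T.shape)  # (30, 40) coupling between the two point clouds
```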
null
https://openreview.net/forum?id=rM24UUgZg8
@inproceedings{ lee2024activating, title={Activating Self-Attention for Multi-Scene Absolute Pose Regression}, author={Miso Lee and Jihwan Kim and Jae-Pil Heo}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=rM24UUgZg8} }
Multi-scene absolute pose regression addresses the demand for fast and memory-efficient camera pose estimation across various real-world environments. Recently, transformer-based models have been devised to regress the camera pose directly in multiple scenes. Despite their potential, transformer encoders are underutilized due to collapsed self-attention maps with low representation capacity. This work highlights the problem and investigates it from a new perspective: distortion of the query-key embedding space. Based on the statistical analysis, we reveal that queries and keys are mapped in completely different spaces while only a few keys are blended into the query region. This leads to the collapse of the self-attention map as all queries are considered similar to those few keys. Therefore, we propose simple but effective solutions to activate self-attention. Concretely, we present an auxiliary loss that aligns queries and keys, preventing the distortion of query-key space and encouraging the model to find global relations by self-attention. In addition, the fixed sinusoidal positional encoding is adopted instead of an undertrained learnable one to reflect appropriate positional clues into the inputs of self-attention. As a result, our approach resolves the aforementioned problem effectively, thus outperforming existing methods in both outdoor and indoor scenes.
Activating Self-Attention for Multi-Scene Absolute Pose Regression
[ "Miso Lee", "Jihwan Kim", "Jae-Pil Heo" ]
NeurIPS.cc/2024/Conference
2411.01443
[ "https://github.com/dlalth557/ActMST" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=rLJisJmMKw
@inproceedings{ seo2024genwarp, title={GenWarp: Single Image to Novel Views with Semantic-Preserving Generative Warping}, author={Junyoung Seo and Kazumi Fukuda and Takashi Shibuya and Takuya Narihira and Naoki Murata and Shoukang Hu and Chieh-Hsin Lai and Seungryong Kim and Yuki Mitsufuji}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=rLJisJmMKw} }
Generating novel views from a single image remains a challenging task due to the complexity of 3D scenes and the limited diversity in the existing multi-view datasets to train a model on. Recent research combining large-scale text-to-image (T2I) models with monocular depth estimation (MDE) has shown promise in handling in-the-wild images. In these methods, an input view is geometrically warped to novel views with estimated depth maps, then the warped image is inpainted by T2I models. However, they struggle with noisy depth maps and loss of semantic details when warping an input view to novel viewpoints. In this paper, we propose a novel approach for single-shot novel view synthesis, a semantic-preserving generative warping framework that enables T2I generative models to learn where to warp and where to generate, through augmenting cross-view attention with self-attention. Our approach addresses the limitations of existing methods by conditioning the generative model on source view images and incorporating geometric warping signals. Qualitative and quantitative evaluations demonstrate that our model outperforms existing methods in both in-domain and out-of-domain scenarios. Project page is available at https://GenWarp-NVS.github.io.
GenWarp: Single Image to Novel Views with Semantic-Preserving Generative Warping
[ "Junyoung Seo", "Kazumi Fukuda", "Takashi Shibuya", "Takuya Narihira", "Naoki Murata", "Shoukang Hu", "Chieh-Hsin Lai", "Seungryong Kim", "Yuki Mitsufuji" ]
NeurIPS.cc/2024/Conference
2405.17251
[ "https://github.com/sony/genwarp" ]
https://huggingface.co/papers/2405.17251
3
2
0
9
[ "Sony/genwarp" ]
[]
[ "Sony/genwarp" ]
[ "Sony/genwarp" ]
[]
[ "Sony/genwarp" ]
1
poster
null
https://openreview.net/forum?id=rL7OtNsD9a
@inproceedings{ lee2024episodic, title={Episodic Future Thinking Mechanism for Multi-agent Reinforcement Learning}, author={Dongsu Lee and Minhae Kwon}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=rL7OtNsD9a} }
Understanding cognitive processes in multi-agent interactions is a primary goal in cognitive science. It can guide the direction of artificial intelligence (AI) research toward social decision-making in multi-agent systems, which includes uncertainty from character heterogeneity. In this paper, we introduce an *episodic future thinking (EFT) mechanism* for a reinforcement learning (RL) agent, inspired by the cognitive processes observed in animals. To enable future thinking functionality, we first develop a *multi-character policy* that captures diverse characters with an ensemble of heterogeneous policies. The *character* of an agent is defined as a different weight combination on reward components, representing distinct behavioral preferences. The future thinking agent collects observation-action trajectories of the target agents and leverages the pre-trained multi-character policy to infer their characters. Once the character is inferred, the agent predicts the upcoming actions of target agents and simulates the potential future scenario. This capability allows the agent to adaptively select the optimal action, considering the predicted future scenario in multi-agent interactions. To evaluate the proposed mechanism, we consider the multi-agent autonomous driving scenario in which autonomous vehicles with different driving traits are on the road. Simulation results demonstrate that the EFT mechanism with accurate character inference leads to a higher reward than existing multi-agent solutions. We also confirm that the effect of reward improvement remains valid across societies with different levels of character diversity.
Episodic Future Thinking Mechanism for Multi-agent Reinforcement Learning
[ "Dongsu Lee", "Minhae Kwon" ]
NeurIPS.cc/2024/Conference
2410.17373
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=rIOl7KbSkv
@inproceedings{ pang2024no, title={No Free Lunch in {LLM} Watermarking: Trade-offs in Watermarking Design Choices}, author={Qi Pang and Shengyuan Hu and Wenting Zheng and Virginia Smith}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=rIOl7KbSkv} }
Advances in generative models have made it possible for AI-generated text, code, and images to mirror human-generated content in many applications. Watermarking, a technique that aims to embed information in the output of a model to verify its source, is useful for mitigating the misuse of such AI-generated content. However, we show that common design choices in LLM watermarking schemes make the resulting systems surprisingly susceptible to attack---leading to fundamental trade-offs in robustness, utility, and usability. To navigate these trade-offs, we rigorously study a set of simple yet effective attacks on common watermarking systems, and propose guidelines and defenses for LLM watermarking in practice.
No Free Lunch in LLM Watermarking: Trade-offs in Watermarking Design Choices
[ "Qi Pang", "Shengyuan Hu", "Wenting Zheng", "Virginia Smith" ]
NeurIPS.cc/2024/Conference
2402.16187
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=rIOTceoNc8
@inproceedings{ joly2024graph, title={Graph Coarsening with Message-Passing Guarantees}, author={Antonin Joly and Nicolas Keriven}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=rIOTceoNc8} }
Graph coarsening aims to reduce the size of a large graph while preserving some of its key properties, which has been used in many applications to reduce computational load and memory footprint. For instance, in graph machine learning, training Graph Neural Networks (GNNs) on coarsened graphs leads to drastic savings in time and memory. However, GNNs rely on the Message-Passing (MP) paradigm, and classical spectral preservation guarantees for graph coarsening do not directly lead to theoretical guarantees when performing naive message-passing on the coarsened graph. In this work, we propose a new message-passing operation specific to coarsened graphs, which exhibits theoretical guarantees on the preservation of the propagated signal. Interestingly, and in a sharp departure from previous proposals, this operation on coarsened graphs is oriented, even when the original graph is undirected. We conduct node classification tasks on synthetic and real data and observe improved results compared to performing naive message-passing on the coarsened graph.
Graph Coarsening with Message-Passing Guarantees
[ "Antonin Joly", "Nicolas Keriven" ]
NeurIPS.cc/2024/Conference
2405.18127
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=rI80PHlnFm
@inproceedings{ mehta2024model, title={Model Based Inference of Synaptic Plasticity Rules}, author={Yash Mehta and Danil Tyulmankov and Adithya E. Rajagopalan and Glenn C Turner and James E Fitzgerald and Jan Funke}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=rI80PHlnFm} }
Inferring the synaptic plasticity rules that govern learning in the brain is a key challenge in neuroscience. We present a novel computational method to infer these rules from experimental data, applicable to both neural and behavioral data. Our approach approximates plasticity rules using a parameterized function, employing either truncated Taylor series for theoretical interpretability or multilayer perceptrons. These plasticity parameters are optimized via gradient descent over entire trajectories to align closely with observed neural activity or behavioral learning dynamics. This method can uncover complex rules that induce long nonlinear time dependencies, particularly involving factors like postsynaptic activity and current synaptic weights. We validate our approach through simulations, successfully recovering established rules such as Oja's, as well as more intricate plasticity rules with reward-modulated terms. We assess the robustness of our technique to noise and apply it to behavioral data from \textit{Drosophila} in a probabilistic reward-learning experiment. Notably, our findings reveal an active forgetting component in reward learning in flies, improving predictive accuracy over previous models. This modeling framework offers a promising new avenue for elucidating the computational principles of synaptic plasticity and learning in the brain.
Model Based Inference of Synaptic Plasticity Rules
[ "Yash Mehta", "Danil Tyulmankov", "Adithya E. Rajagopalan", "Glenn C Turner", "James E Fitzgerald", "Jan Funke" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=rI7oZj1WMc
@inproceedings{ chua2024learning, title={Learning Successor Features the Simple Way}, author={Raymond Chua and Arna Ghosh and Christos Kaplanis and Blake Aaron Richards and Doina Precup}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=rI7oZj1WMc} }
In Deep Reinforcement Learning (RL), it is a challenge to learn representations that do not exhibit catastrophic forgetting or interference in non-stationary environments. Successor Features (SFs) offer a potential solution to this challenge. However, canonical techniques for learning SFs from pixel-level observations often lead to representation collapse, wherein representations degenerate and fail to capture meaningful variations in the data. More recent methods for learning SFs can avoid representation collapse, but they often involve complex losses and multiple learning phases, reducing their efficiency. We introduce a novel, simple method for learning SFs directly from pixels. Our approach uses a combination of a Temporal-difference (TD) loss and a reward prediction loss, which together capture the basic mathematical definition of SFs. We show that our approach matches or outperforms existing SF learning techniques in both 2D (Minigrid) and 3D (Miniworld) mazes, for both single and continual learning scenarios. Moreover, our technique is efficient and can reach higher levels of performance in less time than other approaches. Our work provides a new, streamlined technique for learning SFs directly from pixel observations, with no pretraining required.
Learning Successor Features the Simple Way
[ "Raymond Chua", "Arna Ghosh", "Christos Kaplanis", "Blake Aaron Richards", "Doina Precup" ]
NeurIPS.cc/2024/Conference
2410.22133
[ "https://github.com/raymondchua/simple_successor_features" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=rGEDFS3emy
@inproceedings{ zhuang2024foal, title={F-{OAL}: Forward-only Online Analytic Learning with Fast Training and Low Memory Footprint in Class Incremental Learning}, author={Huiping Zhuang and Yuchen Liu and Run He and Kai Tong and Ziqian Zeng and Cen Chen and Yi Wang and Lap-Pui Chau}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=rGEDFS3emy} }
Online Class Incremental Learning (OCIL) aims to train models incrementally, where data arrive in mini-batches, and previous data are not accessible. A major challenge in OCIL is Catastrophic Forgetting, i.e., the loss of previously learned knowledge. Among existing baselines, replay-based methods show competitive results but require extra memory for storing exemplars, while exemplar-free (i.e., data need not be stored for replay in production) methods are resource-friendly but often lack accuracy. In this paper, we propose an exemplar-free approach—Forward-only Online Analytic Learning (F-OAL). Unlike traditional methods, F-OAL does not rely on back-propagation and is forward-only, significantly reducing memory usage and computational time. Paired with a frozen pre-trained encoder with Feature Fusion, F-OAL only needs to update a linear classifier via recursive least squares. This approach simultaneously achieves high accuracy and low resource consumption. Extensive experiments on benchmark datasets demonstrate F-OAL’s robust performance in OCIL scenarios. Code is available at: https://github.com/liuyuchen-cz/F-OAL
F-OAL: Forward-only Online Analytic Learning with Fast Training and Low Memory Footprint in Class Incremental Learning
[ "Huiping Zhuang", "Yuchen Liu", "Run He", "Kai Tong", "Ziqian Zeng", "Cen Chen", "Yi Wang", "Lap-Pui Chau" ]
NeurIPS.cc/2024/Conference
2403.15751
[ "https://github.com/liuyuchen-cz/f-oal" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
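A sketch of a recursive-least-squares (RLS) update for a linear classifier, the kind of forward-only, back-propagation-free update the F-OAL abstract describes; shapes and initialization are illustrative assumptions, not the paper's settings.

```python
import torch

d, c = 512, 10                      # frozen-encoder feature dim, number of classes
W = torch.zeros(d, c)               # linear classifier weights
R = torch.eye(d)                    # running inverse-autocorrelation estimate

def rls_update(W, R, feats, onehot):
    """One mini-batch of forward-only updates; feats: (b, d), onehot: (b, c)."""
    for x, y in zip(feats, onehot):
        x = x.unsqueeze(1)                       # (d, 1)
        k = R @ x / (1.0 + x.T @ R @ x)          # gain vector (d, 1)
        W = W + k @ (y.unsqueeze(0) - x.T @ W)   # correct prediction toward target
        R = R - k @ (x.T @ R)                    # Sherman-Morrison rank-1 downdate
    return W, R

# Stream of mini-batches, no replay buffer and no gradients needed.
feats, labels = torch.randn(32, d), torch.randint(0, c, (32,))
W, R = rls_update(W, R, feats, torch.eye(c)[labels])
```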
null
https://openreview.net/forum?id=rF1YRtZfoJ
@inproceedings{ jha2024clapclip, title={{CLAP}4{CLIP}: Continual Learning with Probabilistic Finetuning for Vision-Language Models}, author={Saurav Jha and Dong Gong and Lina Yao}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=rF1YRtZfoJ} }
Continual learning (CL) aims to help deep neural networks to learn new knowledge while retaining what has been learned. Owing to their powerful generalizability, pre-trained vision-language models such as Contrastive Language-Image Pre-training (CLIP) have lately gained traction as practical CL candidates. However, the domain mismatch between the pre-training and the downstream CL tasks calls for finetuning of the CLIP on the latter. The deterministic nature of the existing finetuning methods makes them overlook the many possible interactions across the modalities and deems them unsafe for high-risk tasks requiring reliable uncertainty estimation. To address these, our work proposes **C**ontinual **L**e**A**rning with **P**robabilistic finetuning (CLAP) - a probabilistic modeling framework over visual-guided text features per task, thus providing more calibrated CL finetuning. Unlike recent data-hungry anti-forgetting CL techniques, CLAP alleviates forgetting by exploiting the rich pre-trained knowledge of CLIP for weight initialization and distribution regularization of task-specific parameters. Cooperating with the diverse range of existing prompting methods, CLAP can surpass the predominant deterministic finetuning approaches for CL with CLIP. We conclude with out-of-the-box applications of superior uncertainty estimation abilities of CLAP including novel data detection and exemplar selection within the existing CL setups. Our code is available at https://github.com/srvCodes/clap4clip.
CLAP4CLIP: Continual Learning with Probabilistic Finetuning for Vision-Language Models
[ "Saurav Jha", "Dong Gong", "Lina Yao" ]
NeurIPS.cc/2024/Conference
2403.19137
[ "https://github.com/srvcodes/clap4clip" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=rDoPMODpki
@inproceedings{ jiang2024kgfit, title={{KG}-{FIT}: Knowledge Graph Fine-Tuning Upon Open-World Knowledge}, author={Pengcheng Jiang and Lang Cao and Cao Xiao and Parminder Bhatia and Jimeng Sun and Jiawei Han}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=rDoPMODpki} }
Knowledge Graph Embedding (KGE) techniques are crucial in learning compact representations of entities and relations within a knowledge graph, facilitating efficient reasoning and knowledge discovery. While existing methods typically focus either on training KGE models solely based on graph structure or on fine-tuning pre-trained language models with classification data in KGs, our proposed KG-FIT leverages LLM-guided refinement to construct a semantically coherent hierarchical structure of entity clusters. By incorporating this hierarchical knowledge along with textual information during the fine-tuning process, KG-FIT effectively captures both global semantics from the LLM and local semantics from the KG. Extensive experiments on the benchmark datasets FB15K-237, YAGO3-10, and PrimeKG demonstrate the superiority of KG-FIT over state-of-the-art pre-trained language model-based methods, achieving improvements of 14.4\%, 13.5\%, and 11.9\% in the Hits@10 metric for the link prediction task, respectively. Furthermore, KG-FIT yields substantial performance gains of 12.6\%, 6.7\%, and 17.7\% compared to the structure-based base models upon which it is built. These results highlight the effectiveness of KG-FIT in incorporating open-world knowledge from LLMs to significantly enhance the expressiveness and informativeness of KG embeddings.
KG-FIT: Knowledge Graph Fine-Tuning Upon Open-World Knowledge
[ "Pengcheng Jiang", "Lang Cao", "Cao Xiao", "Parminder Bhatia", "Jimeng Sun", "Jiawei Han" ]
NeurIPS.cc/2024/Conference
2405.16412
[ "https://github.com/pat-jj/KG-FIT" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=rCnZrFikX6
@inproceedings{ zeng2024neural, title={Neural Persistence Dynamics}, author={Sebastian Zeng and Florian Graf and Martin Uray and Stefan Huber and Roland Kwitt}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=rCnZrFikX6} }
We consider the problem of learning the dynamics in the topology of time-evolving point clouds, the prevalent spatiotemporal model for systems exhibiting collective behavior, such as swarms of insects and birds or particles in physics. In such systems, patterns emerge from (local) interactions among self-propelled entities. While several well-understood governing equations for motion and interaction exist, they are notoriously difficult to fit to data, as most prior work requires knowledge about individual motion trajectories, i.e., a requirement that is challenging to satisfy with an increasing number of entities. To evade such confounding factors, we investigate collective behavior from a _topological perspective_, but instead of summarizing entire observation sequences (as done previously), we propose learning a latent dynamical model from topological features _per time point_. The latter is then used to formulate a downstream regression task to predict the parametrization of some a priori specified governing equation. We implement this idea based on a latent ODE learned from vectorized (static) persistence diagrams and show that a combination of recent stability results for persistent homology justifies this modeling choice. Various (ablation) experiments not only demonstrate the relevance of each model component but provide compelling empirical evidence that our proposed model -- _Neural Persistence Dynamics_ -- substantially outperforms the state-of-the-art across a diverse set of parameter regression tasks.
Neural Persistence Dynamics
[ "Sebastian Zeng", "Florian Graf", "Martin Uray", "Stefan Huber", "Roland Kwitt" ]
NeurIPS.cc/2024/Conference
2405.15732
[ "https://github.com/plus-rkwitt/neural_persistence_dynamics" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=rCXTkIhkbF
@inproceedings{ franke2024improving, title={Improving Deep Learning Optimization through Constrained Parameter Regularization}, author={J{\"o}rg K.H. Franke and Michael Hefenbrock and Gregor Koehler and Frank Hutter}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=rCXTkIhkbF} }
Regularization is a critical component in deep learning. The most commonly used approach, weight decay, applies a constant penalty coefficient uniformly across all parameters. This may be overly restrictive for some parameters, while insufficient for others. To address this, we present Constrained Parameter Regularization (CPR) as an alternative to traditional weight decay. Unlike the uniform application of a single penalty, CPR enforces an upper bound on a statistical measure, such as the L$_2$-norm, of individual parameter matrices. Consequently, learning becomes a constrained optimization problem, which we tackle using an adaptation of the augmented Lagrangian method. CPR introduces only a minor runtime overhead and only requires setting an upper bound. We propose simple yet efficient mechanisms for initializing this bound, so that CPR requires at most one hyperparameter, akin to weight decay. Our empirical studies on computer vision and language modeling tasks demonstrate CPR's effectiveness. The results show that CPR can outperform traditional weight decay and increase performance in pre-training and fine-tuning.
Improving Deep Learning Optimization through Constrained Parameter Regularization
[ "Jörg K.H. Franke", "Michael Hefenbrock", "Gregor Koehler", "Frank Hutter" ]
NeurIPS.cc/2024/Conference
2311.09058
[ "https://github.com/automl/cpr" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
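The CPR record above casts per-matrix regularization as a constrained optimization problem solved with an adaptation of the augmented Lagrangian method. Below is a minimal sketch of one plausible per-matrix update in PyTorch, assuming the constraint c(W) = ||W||_2^2 - kappa <= 0; all names (`cpr_regularize`, `lam`, `kappa`, `mu`) are hypothetical and the paper's exact adaptation may differ:

```python
import torch

def cpr_regularize(weight: torch.Tensor, lam: float, kappa: float,
                   mu: float = 1.0) -> float:
    """Augmented-Lagrangian-style update enforcing ||W||_2^2 <= kappa on one matrix."""
    c = weight.square().sum().item() - kappa         # constraint value; feasible iff c <= 0
    coeff = max(0.0, lam + mu * c)                   # active only near or above the bound
    if weight.grad is not None:
        weight.grad.add_(weight, alpha=2.0 * coeff)  # grad of coeff * c(W), since grad c(W) = 2W
    return coeff                                     # dual ascent on the multiplier, kept nonnegative
```

Applied to each parameter matrix after `loss.backward()` and before `optimizer.step()`, this slots in where a decoupled weight-decay hook would normally sit.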
null
https://openreview.net/forum?id=r8YntmAd0g
@inproceedings{ zhang2024doppler, title={{DOPPLER}: Differentially Private Optimizers with Low-pass Filter for Privacy Noise Reduction}, author={Xinwei Zhang and Zhiqi Bu and Mingyi Hong and Meisam Razaviyayn}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=r8YntmAd0g} }
Privacy is a growing concern in modern deep-learning systems and applications. Differentially private (DP) training prevents the leakage of sensitive information in the collected training data from the trained machine learning models. DP optimizers, including DP stochastic gradient descent (DPSGD) and its variants, privatize the training procedure by gradient clipping and *DP noise* injection. However, in practice, DP models trained using DPSGD and its variants often suffer from significant model performance degradation. Such degradation prevents the application of DP optimization in many key tasks, such as foundation model pretraining. In this paper, we provide a novel *signal processing perspective* on the design and analysis of DP optimizers. We show that a ''frequency domain'' operation called *low-pass filtering* can be used to effectively reduce the impact of DP noise. More specifically, by defining the ''frequency domain'' for both the gradient and differential privacy (DP) noise, we have developed a new component, called DOPPLER. This component is designed for DP algorithms and works by effectively amplifying the gradient while suppressing DP noise within this frequency domain. As a result, it maintains privacy guarantees and enhances the quality of the DP-protected model. Our experiments show that the proposed DP optimizers with a low-pass filter outperform their counterparts without the filter on various models and datasets. Both theoretical and practical evidence suggest that DOPPLER is effective in closing the gap between DP and non-DP training.
DOPPLER: Differentially Private Optimizers with Low-pass Filter for Privacy Noise Reduction
[ "Xinwei Zhang", "Zhiqi Bu", "Mingyi Hong", "Meisam Razaviyayn" ]
NeurIPS.cc/2024/Conference
2408.13460
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
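The DOPPLER abstract above describes suppressing DP noise with a low-pass filter over the iteration ("frequency") domain. Here is a hedged sketch using the simplest such filter, a first-order exponential moving average; the paper's actual filter design and optimizer integration may differ, and all names (`dp_lowpass_step`, `filt`, `beta`) are illustrative:

```python
import torch

def dp_lowpass_step(param, per_sample_grads, clip_norm, noise_std,
                    filt, beta=0.9, lr=0.1):
    """One DP-SGD step with a first-order low-pass filter on the privatized gradient."""
    clipped = [g * torch.clamp(clip_norm / (g.norm() + 1e-12), max=1.0)
               for g in per_sample_grads]                    # per-sample clipping
    g_priv = torch.stack(clipped).mean(0)
    g_priv = g_priv + noise_std * torch.randn_like(g_priv)   # DP noise injection
    # Across iterations the true gradient is a low-frequency signal while the
    # injected noise is white, so low-pass filtering suppresses noise more than signal.
    filt = beta * filt + (1.0 - beta) * g_priv
    param.data.add_(filt, alpha=-lr)
    return filt                                              # filter state carried across steps
```

Since the filter only post-processes already-privatized gradients, the DP guarantee of the underlying DPSGD step is unaffected.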
null
https://openreview.net/forum?id=r8M9SfYMDi
@inproceedings{ pouransari2024dataset, title={Dataset Decomposition: Faster {LLM} Training with Variable Sequence Length Curriculum}, author={Hadi Pouransari and Chun-Liang Li and Jen-Hao Rick Chang and Pavan Kumar Anasosalu Vasu and Cem Koc and Vaishaal Shankar and Oncel Tuzel}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=r8M9SfYMDi} }
Large language models (LLMs) are commonly trained on datasets consisting of fixed-length token sequences. These datasets are created by randomly concatenating documents of various lengths and then chunking them into sequences of a predetermined target length (concat-and-chunk). Recent attention implementations mask cross-document attention, reducing the effective length of a chunk of tokens. Additionally, training on long sequences becomes computationally prohibitive due to the quadratic cost of attention. In this study, we introduce dataset decomposition, a novel variable sequence length training technique, to tackle these challenges. We decompose a dataset into a union of buckets, each containing sequences of the same size extracted from a unique document. During training, we use variable sequence lengths and batch sizes, sampling simultaneously from all buckets with a curriculum. In contrast to the concat-and-chunk baseline, which incurs a fixed attention cost at every step of training, our proposed method incurs a computational cost proportional to the actual document lengths at each step, resulting in significant savings in training time. We train an 8k context-length 1B model at the same cost as a 2k context-length model trained with the baseline approach. Experiments on a web-scale corpus demonstrate that our approach significantly enhances performance on standard language evaluations and long-context benchmarks, reaching target accuracy with up to 6x faster training compared to the baseline. Our method not only enables efficient pretraining on long sequences but also scales effectively with dataset size. Lastly, we shed light on a critical yet less studied aspect of training large language models: the distribution and curriculum of sequence lengths, which results in a non-negligible difference in performance.
Dataset Decomposition: Faster LLM Training with Variable Sequence Length Curriculum
[ "Hadi Pouransari", "Chun-Liang Li", "Jen-Hao Rick Chang", "Pavan Kumar Anasosalu Vasu", "Cem Koc", "Vaishaal Shankar", "Oncel Tuzel" ]
NeurIPS.cc/2024/Conference
2405.13226
[ "" ]
https://huggingface.co/papers/2405.13226
1
0
0
7
[ "apple/DCLM-7B-8k" ]
[]
[]
[ "apple/DCLM-7B-8k" ]
[]
[]
1
poster
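The dataset-decomposition abstract above buckets same-length sequences per document and then samples variable sequence lengths at a fixed token budget. A toy sketch under assumed power-of-two bucket sizes and a uniform (non-curriculum) sampling policy; names such as `decompose` and `sample_batch` are hypothetical:

```python
import random
from collections import defaultdict

def decompose(documents, bucket_sizes=(256, 512, 1024, 2048, 4096, 8192)):
    """Split each tokenized document into same-length sequences, one bucket per size."""
    buckets = defaultdict(list)
    for doc in documents:  # doc: list of token ids from a single document
        i = 0
        # Greedily carve the largest bucket size that still fits (one possible policy).
        for size in sorted(bucket_sizes, reverse=True):
            while len(doc) - i >= size:
                buckets[size].append(doc[i:i + size])
                i += size
    return buckets

def sample_batch(buckets, tokens_per_batch=65536):
    """Pick a bucket, then size the batch so tokens per step stay fixed."""
    # A curriculum would bias this choice over training; uniform here for simplicity.
    size = random.choice(list(buckets.keys()))
    batch_size = tokens_per_batch // size
    return random.sample(buckets[size], min(batch_size, len(buckets[size])))
```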
null
https://openreview.net/forum?id=r70jUOpDCM
@inproceedings{ shi2024multiscale, title={Multi-Scale {VM}amba: Hierarchy in Hierarchy Visual State Space Model}, author={Yuheng Shi and Minjing Dong and Chang Xu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=r70jUOpDCM} }
Despite the significant achievements of Vision Transformers (ViTs) in various vision tasks, they are constrained by the quadratic complexity. Recently, State Space Models (SSMs) have garnered widespread attention due to their global receptive field and linear complexity with respect to the input length, demonstrating substantial potential across fields including natural language processing and computer vision. To improve the performance of SSMs in vision tasks, a multi-scan strategy is widely adopted, which leads to significant redundancy of SSMs. For a better trade-off between efficiency and performance, we analyze the underlying reasons behind the success of the multi-scan strategy, where long-range dependency plays an important role. Based on the analysis, we introduce Multi-Scale Vision Mamba (MSVMamba) to preserve the superiority of SSMs in vision tasks with limited parameters. It employs a multi-scale 2D scanning technique on both original and downsampled feature maps, which not only benefits long-range dependency learning but also reduces computational costs. Additionally, we integrate a Convolutional Feed-Forward Network (ConvFFN) to address the lack of channel mixing. Our experiments demonstrate that MSVMamba is highly competitive, with the MSVMamba-Tiny model achieving 83.0% top-1 accuracy on ImageNet, 46.9% box mAP, and 42.5% instance mAP with the Mask R-CNN framework, 1x training schedule on COCO, and 47.9% mIoU with single-scale testing on ADE20K. Code is available at https://github.com/YuHengsss/MSVMamba.
Multi-Scale VMamba: Hierarchy in Hierarchy Visual State Space Model
[ "Yuheng Shi", "Minjing Dong", "Chang Xu" ]
NeurIPS.cc/2024/Conference
2405.14174
[ "https://github.com/yuhengsss/msvmamba" ]
https://huggingface.co/papers/2405.14174
1
0
1
3
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=r6tnDXIkNS
@inproceedings{ zhang2024neural, title={Neural Signed Distance Function Inference through Splatting 3D Gaussians Pulled on Zero-Level Set}, author={Wenyuan Zhang and Yu-Shen Liu and Zhizhong Han}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=r6tnDXIkNS} }
It is vital to infer a signed distance function (SDF) for multi-view based surface reconstruction. 3D Gaussian splatting (3DGS) provides a novel perspective for volume rendering, and shows advantages in rendering efficiency and quality. Although 3DGS provides a promising neural rendering option, it is still hard to infer SDFs for surface reconstruction with 3DGS due to the discreteness, the sparseness, and the off-surface drift of 3D Gaussians. To resolve these issues, we propose a method that seamlessly merges 3DGS with the learning of neural SDFs. Our key idea is to more effectively constrain the SDF inference with the multi-view consistency. To this end, we dynamically align 3D Gaussians on the zero-level set of the neural SDF, and then render the aligned 3D Gaussians through the differentiable rasterization. Meanwhile, we update the neural SDF by pulling neighboring space to the pulled 3D Gaussians, which progressively refines the signed distance field near the surface. With both differentiable pulling and splatting, we jointly optimize 3D Gaussians and the neural SDF with both RGB and geometry constraints, which recovers more accurate, smooth, and complete surfaces with more geometry details. Our numerical and visual comparisons show our superiority over the state-of-the-art results on the widely used benchmarks.
Neural Signed Distance Function Inference through Splatting 3D Gaussians Pulled on Zero-Level Set
[ "Wenyuan Zhang", "Yu-Shen Liu", "Zhizhong Han" ]
NeurIPS.cc/2024/Conference
2410.14189
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=r6V7EjANUK
@inproceedings{ yu2024gsdf, title={{GSDF}: 3{DGS} Meets {SDF} for Improved Neural Rendering and Reconstruction}, author={Mulin Yu and Tao Lu and Linning Xu and Lihan Jiang and Yuanbo Xiangli and Bo Dai}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=r6V7EjANUK} }
Representing 3D scenes from multiview images remains a core challenge in computer vision and graphics, requiring both reliable rendering and reconstruction, goals which often conflict due to the mismatched prioritization of image quality over precise underlying scene geometry. Although both neural implicit surfaces and explicit Gaussian primitives have advanced with neural rendering techniques, current methods impose strict constraints on density fields or primitive shapes, which enhances the affinity for geometric reconstruction at the expense of rendering quality. To address this dilemma, we introduce GSDF, a dual-branch architecture combining 3D Gaussian Splatting (3DGS) and neural Signed Distance Fields (SDF). Our approach leverages mutual guidance and joint supervision during the training process to mutually enhance reconstruction and rendering. Specifically, our method guides the Gaussian primitives to locate near potential surfaces and accelerates the SDF convergence. This implicit mutual guidance ensures robustness and accuracy in both synthetic and real-world scenarios. Experimental results demonstrate that our method boosts the SDF optimization process to reconstruct more detailed geometry, while reducing floaters and blurry edge artifacts in rendering by aligning Gaussian primitives with the underlying geometry.
GSDF: 3DGS Meets SDF for Improved Neural Rendering and Reconstruction
[ "Mulin Yu", "Tao Lu", "Linning Xu", "Lihan Jiang", "Yuanbo Xiangli", "Bo Dai" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=r5spnrY6H3
@inproceedings{ wu2024rgsan, title={{RG}-{SAN}: Rule-Guided Spatial Awareness Network for End-to-End 3D Referring Expression Segmentation}, author={Changli Wu and Qi Chen and Jiayi Ji and Haowei Wang and Yiwei Ma and You Huang and Gen Luo and Hao Fei and Xiaoshuai Sun and Rongrong Ji}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=r5spnrY6H3} }
3D Referring Expression Segmentation (3D-RES) aims to segment 3D objects by correlating referring expressions with point clouds. However, traditional approaches frequently encounter issues like over-segmentation or mis-segmentation, due to insufficient emphasis on spatial information of instances. In this paper, we introduce a Rule-Guided Spatial Awareness Network (RG-SAN) by utilizing solely the spatial information of the target instance for supervision. This approach enables the network to accurately depict the spatial relationships among all entities described in the text, thus enhancing the reasoning capabilities. The RG-SAN consists of the Text-driven Localization Module (TLM) and the Rule-guided Weak Supervision (RWS) strategy. The TLM initially locates all mentioned instances and iteratively refines their positional information. The RWS strategy, acknowledging that only target objects have supervised positional information, employs dependency tree rules to precisely guide the core instance’s positioning. Extensive testing on the ScanRefer benchmark has shown that RG-SAN not only establishes new performance benchmarks, with an mIoU increase of 5.1 points, but also exhibits significant improvements in robustness when processing descriptions with spatial ambiguity. All codes are available at https://github.com/sosppxo/RG-SAN.
RG-SAN: Rule-Guided Spatial Awareness Network for End-to-End 3D Referring Expression Segmentation
[ "Changli Wu", "Qi Chen", "Jiayi Ji", "Haowei Wang", "Yiwei Ma", "You Huang", "Gen Luo", "Hao Fei", "Xiaoshuai Sun", "Rongrong Ji" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=r5nev2SHtJ
@inproceedings{ rajendran2024from, title={From Causal to Concept-Based Representation Learning}, author={Goutham Rajendran and Simon Buchholz and Bryon Aragam and Bernhard Sch{\"o}lkopf and Pradeep Kumar Ravikumar}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=r5nev2SHtJ} }
To build intelligent machine learning systems, modern representation learning attempts to recover latent generative factors from data, such as in causal representation learning. A key question in this growing field is to provide rigorous conditions under which latent factors can be identified and thus, potentially learned. Motivated by extensive empirical literature on linear representations and concept learning, we propose to relax causal notions with a geometric notion of concepts. We formally define a notion of concepts and show rigorously that they can be provably recovered from diverse data. Instead of imposing assumptions on the "true" generative latent space, we assume that concepts can be represented linearly in this latent space. The tradeoff is that instead of identifying the "true" generative factors, we identify a subset of desired human-interpretable concepts that are relevant for a given application. Experiments on synthetic data, multimodal CLIP models and large language models supplement our results and show the utility of our approach. In this way, we provide a foundation for moving from causal representations to interpretable, concept-based representations by bringing together ideas from these two neighboring disciplines.
From Causal to Concept-Based Representation Learning
[ "Goutham Rajendran", "Simon Buchholz", "Bryon Aragam", "Bernhard Schölkopf", "Pradeep Kumar Ravikumar" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=r3c0WGCXgt
@inproceedings{ zhang2024how, title={How Control Information Influences Multilingual Text Image Generation and Editing?}, author={Boqiang Zhang and Zuan Gao and Yadong Qu and Hongtao Xie}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=r3c0WGCXgt} }
Visual text generation has significantly advanced through diffusion models aimed at producing images with readable and realistic text. Recent works primarily use a ControlNet-based framework, employing standard font text images to control diffusion models. Recognizing the critical role of control information in generating high-quality text, we investigate its influence from three perspectives: input encoding, role at different stages, and output features. Our findings reveal that: 1) Input control information has unique characteristics compared to conventional inputs like Canny edges and depth maps. 2) Control information plays distinct roles at different stages of the denoising process. 3) Output control features significantly differ from the base and skip features of the U-Net decoder in the frequency domain. Based on these insights, we propose TextGen, a novel framework designed to enhance generation quality by optimizing control information. We improve input and output features using Fourier analysis to emphasize relevant information and reduce noise. Additionally, we employ a two-stage generation framework to align the different roles of control information at different stages. Furthermore, we introduce an effective and lightweight dataset for training. Our method achieves state-of-the-art performance in both Chinese and English text generation. The code and dataset are available at https://github.com/CyrilSterling/TextGen.
How Control Information Influences Multilingual Text Image Generation and Editing?
[ "Boqiang Zhang", "Zuan Gao", "Yadong Qu", "Hongtao Xie" ]
NeurIPS.cc/2024/Conference
2407.11502
[ "https://github.com/cyrilsterling/textgen" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=r0eSCJ6qsL
@inproceedings{ kag2024ascan, title={As{CAN}: Asymmetric Convolution-Attention Networks for Efficient Recognition and Generation}, author={Anil Kag and Huseyin Coskun and Jierun Chen and Junli Cao and Willi Menapace and Aliaksandr Siarohin and Sergey Tulyakov and Jian Ren}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=r0eSCJ6qsL} }
Neural network architecture design requires making many crucial decisions. The common desideratum is that similar decisions, with few modifications, can be reused in a variety of tasks and applications. To satisfy that, architectures must provide promising latency and performance trade-offs, support a variety of tasks, scale efficiently with respect to the amounts of data and compute, leverage available data from other tasks, and efficiently support various hardware. To this end, we introduce AsCAN---a hybrid architecture, combining both convolutional and transformer blocks. We revisit the key design principles of hybrid architectures and propose a simple and effective \emph{asymmetric} architecture, where the distribution of convolutional and transformer blocks is \emph{asymmetric}, containing more convolutional blocks in the earlier stages, followed by more transformer blocks in later stages. AsCAN supports a variety of tasks: recognition, segmentation, class-conditional image generation, and features a superior trade-off between performance and latency. We then scale the same architecture to solve a large-scale text-to-image task and show state-of-the-art performance compared to the most recent public and commercial models. Notably, without performing any optimization of inference time, our model shows faster execution, even when compared to works that do such optimization, highlighting the advantages and the value of our approach.
AsCAN: Asymmetric Convolution-Attention Networks for Efficient Recognition and Generation
[ "Anil Kag", "Huseyin Coskun", "Jierun Chen", "Junli Cao", "Willi Menapace", "Aliaksandr Siarohin", "Sergey Tulyakov", "Jian Ren" ]
NeurIPS.cc/2024/Conference
2411.04967
[ "" ]
https://huggingface.co/papers/2411.04967
0
1
0
8
[]
[]
[]
[]
[]
[]
1
poster
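The AsCAN abstract above hinges on an asymmetric placement of block types: convolution-heavy early stages and transformer-heavy late stages. The following is an illustrative stage plan only, not the paper's configuration; `conv_block` and `attn_block` stand in for whatever block constructors an implementation provides:

```python
import torch.nn as nn

def ascan_like_backbone(conv_block, attn_block, dims=(64, 128, 256, 512)):
    """Asymmetric hybrid layout: convolution-heavy early, transformer-heavy late."""
    # "C" = convolutional block, "T" = transformer block; the mix shifts per stage.
    plan = [("C", "C", "C"), ("C", "C", "T"), ("C", "T", "T"), ("T", "T", "T")]
    stages = [
        nn.Sequential(*[conv_block(d) if b == "C" else attn_block(d) for b in blocks])
        for d, blocks in zip(dims, plan)
    ]
    return nn.ModuleList(stages)
```

Early stages see large spatial resolutions where convolutions are cheap, while later stages see small resolutions where attention's quadratic cost is affordable, which is the trade-off this layout exploits.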
null
https://openreview.net/forum?id=qzwAG8qxI1
@inproceedings{ wang2024bridging, title={Bridging {OOD} Detection and Generalization: A Graph-Theoretic View}, author={Han Wang and Yixuan Li}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=qzwAG8qxI1} }
In the context of modern machine learning, models deployed in real-world scenarios often encounter diverse data shifts like covariate and semantic shifts, leading to challenges in both out-of-distribution (OOD) generalization and detection. Despite considerable attention to these issues separately, a unified framework for theoretical understanding and practical usage is lacking. To bridge the gap, we introduce a graph-theoretic framework to jointly tackle both OOD generalization and detection problems. By leveraging the graph formulation, data representations are obtained through the factorization of the graph's adjacency matrix, enabling us to derive provable error bounds quantifying OOD generalization and detection performance. Empirical results showcase competitive performance in comparison to existing methods, thereby validating our theoretical underpinnings.
Bridging OOD Detection and Generalization: A Graph-Theoretic View
[ "Han Wang", "Yixuan Li" ]
NeurIPS.cc/2024/Conference
2409.18205
[ "https://github.com/deeplearning-wisc/graph-spectral-ood" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
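The abstract above obtains data representations by factorizing the graph's adjacency matrix. Below is a generic spectral sketch of that device in NumPy; the paper builds a specific graph over the data and derives its guarantees there, so treat this as the factorization step only, with `spectral_features` a hypothetical name:

```python
import numpy as np

def spectral_features(A: np.ndarray, k: int = 16) -> np.ndarray:
    """Node representations from factorizing a symmetrically normalized adjacency matrix."""
    d = A.sum(axis=1)
    A_norm = A / (np.sqrt(np.outer(d, d)) + 1e-12)   # symmetric normalization
    vals, vecs = np.linalg.eigh(A_norm)              # eigendecomposition = exact factorization
    top = np.argsort(vals)[-k:]                      # keep the top-k spectrum
    return vecs[:, top] * np.sqrt(np.abs(vals[top])) # rows are k-dimensional features
```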
null
https://openreview.net/forum?id=qyaz3XP0FN
@inproceedings{ berman2024parametric, title={Parametric model reduction of mean-field and stochastic systems via higher-order action matching}, author={Jules Berman and Tobias Blickhan and Benjamin Peherstorfer}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=qyaz3XP0FN} }
The aim of this work is to learn models of population dynamics of physical systems that feature stochastic and mean-field effects and that depend on physics parameters. The learned models can act as surrogates of classical numerical models to efficiently predict the system behavior over the physics parameters. Building on the Benamou-Brenier formula from optimal transport and action matching, we use a variational problem to infer parameter- and time-dependent gradient fields that represent approximations of the population dynamics. The inferred gradient fields can then be used to rapidly generate sample trajectories that mimic the dynamics of the physical system on a population level over varying physics parameters. We show that combining Monte Carlo sampling with higher-order quadrature rules is critical for accurately estimating the training objective from sample data and for stabilizing the training process. We demonstrate on Vlasov-Poisson instabilities as well as on high-dimensional particle and chaotic systems that our approach accurately predicts population dynamics over a wide range of parameters and outperforms state-of-the-art diffusion-based and flow-based modeling that simply condition on time and physics parameters.
Parametric model reduction of mean-field and stochastic systems via higher-order action matching
[ "Jules Berman", "Tobias Blickhan", "Benjamin Peherstorfer" ]
NeurIPS.cc/2024/Conference
2410.12000
[ "https://github.com/julesberman/hoam" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=qxS4IvtLdD
@inproceedings{ pandey2024fast, title={Fast samplers for Inverse Problems in Iterative Refinement models}, author={Kushagra Pandey and Ruihan Yang and Stephan Mandt}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=qxS4IvtLdD} }
Constructing fast samplers for unconditional diffusion and flow-matching models has received much attention recently; however, existing methods for solving *inverse problems*, such as super-resolution, inpainting, or deblurring, still require hundreds to thousands of iterative steps to obtain high-quality results. We propose a plug-and-play framework for constructing efficient samplers for inverse problems, requiring only *pre-trained* diffusion or flow-matching models. We present *Conditional Conjugate Integrators*, which leverage the specific form of the inverse problem to project the respective conditional diffusion/flow dynamics into a more amenable space for sampling. Our method complements popular posterior approximation methods for solving inverse problems using diffusion/flow models. We evaluate the proposed method's performance on various linear image restoration tasks across multiple datasets, employing diffusion and flow-matching models. Notably, on challenging inverse problems like 4x super-resolution on the ImageNet dataset, our method can generate high-quality samples in as few as *5* conditional sampling steps and outperforms competing baselines requiring 20-1000 steps. Our code will be publicly available at https://github.com/mandt-lab/c-pigdm.
Fast samplers for Inverse Problems in Iterative Refinement models
[ "Kushagra Pandey", "Ruihan Yang", "Stephan Mandt" ]
NeurIPS.cc/2024/Conference
2405.17673
[ "https://github.com/mandt-lab/ci2rm" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=qwl3EiDi9r
@inproceedings{ uwamichi2024integrating, title={Integrating {GNN} and Neural {ODE}s for Estimating Non-Reciprocal Two-Body Interactions in Mixed-Species Collective Motion}, author={Masahito Uwamichi and Simon K. Schnyder and Tetsuya J. Kobayashi and Satoshi Sawai}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=qwl3EiDi9r} }
Analyzing the motion of multiple biological agents, be it cells or individual animals, is pivotal for the understanding of complex collective behaviors. With the advent of advanced microscopy, detailed images of complex tissue formations involving multiple cell types have become more accessible in recent years. However, deciphering the underlying rules that govern cell movements is far from trivial. Here, we present a novel deep learning framework for estimating the underlying equations of motion from observed trajectories, a pivotal step in decoding such complex dynamics. Our framework integrates graph neural networks with neural differential equations, enabling effective prediction of two-body interactions based on the states of the interacting entities. We demonstrate the efficacy of our approach through two numerical experiments. First, we used simulated data from a toy model to tune the hyperparameters. Based on the obtained hyperparameters, we then applied this approach to a more complex model with non-reciprocal forces that mimic the collective dynamics of the cells of slime molds. Our results show that the proposed method can accurately estimate the functional forms of two-body interactions -- even when they are nonreciprocal -- thereby precisely replicating both individual and collective behaviors within these systems.
Integrating GNN and Neural ODEs for Estimating Non-Reciprocal Two-Body Interactions in Mixed-Species Collective Motion
[ "Masahito Uwamichi", "Simon K. Schnyder", "Tetsuya J. Kobayashi", "Satoshi Sawai" ]
NeurIPS.cc/2024/Conference
2405.16503
[ "https://github.com/MasahitoUWAMICHI/collectiveMotionNN" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=qwgfh2fTtN
@inproceedings{ sun2024easytohard, title={Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision}, author={Zhiqing Sun and Longhui Yu and Yikang Shen and Weiyang Liu and Yiming Yang and Sean Welleck and Chuang Gan}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=qwgfh2fTtN} }
Current AI alignment methodologies rely on human-provided demonstrations or judgments, and the learned capabilities of AI systems would be upper-bounded by human capabilities as a result. This raises a challenging research question: How can we keep improving the systems when their capabilities have surpassed the levels of humans? This paper answers this question in the context of tackling hard reasoning tasks (e.g., level 4-5 MATH problems) via learning from human annotations on easier tasks (e.g., level 1-3 MATH problems), which we term easy-to-hard generalization. Our key insight is that an evaluator (reward model) trained on supervision for easier tasks can be effectively used for scoring candidate solutions of harder tasks and hence facilitating easy-to-hard generalization over different levels of tasks. Based on this insight, we propose a novel approach to scalable alignment, which firstly trains the (process-supervised) reward models on easy problems (e.g., level 1-3), and then uses them to evaluate the performance of policy models on hard problems. We show that such easy-to-hard generalization from evaluators can enable easy-to-hard generalization in generators either through re-ranking or reinforcement learning (RL). Notably, our process-supervised 7b RL model and 34b model (reranking@1024) achieve an accuracy of 34.0% and 52.5% on MATH500, respectively, despite only using human supervision on easy problems. Our approach suggests a promising path toward AI systems that advance beyond the frontier of human supervision.
Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision
[ "Zhiqing Sun", "Longhui Yu", "Yikang Shen", "Weiyang Liu", "Yiming Yang", "Sean Welleck", "Chuang Gan" ]
NeurIPS.cc/2024/Conference
2403.09472
[ "https://github.com/edward-sun/easy-to-hard" ]
https://huggingface.co/papers/2403.09472
2
0
0
7
[]
[]
[]
[]
[]
[]
1
poster
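The easy-to-hard abstract above uses a reward model trained on easy problems to score candidate solutions for hard ones. The re-ranking route reduces to best-of-n selection, sketched below; `generate` and `reward_model` are assumed callables, not APIs from the paper's repository:

```python
def best_of_n(problem: str, generate, reward_model, n: int = 1024) -> str:
    """Rerank n sampled solutions with an evaluator trained on easier problems.

    `generate(problem)` samples one candidate solution from the policy;
    `reward_model(problem, solution)` returns a scalar score.
    """
    candidates = [generate(problem) for _ in range(n)]
    return max(candidates, key=lambda sol: reward_model(problem, sol))
```

The abstract's reranking@1024 result corresponds to n = 1024 under this scheme; the RL route instead uses the same evaluator as the reward signal during training.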
null
https://openreview.net/forum?id=qvdc0oCX2n
@inproceedings{ wang2024cliploss, title={{CLIPL}oss and Norm-Based Data Selection Methods for Multimodal Contrastive Learning}, author={Yiping Wang and Yifang Chen and Wendan Yan and Alex Fang and Wenjing Zhou and Kevin Jamieson and Simon Shaolei Du}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=qvdc0oCX2n} }
Data selection has emerged as a core issue for large-scale visual-language model pretraining (e.g., CLIP), particularly with noisy web-curated datasets. The three main data selection approaches are: (1) leveraging external non-CLIP models to aid data selection, (2) training new CLIP-style embedding models that are more effective at selecting high-quality data than the original OpenAI CLIP model, and (3) designing better metrics or strategies universally applicable to any CLIP embedding without requiring specific model properties (e.g., CLIPScore is one popular metric). While the first two approaches have been extensively studied, the third remains under-explored. In this paper, we advance the third approach by proposing two new methods. Firstly, instead of classical CLIP scores that only consider the alignment between two modalities from a single sample, we introduce $\textbf{negCLIPLoss}$, a method inspired by the CLIP training loss that adds the alignment between one sample and its contrastive pairs as an extra normalization term to CLIPScore for better quality measurement. Secondly, when downstream tasks are known, we propose a new norm-based metric, $\textbf{NormSim}$, to measure the similarity between pretraining data and target data. We test our methods on the data selection benchmark, DataComp [Gadre et al., 2023]. Compared to the best baseline using only OpenAI's CLIP-L/14, our methods achieve a 5.3\% improvement on ImageNet-1k and a 2.8\% improvement on 38 downstream evaluation tasks. Moreover, both $\textbf{negCLIPLoss}$ and $\textbf{NormSim}$ are compatible with existing techniques. By combining our methods with the current best methods DFN [Fang et al., 2023] and HYPE [Kim et al., 2024], we can boost average performance on downstream tasks by 0.9\%, achieving a new state-of-the-art on the DataComp-medium benchmark.
CLIPLoss and Norm-Based Data Selection Methods for Multimodal Contrastive Learning
[ "Yiping Wang", "Yifang Chen", "Wendan Yan", "Alex Fang", "Wenjing Zhou", "Kevin Jamieson", "Simon Shaolei Du" ]
NeurIPS.cc/2024/Conference
2405.19547
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
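The abstract above adds the alignment between a sample and its contrastive pairs as a normalization term on top of CLIPScore. One plausible reading, mirroring the two directions of the CLIP training loss over a batch of candidate pairs, is sketched below; the paper defines negCLIPLoss precisely, so this conveys the spirit only:

```python
import torch

def neg_clip_scores(img_emb: torch.Tensor, txt_emb: torch.Tensor, tau: float = 0.01):
    """Quality scores: CLIP alignment minus a contrastive normalization term.

    img_emb, txt_emb: [N, d] unit-normalized CLIP embeddings of N candidate pairs.
    """
    sim = img_emb @ txt_emb.T / tau                  # pairwise similarity logits
    pos = sim.diag()                                 # classical CLIPScore (temperature-scaled)
    # Mirror the CLIP training loss: normalize by the log-partition over the
    # sample's contrastive pairs in both directions (image->texts and text->images).
    norm = 0.5 * (torch.logsumexp(sim, dim=1) + torch.logsumexp(sim, dim=0))
    return pos - norm                                # higher score = keep for pretraining
```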
null
https://openreview.net/forum?id=qu5NTwZtxA
@inproceedings{ jing2024towards, title={Towards Editing Time Series}, author={Baoyu Jing and Shuqi Gu and Tianyu Chen and Zhiyu Yang and Dongsheng Li and Jingrui He and Kan Ren}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=qu5NTwZtxA} }
Synthesizing time series data is pivotal in modern society, aiding effective decision making and ensuring privacy preservation in various scenarios. Time series are associated with various attributes, including trends, seasonality, and external information such as location. Recent research has predominantly focused on random unconditional synthesis or conditional synthesis. Nonetheless, these paradigms generate time series from scratch and are incapable of manipulating existing time series samples. This paper introduces a novel task, called Time Series Editing (TSE), to synthesize time series by manipulating existing time series. The objective is to modify the given time series according to the specified attributes while preserving other properties unchanged. This task is not trivial due to the inadequacy of data coverage and the intricate relationships between time series and their attributes. To address these issues, we introduce a novel diffusion model, called TEdit. The proposed TEdit is trained using a novel bootstrap learning algorithm that effectively enhances the coverage of the original data. It is also equipped with an innovative multi-resolution modeling and generation paradigm to capture the complex relationships between time series and their attributes. Experimental results demonstrate the efficacy of TEdit for editing specified attributes upon the existing time series data. The project page is at https://seqml.github.io/tse.
Towards Editing Time Series
[ "Baoyu Jing", "Shuqi Gu", "Tianyu Chen", "Zhiyu Yang", "Dongsheng Li", "Jingrui He", "Kan Ren" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=qrlguvKu7a
@inproceedings{ yan2024perflow, title={Pe{RF}low: Piecewise Rectified Flow as Universal Plug-and-Play Accelerator}, author={Hanshu Yan and Xingchao Liu and Jiachun Pan and Jun Hao Liew and qiang liu and Jiashi Feng}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=qrlguvKu7a} }
We present Piecewise Rectified Flow (PeRFlow), a flow-based method for accelerating diffusion models. PeRFlow divides the sampling process of generative flows into several time windows and straightens the trajectories in each interval via the reflow operation, thereby approaching piecewise linear flows. PeRFlow achieves superior performance in few-step generation. Moreover, through dedicated parameterizations, the PeRFlow models inherit knowledge from the pretrained diffusion models. Thus, the training converges fast and the obtained models show advantageous transfer ability, serving as universal plug-and-play accelerators that are compatible with various workflows based on the pre-trained diffusion models.
PeRFlow: Piecewise Rectified Flow as Universal Plug-and-Play Accelerator
[ "Hanshu Yan", "Xingchao Liu", "Jiachun Pan", "Jun Hao Liew", "qiang liu", "Jiashi Feng" ]
NeurIPS.cc/2024/Conference
2405.07510
[ "https://github.com/magic-research/piecewise-rectified-flow" ]
https://huggingface.co/papers/2405.07510
0
2
0
6
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=qrfp4eeZ47
@inproceedings{ joshi2024factorizephys, title={FactorizePhys: Matrix Factorization for Multidimensional Attention in Remote Physiological Sensing}, author={Jitesh Joshi and Sos Agaian and Youngjun Cho}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=qrfp4eeZ47} }
Remote photoplethysmography (rPPG) enables non-invasive extraction of blood volume pulse signals through imaging, transforming spatial-temporal data into time series signals. Advances in end-to-end rPPG approaches have focused on this transformation where attention mechanisms are crucial for feature extraction. However, existing methods compute attention disjointly across spatial, temporal, and channel dimensions. Here, we propose the Factorized Self-Attention Module (FSAM), which jointly computes multidimensional attention from voxel embeddings using nonnegative matrix factorization. To demonstrate FSAM's effectiveness, we developed FactorizePhys, an end-to-end 3D-CNN architecture for estimating blood volume pulse signals from raw video frames. Our approach adeptly factorizes voxel embeddings to achieve comprehensive spatial, temporal, and channel attention, enhancing performance of generic signal extraction tasks. Furthermore, we deploy FSAM within an existing 2D-CNN-based rPPG architecture to illustrate its versatility. FSAM and FactorizePhys are thoroughly evaluated against state-of-the-art rPPG methods, each representing different types of architecture and attention mechanism. We perform ablation studies to investigate the architectural decisions and hyperparameters of FSAM. Experiments on four publicly available datasets and intuitive visualization of learned spatial-temporal features substantiate the effectiveness of FSAM and enhanced cross-dataset generalization in estimating rPPG signals, suggesting its broader potential as a multidimensional attention mechanism. The code is accessible at https://github.com/PhysiologicAILab/FactorizePhys.
FactorizePhys: Matrix Factorization for Multidimensional Attention in Remote Physiological Sensing
[ "Jitesh Joshi", "Sos Agaian", "Youngjun Cho" ]
NeurIPS.cc/2024/Conference
2411.01542
[ "https://github.com/physiologicailab/factorizephys" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
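The FactorizePhys abstract above computes joint spatial-temporal-channel attention by factorizing voxel embeddings with nonnegative matrix factorization. A hedged sketch using the classic Lee-Seung multiplicative updates follows; FSAM's actual factorization, normalization, and placement inside the 3D-CNN may differ, and `fsam_attention` is an illustrative name:

```python
import torch

def fsam_attention(x: torch.Tensor, rank: int = 4, iters: int = 10, eps: float = 1e-8):
    """Joint spatial-temporal-channel attention from a nonnegative factorization.

    x: voxel embeddings of shape [batch, channels, time, height, width].
    """
    b, c, t, h, w = x.shape
    V = x.reshape(b, c, -1).softmax(dim=-1)              # nonnegative matrix per sample
    W = torch.rand(b, c, rank, device=x.device)
    H = torch.rand(b, rank, t * h * w, device=x.device)
    for _ in range(iters):                               # Lee-Seung multiplicative updates
        H = H * (W.transpose(1, 2) @ V) / (W.transpose(1, 2) @ W @ H + eps)
        W = W * (V @ H.transpose(1, 2)) / (W @ (H @ H.transpose(1, 2)) + eps)
    attn = (W @ H).reshape(b, c, t, h, w)                # low-rank joint saliency map
    return x * attn                                      # modulate all dimensions at once
```

Because the factorization is computed over the flattened spatial-temporal-channel volume, the resulting attention is inherently joint rather than computed disjointly per dimension, which is the point the abstract stresses.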
null
https://openreview.net/forum?id=qqQFOcUEqM
@inproceedings{ chen2024conjugated, title={Conjugated Semantic Pool Improves {OOD} Detection with Pre-trained Vision-Language Models}, author={Mengyuan Chen and Junyu Gao and Changsheng Xu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=qqQFOcUEqM} }
A straightforward pipeline for zero-shot out-of-distribution (OOD) detection involves selecting potential OOD labels from an extensive semantic pool and then leveraging a pre-trained vision-language model to perform classification on both in-distribution (ID) and OOD labels. In this paper, we theorize that enhancing performance requires expanding the semantic pool, while increasing the expected probability of selected OOD labels being activated by OOD samples, and ensuring low mutual dependence among the activations of these OOD labels. A natural expansion manner is to adopt a larger lexicon; however, the inevitable introduction of numerous synonyms and uncommon words fails to meet the above requirements, indicating that viable expansion manners move beyond merely selecting words from a lexicon. Since OOD detection aims to correctly classify input images into ID/OOD class groups, we can "make up" OOD label candidates which are not standard class names but beneficial for the process. Observing that the original semantic pool is comprised of unmodified specific class names, we correspondingly construct a conjugated semantic pool (CSP) consisting of modified superclass names, each serving as a cluster center for samples sharing similar properties across different categories. Consistent with our established theory, expanding OOD label candidates with the CSP satisfies the requirements and outperforms existing works by 7.89% in FPR95. Codes are available in https://github.com/MengyuanChen21/NeurIPS2024-CSP.
Conjugated Semantic Pool Improves OOD Detection with Pre-trained Vision-Language Models
[ "Mengyuan Chen", "Junyu Gao", "Changsheng Xu" ]
NeurIPS.cc/2024/Conference
2410.08611
[ "https://github.com/mengyuanchen21/neurips2024-csp" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
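The CSP abstract above extends the standard zero-shot OOD pipeline: score an image against ID labels plus an expanded pool of OOD label candidates. A minimal sketch of that scoring step, with `id_txt`/`ood_txt` assumed to be CLIP text embeddings of the two label groups; constructing the conjugated semantic pool itself is the paper's contribution and is not shown here:

```python
import torch

def id_score(image_emb: torch.Tensor, id_txt: torch.Tensor,
             ood_txt: torch.Tensor, tau: float = 0.01) -> torch.Tensor:
    """Zero-shot OOD detection over an expanded label pool.

    image_emb: [d]; id_txt: [K_id, d]; ood_txt: [K_ood, d], all unit-normalized.
    ood_txt would embed the expanded candidates (e.g., modified superclass names)
    with the usual CLIP text encoder.
    """
    logits = torch.cat([id_txt, ood_txt]) @ image_emb / tau
    probs = logits.softmax(dim=0)
    return probs[: id_txt.shape[0]].sum()   # low ID mass => flag the input as OOD
```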
null
https://openreview.net/forum?id=qpeAtfUWOQ
@inproceedings{ li2024variational, title={Variational Multi-scale Representation for Estimating Uncertainty in 3D Gaussian Splatting}, author={Ruiqi Li and Yiu-ming Cheung}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=qpeAtfUWOQ} }
Recently, 3D Gaussian Splatting (3DGS) has become popular in reconstructing dense 3D representations of appearance and geometry. However, the learning pipeline in 3DGS inherently lacks the ability to quantify uncertainty, which is an important factor in applications like robotics mapping and navigation. In this paper, we propose an uncertainty estimation method built upon the Bayesian inference framework. Specifically, we propose a method to build variational multi-scale 3D Gaussians, where we leverage explicit scale information in 3DGS parameters to construct diversified parameter space samples. We develop an offset table technique to draw local multi-scale samples efficiently by offsetting selected attributes and sharing other base attributes. Then, the offset table is learned by variational inference with multi-scale prior. The learned offset posterior can quantify the uncertainty of each individual Gaussian component, and be used in the forward pass to infer the predictive uncertainty. Extensive experimental results on various benchmark datasets show that the proposed method provides well-aligned calibration performance on estimated uncertainty and better rendering quality compared with the previous methods that enable uncertainty quantification with view synthesis. Besides, by leveraging the model parameter uncertainty estimated by our method, we can remove noisy Gaussians automatically, thereby obtaining a high-fidelity part of the reconstructed scene, which is of great help in improving the visual quality.
Variational Multi-scale Representation for Estimating Uncertainty in 3D Gaussian Splatting
[ "Ruiqi Li", "Yiu-ming Cheung" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=qp5VbGTaM0
@inproceedings{ chen2024on, title={On Softmax Direct Preference Optimization for Recommendation}, author={Yuxin Chen and Junfei Tan and An Zhang and Zhengyi Yang and Leheng Sheng and Enzhi Zhang and Xiang Wang and Tat-Seng Chua}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=qp5VbGTaM0} }
Recommender systems aim to predict personalized rankings based on user preference data. With the rise of Language Models (LMs), LM-based recommenders have been widely explored due to their extensive world knowledge and powerful reasoning abilities. Most of the LM-based recommenders convert historical interactions into language prompts, pairing with a positive item as the target response and fine-tuning LM with a language modeling loss. However, the current objective fails to fully leverage preference data and is not optimized for personalized ranking tasks, which hinders the performance of LM-based recommenders. Inspired by the current advancement of Direct Preference Optimization (DPO) in human preference alignment and the success of softmax loss in recommendations, we propose Softmax-DPO (\textbf{S-DPO}) to instill ranking information into the LM to help LM-based recommenders distinguish preferred items from negatives, rather than solely focusing on positives. Specifically, we incorporate multiple negatives in user preference data and devise an alternative version of DPO loss tailored for LM-based recommenders, which is extended from the traditional full-ranking Plackett-Luce (PL) model to partial rankings and connected to softmax sampling strategies. Theoretically, we bridge S-DPO with the softmax loss over negative sampling and find that it has an inherent benefit of mining hard negatives, which assures its exceptional capabilities in recommendation tasks. Empirically, extensive experiments conducted on three real-world datasets demonstrate the superiority of S-DPO to effectively model user preference and further boost recommendation performance while providing better rewards for preferred items. Our codes are available at https://github.com/chenyuxin1999/S-DPO.
On Softmax Direct Preference Optimization for Recommendation
[ "Yuxin Chen", "Junfei Tan", "An Zhang", "Zhengyi Yang", "Leheng Sheng", "Enzhi Zhang", "Xiang Wang", "Tat-Seng Chua" ]
NeurIPS.cc/2024/Conference
2406.09215
[ "https://github.com/chenyuxin1999/s-dpo" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
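The S-DPO abstract above extends DPO to one positive versus a pool of negatives via a softmax/partial Plackett-Luce form. Below is a sketch of one loss consistent with that description, assuming precomputed sequence log-probabilities; the paper's exact loss and weighting may differ:

```python
import torch
import torch.nn.functional as F

def s_dpo_loss(logp_pos, ref_logp_pos, logp_neg, ref_logp_neg, beta=1.0):
    """Softmax-DPO-style loss: one preferred item vs. multiple negatives.

    logp_* are summed token log-probs under the policy; ref_logp_* under the
    frozen reference model. logp_neg / ref_logp_neg have shape [num_negatives].
    """
    h_pos = beta * (logp_pos - ref_logp_pos)      # implicit reward of the positive
    h_neg = beta * (logp_neg - ref_logp_neg)      # implicit rewards of the negatives
    # The positive should beat the whole pool; logsumexp keeps the comparison
    # against all negatives numerically stable.
    return -F.logsigmoid(-torch.logsumexp(h_neg - h_pos, dim=0))
```

With a single negative this reduces exactly to the standard DPO objective, -log sigmoid(h_pos - h_neg), which is a useful sanity check on the form.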
null
https://openreview.net/forum?id=qo7NtGMr2u
@inproceedings{ shaw2024symmetry, title={Symmetry Discovery Beyond Affine Transformations}, author={Ben Shaw and Abram Magner and Kevin R. Moon}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=qo7NtGMr2u} }
Symmetry detection has been shown to improve various machine learning tasks. In the context of continuous symmetry detection, current state-of-the-art experiments are limited to the detection of affine transformations. Under the manifold assumption, we outline a framework for discovering continuous symmetry in data beyond the affine transformation group. We also provide a similar framework for discovering discrete symmetry. We experimentally compare our method to an existing method known as LieGAN and show that our method is competitive at detecting affine symmetries for large sample sizes and superior to LieGAN for small sample sizes. We also show that our method is able to detect continuous symmetries beyond the affine group and is generally more computationally efficient than LieGAN.
Symmetry Discovery Beyond Affine Transformations
[ "Ben Shaw", "Abram Magner", "Kevin R. Moon" ]
NeurIPS.cc/2024/Conference
2406.03619
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=qmoVQbwmCY
@inproceedings{ kori2024identifiable, title={Identifiable Object-Centric Representation Learning via Probabilistic Slot Attention}, author={Avinash Kori and Francesco Locatello and Ainkaran Santhirasekaram and Francesca Toni and Ben Glocker and Fabio De Sousa Ribeiro}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=qmoVQbwmCY} }
Learning modular object-centric representations is said to be crucial for systematic generalization. Existing methods show promising object-binding capabilities empirically, but theoretical identifiability guarantees remain relatively underdeveloped. Understanding when object-centric representations can theoretically be identified is important for scaling slot-based methods to high-dimensional images with correctness guarantees. To that end, we propose a probabilistic slot-attention algorithm that imposes an *aggregate* mixture prior over object-centric slot representations, thereby providing slot identifiability guarantees without supervision, up to an equivalence relation. We provide empirical verification of our theoretical identifiability result using both simple 2-dimensional data and high-resolution imaging datasets.
Identifiable Object-Centric Representation Learning via Probabilistic Slot Attention
[ "Avinash Kori", "Francesco Locatello", "Ainkaran Santhirasekaram", "Francesca Toni", "Ben Glocker", "Fabio De Sousa Ribeiro" ]
NeurIPS.cc/2024/Conference
2406.07141
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=qlH21Ig1IC
@inproceedings{ malitsky2024adaptive, title={Adaptive Proximal Gradient Method for Convex Optimization}, author={Yura Malitsky and Konstantin Mishchenko}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=qlH21Ig1IC} }
In this paper, we explore two fundamental first-order algorithms in convex optimization, namely, gradient descent (GD) and proximal gradient method (ProxGD). Our focus is on making these algorithms entirely adaptive by leveraging local curvature information of smooth functions. We propose adaptive versions of GD and ProxGD that are based on observed gradient differences and, thus, have no added computational costs. Moreover, we prove convergence of our methods assuming only local Lipschitzness of the gradient. In addition, the proposed versions allow for even larger stepsizes than those initially suggested in [MM20].
Adaptive Proximal Gradient Method for Convex Optimization
[ "Yura Malitsky", "Konstantin Mishchenko" ]
NeurIPS.cc/2024/Conference
2308.02261
[ "https://github.com/ymalitsky/adproxgd" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
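The abstract above derives stepsizes from observed gradient differences at no added computational cost. A NumPy sketch of the gradient-descent variant in the spirit of that stepsize rule follows; the proximal version would add a prox operator for the nonsmooth term, omitted here, and `adaptive_gd` is an illustrative name:

```python
import numpy as np

def adaptive_gd(grad, x0, lam0=1e-6, steps=1000):
    """Gradient descent with stepsizes estimated from gradient differences."""
    x_prev, g_prev = x0, grad(x0)
    lam_prev, theta = lam0, np.inf
    x = x_prev - lam_prev * g_prev
    for _ in range(steps):
        g = grad(x)
        # Inverse local curvature estimate ||x_k - x_{k-1}|| / (2 ||g_k - g_{k-1}||),
        # capped by a slowly growing multiple of the previous stepsize.
        lam = min(np.sqrt(1.0 + theta) * lam_prev,
                  np.linalg.norm(x - x_prev) /
                  (2.0 * np.linalg.norm(g - g_prev) + 1e-18))
        x_prev, g_prev = x, g
        x = x - lam * g
        theta, lam_prev = lam / lam_prev, lam
    return x
```

For instance, `adaptive_gd(lambda x: 2 * x, np.array([3.0]))` drives the iterate toward the minimizer of x^2 without any tuned stepsize.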
null
https://openreview.net/forum?id=qkoZgJhxsA
@inproceedings{ liu2024socraticlm, title={Socratic{LM}: Exploring Socratic Personalized Teaching with Large Language Models}, author={Jiayu Liu and Zhenya Huang and Tong Xiao and Jing Sha and Jinze Wu and Qi Liu and Shijin Wang and Enhong Chen}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=qkoZgJhxsA} }
Large language models (LLMs) are considered a crucial technology for advancing intelligent education since they exhibit the potential for an in-depth understanding of teaching scenarios and providing students with personalized guidance. Nonetheless, current LLM-based applications in personalized teaching predominantly follow a "Question-Answering" paradigm, where students are passively provided with answers and explanations. In this paper, we propose SocraticLM, which achieves a Socratic "Thought-Provoking" teaching paradigm that fulfills the role of a real classroom teacher in actively engaging students in the thought process required for genuine problem-solving mastery. To build SocraticLM, we first propose a novel "Dean-Teacher-Student" multi-agent pipeline to construct a new dataset, SocraTeach, which contains $35$K meticulously crafted Socratic-style multi-round (equivalent to $208$K single-round) teaching dialogues grounded in fundamental mathematical problems. Our dataset simulates authentic teaching scenarios, interacting with six representative types of simulated students with different cognitive states, and strengthening four crucial teaching abilities. SocraticLM is then fine-tuned on SocraTeach with three strategies balancing its teaching and reasoning abilities. Moreover, we contribute a comprehensive evaluation system encompassing five pedagogical dimensions for assessing the teaching quality of LLMs. Extensive experiments verify that SocraticLM achieves significant improvements in the teaching performance, outperforming GPT4 by more than 12\%. Our dataset and code are available at https://github.com/Ljyustc/SocraticLM.
SocraticLM: Exploring Socratic Personalized Teaching with Large Language Models
[ "Jiayu Liu", "Zhenya Huang", "Tong Xiao", "Jing Sha", "Jinze Wu", "Qi Liu", "Shijin Wang", "Enhong Chen" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=qfCQ54ZTX1
@inproceedings{ chen2024entity, title={Entity Alignment with Noisy Annotations from Large Language Models}, author={Shengyuan Chen and Qinggang Zhang and Junnan Dong and Wen Hua and Qing Li and Xiao Huang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=qfCQ54ZTX1} }
Entity alignment (EA) aims to merge two knowledge graphs (KGs) by identifying equivalent entity pairs. While existing methods heavily rely on human-generated labels, it is prohibitively expensive to incorporate cross-domain experts for annotation in real-world scenarios. The advent of Large Language Models (LLMs) presents new avenues for automating EA with annotations, inspired by their comprehensive capability to process semantic information. However, it is nontrivial to directly apply LLMs for EA since the annotation space in real-world KGs is large. LLMs could also generate noisy labels that may mislead the alignment. To this end, we propose a unified framework, LLM4EA, to effectively leverage LLMs for EA. Specifically, we design a novel active learning policy to significantly reduce the annotation space by prioritizing the most valuable entities based on the entire inter-KG and intra-KG structure. Moreover, we introduce an unsupervised label refiner to continuously enhance label accuracy through in-depth probabilistic reasoning. We iteratively optimize the policy based on the feedback from a base EA model. Extensive experiments demonstrate the advantages of LLM4EA on four benchmark datasets in terms of effectiveness, robustness, and efficiency.
Entity Alignment with Noisy Annotations from Large Language Models
[ "Shengyuan Chen", "Qinggang Zhang", "Junnan Dong", "Wen Hua", "Qing Li", "Xiao Huang" ]
NeurIPS.cc/2024/Conference
2405.16806
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=qf2uZAdy1N
@inproceedings{ amortila2024reinforcement, title={Reinforcement Learning Under Latent Dynamics: Toward Statistical and Algorithmic Modularity}, author={Philip Amortila and Dylan J Foster and Nan Jiang and Akshay Krishnamurthy and Zakaria Mhammedi}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=qf2uZAdy1N} }
Real-world applications of reinforcement learning often involve environments where agents operate on complex, high-dimensional observations, but the underlying (``latent'') dynamics are comparatively simple. However, beyond restrictive settings such as tabular latent dynamics, the fundamental statistical requirements and algorithmic principles for *reinforcement learning under latent dynamics* are poorly understood. This paper addresses the question of reinforcement learning under *general latent dynamics* from a statistical and algorithmic perspective. On the statistical side, our main negative result shows that *most* well-studied settings for reinforcement learning with function approximation become intractable when composed with rich observations; we complement this with a positive result, identifying *latent pushforward coverability* as a general condition that enables statistical tractability. Algorithmically, we develop provably efficient *observable-to-latent* reductions ---that is, reductions that transform an arbitrary algorithm for the latent MDP into an algorithm that can operate on rich observations--- in two settings: one where the agent has access to hindsight observations of the latent dynamics (Lee et al., 2023) and one where the agent can estimate *self-predictive* latent models (Schwarzer et al., 2020). Together, our results serve as a first step toward a unified statistical and algorithmic theory for reinforcement learning under latent dynamics.
Reinforcement Learning Under Latent Dynamics: Toward Statistical and Algorithmic Modularity
[ "Philip Amortila", "Dylan J Foster", "Nan Jiang", "Akshay Krishnamurthy", "Zakaria Mhammedi" ]
NeurIPS.cc/2024/Conference
2410.17904
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=qf1ncViBr5
@inproceedings{ ericsson2024einspace, title={einspace: Searching for Neural Architectures from Fundamental Operations}, author={Linus Ericsson and Miguel Espinosa and Chenhongyi Yang and Antreas Antoniou and Amos Storkey and Shay B Cohen and Steven McDonagh and Elliot J. Crowley}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=qf1ncViBr5} }
Neural architecture search (NAS) finds high-performing networks for a given task. Yet the results of NAS are fairly prosaic; they have not, for example, produced a shift from convolutional structures to transformers. This is not least because the search spaces in NAS often aren't diverse enough to include such transformations *a priori*. Instead, for NAS to provide greater potential for fundamental design shifts, we need a novel, expressive search space design that is built from more fundamental operations. To this end, we introduce `einspace`, a search space based on a parameterised probabilistic context-free grammar. Our space is versatile, supporting architectures of various sizes and complexities, while also containing diverse network operations that allow it to model convolutions, attention components and more. It contains many existing competitive architectures, and provides flexibility for discovering new ones. Using this search space, we perform experiments to find novel architectures as well as improvements on existing ones on the diverse Unseen NAS datasets. We show that competitive architectures can be obtained by searching from scratch, and we consistently find large improvements when initialising the search with strong baselines. We believe that this work is an important advancement towards a transformative NAS paradigm where search space expressivity and strategic search initialisation play key roles.
einspace: Searching for Neural Architectures from Fundamental Operations
[ "Linus Ericsson", "Miguel Espinosa", "Chenhongyi Yang", "Antreas Antoniou", "Amos Storkey", "Shay B Cohen", "Steven McDonagh", "Elliot J. Crowley" ]
NeurIPS.cc/2024/Conference
2405.20838
[ "https://github.com/linusericsson/einspace" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=qevq3FZ63J
@inproceedings{ tao2024magis, title={{MAGIS}: {LLM}-Based Multi-Agent Framework for GitHub Issue Resolution}, author={Wei Tao and Yucheng Zhou and Yanlin Wang and Wenqiang Zhang and Hongyu Zhang and Yu Cheng}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=qevq3FZ63J} }
In software development, resolving the emergent issues within GitHub repositories is a complex challenge that involves not only the incorporation of new code but also the maintenance of existing code. Large Language Models (LLMs) have shown promise in code generation but face difficulties in resolving GitHub issues, particularly at the repository level. To overcome this challenge, we empirically study the reasons why LLMs fail to resolve GitHub issues and analyze the major factors. Motivated by the empirical findings, we propose a novel LLM-based **M**ulti-**A**gent framework for **G**itHub **I**ssue re**S**olution, **MAGIS**, consisting of four agents customized for software evolution: Manager, Repository Custodian, Developer, and Quality Assurance Engineer agents. This framework leverages the collaboration of various agents in the planning and coding process to unlock the potential of LLMs to resolve GitHub issues. In experiments, we employ the SWE-bench benchmark to compare MAGIS with popular LLMs, including GPT-3.5, GPT-4, and Claude-2. MAGIS can resolve **13.94%** of GitHub issues, significantly outperforming the baselines. Specifically, MAGIS achieves an eight-fold increase in the resolved ratio over the direct application of GPT-4, the advanced LLM.
MAGIS: LLM-Based Multi-Agent Framework for GitHub Issue Resolution
[ "Wei Tao", "Yucheng Zhou", "Yanlin Wang", "Wenqiang Zhang", "Hongyu Zhang", "Yu Cheng" ]
NeurIPS.cc/2024/Conference
2403.17927
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=qdV1vp1AtL
@inproceedings{ kobus2024universal, title={Universal Sample Coding}, author={Szymon Kobus and Tze-Yang Tung and Deniz Gunduz}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=qdV1vp1AtL} }
In this work, we study the problem of communicating multiple samples from an unknown probability distribution using as few bits as possible. This is a generalization of the channel simulation problem, which has recently found applications and achieved state-of-the-art results in realistic image compression, neural network compression, and communication-efficient federated learning. In this problem, the transmitter wants the receiver to generate multiple independent and identically distributed (i.i.d.) samples from a target distribution $P$, while the transmitter and the receiver have access to independent samples from a reference distribution $Q$. The core idea is to employ channel simulation in multiple rounds while updating the reference distribution $Q$ after each round in order to reduce the KL-divergence between $P$ and $Q$, thereby reducing the communication cost in subsequent rounds. We derive a lower bound on the expected communication cost and construct a practical algorithm that achieves the lower bound up to a multiplicative constant. We then employ this algorithm in communication-efficient federated learning, in which model updates correspond to samples from a distribution, and achieve a 37% reduction in the communication load. To further highlight the potential of sample communication for generative models, we show that the number of bits needed to communicate samples from a large language model can be reduced by up to 16 times, compared to entropy-based data compression.
Universal Sample Coding
[ "Szymon Kobus", "Tze-Yang Tung", "Deniz Gunduz" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
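An illustrative aside on the Universal Sample Coding entry above: the sketch below is a toy cost-accounting model, not the paper's algorithm. It assumes a small discrete alphabet, charges an idealized KL(P||Q) bits per channel-simulation round, and updates the shared reference Q from previously communicated samples; the alphabet size, distributions, and round count are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
K = 8                                  # alphabet size (illustrative)
P = rng.dirichlet(np.ones(K))          # unknown target distribution
Q = np.full(K, 1.0 / K)                # shared reference, known to both sides

def kl(p, q):
    """KL divergence in bits, the idealized per-round channel-simulation cost."""
    return float(np.sum(p * np.log2(p / q)))

counts = np.ones(K)                    # Laplace-smoothed counts of sent samples
total_bits = 0.0
for _ in range(20):
    x = rng.choice(K, p=P)             # sample the transmitter wants to convey
    total_bits += kl(P, Q)             # idealized cost of this round
    counts[x] += 1
    Q = counts / counts.sum()          # both sides update Q from past samples

print(f"avg bits/sample with adaptation : {total_bits / 20:.3f}")
print(f"bits/sample with fixed uniform Q: {kl(P, np.full(K, 1.0 / K)):.3f}")
```

The adaptive reference drives Q toward P, so later rounds become cheaper, which is the mechanism the abstract describes.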
null
https://openreview.net/forum?id=qd8blc0o0F
@inproceedings{ eliasof2024granola, title={{GRANOLA}: Adaptive Normalization for Graph Neural Networks}, author={Moshe Eliasof and Beatrice Bevilacqua and Carola-Bibiane Sch{\"o}nlieb and Haggai Maron}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=qd8blc0o0F} }
Despite the widespread adoption of Graph Neural Networks (GNNs), these models often incorporate off-the-shelf normalization layers like BatchNorm or InstanceNorm, which were not originally designed for GNNs. Consequently, these normalization layers may not effectively capture the unique characteristics of graph-structured data, potentially even weakening the expressive power of the overall architecture. While existing graph-specific normalization layers have been proposed, they often struggle to offer substantial and consistent benefits. In this paper, we propose GRANOLA, a novel graph-adaptive normalization layer. Unlike existing normalization layers, GRANOLA normalizes node features by adapting to the specific characteristics of the graph, particularly by generating expressive representations of its nodes, obtained by leveraging the propagation of Random Node Features (RNF) in the graph. We provide theoretical results that support our design choices as well as an extensive empirical evaluation demonstrating the superior performance of GRANOLA over existing normalization techniques. Furthermore, GRANOLA emerges as the top-performing method among all baselines in the same time complexity class of Message Passing Neural Networks (MPNNs).
GRANOLA: Adaptive Normalization for Graph Neural Networks
[ "Moshe Eliasof", "Beatrice Bevilacqua", "Carola-Bibiane Schönlieb", "Haggai Maron" ]
NeurIPS.cc/2024/Conference
2404.13344
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
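A cartoon of the graph-adaptive normalization idea described in the GRANOLA entry above, making no claim about the authors' actual architecture: random node features (RNF) are propagated over the graph to obtain expressive per-node statistics, which then modulate a node-wise affine transform of the normalized features. The graph, propagation rule, and affine parameterization are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
n, d = 6, 4
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 1.0)                           # symmetric adjacency + self-loops
A_hat = A / A.sum(1, keepdims=True)                # row-normalized propagation
H = rng.normal(size=(n, d))                        # node features to normalize

def granola_like_norm(H, A_hat, steps=2):
    """Graph-adaptive normalization sketch: propagate RNF to get per-node
    statistics, then use them as node-wise scale and shift parameters."""
    Z = rng.normal(size=H.shape)                   # random node features
    for _ in range(steps):
        Z = np.tanh(A_hat @ Z)                     # simple message passing on RNF
    mu = H.mean(axis=1, keepdims=True)
    sd = H.std(axis=1, keepdims=True) + 1e-5
    gamma = 1.0 + Z.mean(axis=1, keepdims=True)    # node-adaptive scale
    beta = Z.std(axis=1, keepdims=True)            # node-adaptive shift
    return gamma * (H - mu) / sd + beta

print(granola_like_norm(H, A_hat).shape)           # (6, 4)
```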
null
https://openreview.net/forum?id=qcPlGtzwW9
@inproceedings{ cai2024tighter, title={Tighter Convergence Bounds for Shuffled {SGD} via Primal-Dual Perspective}, author={Xufeng Cai and Cheuk Yin Lin and Jelena Diakonikolas}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=qcPlGtzwW9} }
Stochastic gradient descent (SGD) is perhaps the most prevalent optimization method in modern machine learning. Contrary to the empirical practice of sampling from the datasets \emph{without replacement} and with (possible) reshuffling at each epoch, the theoretical counterpart of SGD usually relies on the assumption of \emph{sampling with replacement}. It is only very recently that SGD using sampling without replacement -- shuffled SGD -- has been analyzed with matching upper and lower bounds. However, we observe that those bounds are too pessimistic to explain the often superior empirical performance of data permutations (sampling without replacement) over vanilla counterparts (sampling with replacement) on machine learning problems. Via a fine-grained analysis through the lens of primal-dual cyclic coordinate methods and the introduction of novel smoothness parameters, we present several results for shuffled SGD on smooth and non-smooth convex losses, where our novel analysis framework provides tighter convergence bounds over all popular shuffling schemes (IG, SO, and RR). Notably, our new bounds predict faster convergence than existing bounds in the literature -- by up to a factor of $O(\sqrt{n})$, mirroring benefits from tighter convergence bounds using component smoothness parameters in randomized coordinate methods. Lastly, we numerically demonstrate on common machine learning datasets that our bounds are indeed much tighter, thus offering a bridge between theory and practice.
Tighter Convergence Bounds for Shuffled SGD via Primal-Dual Perspective
[ "Xufeng Cai", "Cheuk Yin Lin", "Jelena Diakonikolas" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
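To make the with/without-replacement distinction in the entry above concrete, here is a minimal least-squares experiment; the problem size, step size, and epoch count are illustrative assumptions, and the snippet demonstrates the two sampling schemes rather than the paper's bounds.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.normal(size=(n, d))
x_star = rng.normal(size=d)
b = A @ x_star + 0.01 * rng.normal(size=n)

def sgd(shuffle, epochs=50, lr=0.01):
    """Least-squares SGD; shuffle=True is random reshuffling (RR),
    shuffle=False samples indices i.i.d. with replacement."""
    x = np.zeros(d)
    for _ in range(epochs):
        idx = rng.permutation(n) if shuffle else rng.integers(0, n, size=n)
        for i in idx:
            grad = (A[i] @ x - b[i]) * A[i]   # gradient of 0.5 * (a_i^T x - b_i)^2
            x -= lr * grad
    return np.linalg.norm(x - x_star)

print("error, with replacement :", sgd(shuffle=False))
print("error, random reshuffle :", sgd(shuffle=True))
```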
null
https://openreview.net/forum?id=qbvt3ocQxB
@inproceedings{ tang2024ioda, title={{IODA}: Instance-Guided One-shot Domain Adaptation for Super-Resolution}, author={Zaizuo Tang and Yu-Bin Yang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=qbvt3ocQxB} }
Domain adaptation methods effectively mitigate the negative impact of domain gaps on the performance of super-resolution (SR) networks through the guidance of numerous target-domain low-resolution (LR) images. However, in real-world scenarios, the availability of target-domain LR images is often limited, sometimes even to just one, which inevitably impairs the domain adaptation performance of SR networks. We propose Instance-guided One-shot Domain Adaptation for Super-Resolution (IODA) to enable efficient domain adaptation with only a single unlabeled target-domain LR image. To address the limited diversity of the target-domain distribution caused by a single target-domain LR image, we propose an instance-guided target-domain distribution expansion strategy. This strategy effectively expands the diversity of the target-domain distribution by generating instance-specific features focused on different instances within the image. For SR tasks emphasizing texture details, we propose an image-guided domain adaptation method. Compared to existing methods that use text representations to capture domain differences, this method utilizes pixel-level representations with higher granularity, enabling efficient domain adaptation guidance for SR networks. Finally, we validate the effectiveness of IODA on multiple datasets and various network architectures, achieving satisfactory one-shot domain adaptation for SR networks. Our code is available at https://github.com/ZaizuoTang/IODA.
IODA: Instance-Guided One-shot Domain Adaptation for Super-Resolution
[ "Zaizuo Tang", "Yu-Bin Yang" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=qamfjyhPeg
@inproceedings{ bar2024protected, title={Protected Test-Time Adaptation via Online Entropy Matching: A Betting Approach}, author={Yarin Bar and Shalev Shaer and Yaniv Romano}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=qamfjyhPeg} }
We present a novel approach for test-time adaptation via online self-training, consisting of two components. First, we introduce a statistical framework that detects distribution shifts in the classifier's entropy values obtained on a stream of unlabeled samples. Second, we devise an online adaptation mechanism that utilizes the evidence of distribution shifts captured by the detection tool to dynamically update the classifier's parameters. The resulting adaptation process drives the distribution of test entropy values obtained from the self-trained classifier to match those of the source domain, building invariance to distribution shifts. This approach departs from the conventional self-training method, which focuses on minimizing the classifier's entropy. Our approach combines concepts in betting martingales and online learning to form a detection tool capable of quickly reacting to distribution shifts. We then reveal a tight relation between our adaptation scheme and optimal transport, which forms the basis of our novel self-supervised loss. Experimental results demonstrate that our approach improves test-time accuracy under distribution shifts while maintaining accuracy and calibration in their absence, outperforming leading entropy minimization methods across various scenarios.
Protected Test-Time Adaptation via Online Entropy Matching: A Betting Approach
[ "Yarin Bar", "Shalev Shaer", "Yaniv Romano" ]
NeurIPS.cc/2024/Conference
2408.07511
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
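One simple betting-martingale construction in the spirit of the entry above (not the authors' exact detector): source-domain entropies are mapped through their empirical CDF, so under no shift the resulting ranks are near-uniform and the bet is fair in expectation; under shift the wealth grows, and Ville's inequality bounds the false-alarm rate by the reciprocal of the rejection threshold. All distributions and the bet size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Source-domain calibration: entropies observed on in-distribution data.
source_entropy = rng.normal(1.0, 0.2, size=1000)

def to_uniform(h):
    """Empirical CDF rank: ~Uniform(0,1) if h follows the source distribution."""
    return (np.sum(source_entropy < h) + 0.5) / (len(source_entropy) + 1)

wealth, lam = 1.0, 0.5                  # test-martingale wealth and bet size
stream = np.concatenate([rng.normal(1.0, 0.2, 200),    # no shift
                         rng.normal(1.6, 0.2, 200)])   # entropy rises under shift
for t, h in enumerate(stream):
    u = to_uniform(h)
    wealth *= 1.0 + lam * (u - 0.5)     # fair bet under the null (E[u] = 0.5)
    if wealth > 100:                    # false-alarm prob <= 1/100 (Ville)
        print(f"shift detected at t={t}, wealth={wealth:.1f}")
        break
```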
null
https://openreview.net/forum?id=qaRT6QTIqJ
@inproceedings{ edelman2024the, title={The Evolution of Statistical Induction Heads: In-Context Learning Markov Chains}, author={Ezra Edelman and Nikolaos Tsilivis and Benjamin L. Edelman and eran malach and Surbhi Goel}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=qaRT6QTIqJ} }
Large language models have the ability to generate text that mimics patterns in their inputs. We introduce a simple Markov Chain sequence modeling task in order to study how this in-context learning capability emerges. In our setting, each example is sampled from a Markov chain drawn from a prior distribution over Markov chains. Transformers trained on this task form \emph{statistical induction heads} which compute accurate next-token probabilities given the bigram statistics of the context. During the course of training, models pass through multiple phases: after an initial stage in which predictions are uniform, they learn to sub-optimally predict using in-context single-token statistics (unigrams); then, there is a rapid phase transition to the correct in-context bigram solution. We conduct an empirical and theoretical investigation of this multi-phase process, showing how successful learning results from the interaction between the transformer's layers, and uncovering evidence that the presence of the simpler unigram solution may delay formation of the final bigram solution. We examine how learning is affected by varying the prior distribution over Markov chains, and consider the generalization of our in-context learning of Markov chains (ICL-MC) task to $n$-grams for $n > 2$.
The Evolution of Statistical Induction Heads: In-Context Learning Markov Chains
[ "Ezra Edelman", "Nikolaos Tsilivis", "Benjamin L. Edelman", "eran malach", "Surbhi Goel" ]
NeurIPS.cc/2024/Conference
2402.11004
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
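The unigram-versus-bigram distinction in the entry above can be made concrete without any transformer. Below is a minimal numpy sketch of the two in-context estimators on a random Markov chain; the vocabulary size and sequence length are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
k = 3                                          # vocabulary size
T = rng.dirichlet(np.ones(k), size=k)          # random Markov transition matrix
seq = [0]
for _ in range(200):
    seq.append(rng.choice(k, p=T[seq[-1]]))

def unigram(seq, k):
    """In-context unigram statistics (the sub-optimal intermediate solution)."""
    c = np.bincount(seq, minlength=k) + 1.0
    return c / c.sum()

def bigram(seq, k):
    """In-context bigram statistics given the last token (the induction-head solution)."""
    c = np.ones((k, k))
    for a, b in zip(seq[:-1], seq[1:]):
        c[a, b] += 1.0
    return c[seq[-1]] / c[seq[-1]].sum()

print("true next-token law:", np.round(T[seq[-1]], 3))
print("unigram estimate   :", np.round(unigram(seq, k), 3))
print("bigram estimate    :", np.round(bigram(seq, k), 3))
```

The bigram estimate converges to the correct row of T while the unigram estimate does not, mirroring the phase transition the abstract describes.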
null
https://openreview.net/forum?id=qaC4sSztlF
@inproceedings{ dong2024towards, title={Towards Safe Concept Transfer of Multi-Modal Diffusion via Causal Representation Editing}, author={Peiran Dong and Bingjie WANG and Song Guo and Junxiao Wang and Jie ZHANG and Zicong Hong}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=qaC4sSztlF} }
Recent advancements in vision-language-to-image (VL2I) diffusion generation have made significant progress. While generating images from broad vision-language inputs holds promise, it also raises concerns about potential misuse, such as copying artistic styles without permission, which could have legal and social consequences. Therefore, it's crucial to establish governance frameworks to ensure ethical and copyright integrity, especially with widely used diffusion models. To address these issues, researchers have explored various approaches, such as dataset filtering, adversarial perturbations, machine unlearning, and inference-time refusals. However, these methods often lack either scalability or effectiveness. In response, we propose a new framework called causal representation editing (CRE), which extends representation editing from large language models (LLMs) to diffusion-based models. CRE enhances the efficiency and flexibility of safe content generation by intervening at diffusion timesteps causally linked to unsafe concepts. This allows for precise removal of harmful content while preserving acceptable content quality, demonstrating superior effectiveness, precision and scalability compared to existing methods. CRE can handle complex scenarios, including incomplete or blurred representations of unsafe concepts, offering a promising solution to challenges in managing harmful content generation in diffusion-based models.
Towards Safe Concept Transfer of Multi-Modal Diffusion via Causal Representation Editing
[ "Peiran Dong", "Bingjie WANG", "Song Guo", "Junxiao Wang", "Jie ZHANG", "Zicong Hong" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=qZSwlcLMCS
@inproceedings{ gu2024kaleido, title={Kaleido Diffusion: Improving Conditional Diffusion Models with Autoregressive Latent Modeling}, author={Jiatao Gu and Ying Shen and Shuangfei Zhai and Yizhe Zhang and Navdeep Jaitly and Joshua M. Susskind}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=qZSwlcLMCS} }
Diffusion models have emerged as a powerful tool for generating high-quality images from textual descriptions. Despite their successes, these models often exhibit limited diversity in the sampled images, particularly when sampling with a high classifier-free guidance weight. To address this issue, we present Kaleido, a novel approach that enhances the diversity of samples by incorporating autoregressive latent priors. Kaleido integrates an autoregressive language model that encodes the original caption and generates latent variables, serving as abstract and intermediary representations for guiding and facilitating the image generation process. In this paper, we explore a variety of discrete latent representations, including textual descriptions, detection bounding boxes, object blobs, and visual tokens. These representations diversify and enrich the input conditions to the diffusion models, enabling more diverse outputs. Our experimental results demonstrate that Kaleido effectively broadens the diversity of the generated image samples from a given textual description while maintaining high image quality. Furthermore, we show that Kaleido adheres closely to the guidance provided by the generated latent variables, demonstrating its capability to effectively control and direct the image generation process.
Kaleido Diffusion: Improving Conditional Diffusion Models with Autoregressive Latent Modeling
[ "Jiatao Gu", "Ying Shen", "Shuangfei Zhai", "Yizhe Zhang", "Navdeep Jaitly", "Joshua M. Susskind" ]
NeurIPS.cc/2024/Conference
2405.21048
[ "" ]
https://huggingface.co/papers/2405.21048
4
12
0
6
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=qZFshkbWDo
@inproceedings{ min2024uncovering, title={Uncovering, Explaining, and Mitigating the Superficial Safety of Backdoor Defense}, author={Rui Min and Zeyu Qin and Nevin L. Zhang and Li Shen and Minhao Cheng}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=qZFshkbWDo} }
Backdoor attacks pose a significant threat to Deep Neural Networks (DNNs) as they allow attackers to manipulate model predictions with backdoor triggers. To address these security vulnerabilities, various backdoor purification methods have been proposed to purify compromised models. Typically, these purified models exhibit low Attack Success Rates (ASR), rendering them resistant to backdoored inputs. However, \textit{Does achieving a low ASR through current safety purification methods truly eliminate learned backdoor features from the pretraining phase?} In this paper, we provide an affirmative answer to this question by thoroughly investigating the \textit{Post-Purification Robustness} of current backdoor purification methods. We find that current safety purification methods are vulnerable to the rapid re-learning of backdoor behavior, even when further fine-tuning of purified models is performed using a very small number of poisoned samples. Based on this, we further propose the practical Query-based Reactivation Attack (QRA), which can effectively reactivate the backdoor by merely querying purified models. We find that the failure to achieve satisfactory post-purification robustness stems from the insufficient deviation of purified models from the backdoored model along the backdoor-connected path. To improve the post-purification robustness, we propose a straightforward tuning defense, Path-Aware Minimization (PAM), which promotes deviation along backdoor-connected paths with extra model updates. Extensive experiments demonstrate that PAM significantly improves post-purification robustness while maintaining good clean accuracy and a low ASR. Our work provides a new perspective on understanding the effectiveness of backdoor safety tuning and highlights the importance of faithfully assessing the model's safety.
Uncovering, Explaining, and Mitigating the Superficial Safety of Backdoor Defense
[ "Rui Min", "Zeyu Qin", "Nevin L. Zhang", "Li Shen", "Minhao Cheng" ]
NeurIPS.cc/2024/Conference
2410.09838
[ "https://github.com/aisafety-hkust/stable_backdoor_purification" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=qXidsICaja
@inproceedings{ shi2024expertlevel, title={Expert-level protocol translation for self-driving labs}, author={Yu-Zhe Shi and Fanxu Meng and Haofei Hou and Zhangqian Bi and Qiao Xu and Lecheng Ruan and Qining Wang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=qXidsICaja} }
Recent development in Artificial Intelligence (AI) models has propelled their application in scientific discovery, but the validation and exploration of these discoveries require subsequent empirical experimentation. The concept of self-driving laboratories promises to automate and thus boost the experimental process following AI-driven discoveries. However, the transition of experimental protocols, originally crafted for human comprehension, into formats interpretable by machines presents significant challenges, which, within the context of a specific expert domain, encompass the necessity for structured as opposed to natural language, the imperative for explicit rather than tacit knowledge, and the preservation of causality and consistency throughout protocol steps. Presently, the task of protocol translation predominantly requires the manual and labor-intensive involvement of domain experts and information technology specialists, rendering the process time-intensive. To address these issues, we propose a framework that automates the protocol translation process through a three-stage workflow, which incrementally constructs Protocol Dependence Graphs (PDGs) that are structured at the syntax level, completed at the semantics level, and linked at the execution level. Quantitative and qualitative evaluations have demonstrated its performance on par with that of human experts, underscoring its potential to significantly expedite and democratize the process of scientific discovery by elevating the automation capabilities within self-driving laboratories.
Expert-level protocol translation for self-driving labs
[ "Yu-Zhe Shi", "Fanxu Meng", "Haofei Hou", "Zhangqian Bi", "Qiao Xu", "Lecheng Ruan", "Qining Wang" ]
NeurIPS.cc/2024/Conference
2411.00444
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=qXZVSy9LFR
@inproceedings{ cheng2024emotionllama, title={Emotion-{LL}a{MA}: Multimodal Emotion Recognition and Reasoning with Instruction Tuning}, author={Zebang Cheng and Zhi-Qi Cheng and Jun-Yan He and Kai Wang and Yuxiang Lin and Zheng Lian and Xiaojiang Peng and Alexander G Hauptmann}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=qXZVSy9LFR} }
Accurate emotion perception is crucial for various applications, including human-computer interaction, education, and counseling. However, traditional single-modality approaches often fail to capture the complexity of real-world emotional expressions, which are inherently multimodal. Moreover, existing Multimodal Large Language Models (MLLMs) face challenges in integrating audio and recognizing subtle facial micro-expressions. To address this, we introduce the MERR dataset, containing 28,618 coarse-grained and 4,487 fine-grained annotated samples across diverse emotional categories. This dataset enables models to learn from varied scenarios and generalize to real-world applications. Furthermore, we propose Emotion-LLaMA, a model that seamlessly integrates audio, visual, and textual inputs through emotion-specific encoders. By aligning features into a shared space and employing a modified LLaMA model with instruction tuning, Emotion-LLaMA significantly enhances both emotional recognition and reasoning capabilities. Extensive evaluations show that Emotion-LLaMA outperforms other MLLMs, achieving top scores in Clue Overlap (7.83) and Label Overlap (6.25) on EMER, an F1 score of 0.9036 on the MER2023-SEMI challenge, and the highest UAR (45.59) and WAR (59.37) in zero-shot evaluations on the DFEW dataset.
Emotion-LLaMA: Multimodal Emotion Recognition and Reasoning with Instruction Tuning
[ "Zebang Cheng", "Zhi-Qi Cheng", "Jun-Yan He", "Kai Wang", "Yuxiang Lin", "Zheng Lian", "Xiaojiang Peng", "Alexander G Hauptmann" ]
NeurIPS.cc/2024/Conference
2406.11161
[ "https://github.com/zebangcheng/emotion-llama" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=qWi6ESgBjB
@inproceedings{ shen2024prune, title={Prune and Repaint: Content-Aware Image Retargeting for any Ratio}, author={Feihong Shen and Chao Li and Yifeng Geng and Yongjian Deng and Hao Chen}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=qWi6ESgBjB} }
Image retargeting is the task of adjusting the aspect ratio of images to suit different display devices or presentation environments. However, existing retargeting methods often struggle to balance the preservation of key semantics and image quality, resulting in either deformation or loss of important objects, or the introduction of local artifacts such as discontinuous pixels and inconsistent regenerated content. To address these issues, we propose a content-aware retargeting method called PruneRepaint. It incorporates semantic importance for each pixel to guide the identification of regions that need to be pruned or preserved in order to maintain key semantics. Additionally, we introduce an adaptive repainting module that selects image regions for repainting based on the distribution of pruned pixels and the proportion between foreground size and target aspect ratio, thus achieving local smoothness after pruning. By focusing on the content and structure of the foreground, our PruneRepaint approach adaptively avoids key content loss and deformation, while effectively mitigating artifacts with local repainting. We conduct experiments on the public RetargetMe benchmark and demonstrate through objective experimental results and subjective user studies that our method outperforms previous approaches in terms of preserving semantics and aesthetics, as well as better generalization across diverse aspect ratios. Codes will be available at https://github.com/fhshen2022/PruneRepaint.
Prune and Repaint: Content-Aware Image Retargeting for any Ratio
[ "Feihong Shen", "Chao Li", "Yifeng Geng", "Yongjian Deng", "Hao Chen" ]
NeurIPS.cc/2024/Conference
2410.22865
[ "https://github.com/fhshen2022/prunerepaint" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=qWi33pPecC
@inproceedings{ hu2024most, title={Most Influential Subset Selection: Challenges, Promises, and Beyond}, author={Yuzheng Hu and Pingbang Hu and Han Zhao and Jiaqi Ma}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=qWi33pPecC} }
How can we attribute the behaviors of machine learning models to their training data? While the classic influence function sheds light on the impact of individual samples, it often fails to capture the more complex and pronounced collective influence of a set of samples. To tackle this challenge, we study the Most Influential Subset Selection (MISS) problem, which aims to identify a subset of training samples with the greatest collective influence. We conduct a comprehensive analysis of the prevailing approaches in MISS, elucidating their strengths and weaknesses. Our findings reveal that influence-based greedy heuristics, a dominant class of algorithms in MISS, can provably fail even in linear regression. We delineate the failure modes, including the errors of the influence function and the non-additive structure of the collective influence. Conversely, we demonstrate that an adaptive version of these heuristics, which applies them iteratively, can effectively capture the interactions among samples and thus partially address the issues. Experiments on real-world datasets corroborate these theoretical findings, and further demonstrate that the merit of adaptivity can extend to more complex scenarios such as classification tasks and non-linear neural networks. We conclude our analysis by emphasizing the inherent trade-off between performance and computational efficiency, questioning the use of additive metrics such as the linear datamodeling score, and offering a range of discussions.
Most Influential Subset Selection: Challenges, Promises, and Beyond
[ "Yuzheng Hu", "Pingbang Hu", "Han Zhao", "Jiaqi Ma" ]
NeurIPS.cc/2024/Conference
2409.18153
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
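A small linear-regression illustration of the gap the entry above analyzes, with made-up data: a one-shot greedy ranking by exact leave-one-out influence versus an adaptive variant that refits and re-ranks after each removal. This is a sketch of the general idea, not the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, k = 60, 3, 5
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)
x_test = rng.normal(size=d)

def fit(idx):
    theta, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
    return theta

def influences(idx):
    """Exact leave-one-out effect of each kept sample on the test prediction."""
    Xs, ys = X[idx], y[idx]
    G_inv = np.linalg.inv(Xs.T @ Xs)
    r = ys - Xs @ fit(idx)                        # residuals
    h = np.einsum("ij,jk,ik->i", Xs, G_inv, Xs)   # leverage scores
    return (Xs @ G_inv @ x_test) * r / (1.0 - h)

idx = np.arange(n)
# One-shot greedy: rank once by influence magnitude (the heuristic that can fail).
greedy = idx[np.argsort(-np.abs(influences(idx)))[:k]]
# Adaptive greedy: remove the top sample, refit, and re-rank at every step.
kept, removed = list(idx), []
for _ in range(k):
    i = np.argmax(np.abs(influences(np.array(kept))))
    removed.append(kept.pop(i))

base = x_test @ fit(idx)
print("prediction shift, one-shot:", abs(x_test @ fit(np.setdiff1d(idx, greedy)) - base))
print("prediction shift, adaptive:", abs(x_test @ fit(np.setdiff1d(idx, removed)) - base))
```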
null
https://openreview.net/forum?id=qTypwXvNJa
@inproceedings{ mellot2024geodesic, title={Geodesic Optimization for Predictive Shift Adaptation on {EEG} data}, author={Apolline Mellot and Antoine Collas and Sylvain Chevallier and Alexandre Gramfort and Denis Alexander Engemann}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=qTypwXvNJa} }
Electroencephalography (EEG) data is often collected from diverse contexts involving different populations and EEG devices. This variability can induce distribution shifts in the data $X$ and in the biomedical variables of interest $y$, thus limiting the application of supervised machine learning (ML) algorithms. While domain adaptation (DA) methods have been developed to mitigate the impact of these shifts, such methods struggle when distribution shifts occur simultaneously in $X$ and $y$. As state-of-the-art ML models for EEG represent the data by spatial covariance matrices, which lie on the Riemannian manifold of Symmetric Positive Definite (SPD) matrices, it is appealing to study DA techniques operating on the SPD manifold. This paper proposes a novel method termed Geodesic Optimization for Predictive Shift Adaptation (GOPSA) to address test-time multi-source DA for situations in which source domains have distinct $y$ distributions. GOPSA exploits the geodesic structure of the Riemannian manifold to jointly learn a domain-specific re-centering operator representing site-specific intercepts and the regression model. We performed empirical benchmarks on the cross-site generalization of age-prediction models with resting-state EEG data from a large multi-national dataset (HarMNqEEG), which included $14$ recording sites and more than $1500$ human participants. Compared to state-of-the-art methods, our results showed that GOPSA achieved significantly higher performance on three regression metrics ($R^2$, MAE, and Spearman's $\rho$) for several source-target site combinations, highlighting its effectiveness in tackling multi-source DA with predictive shifts in EEG data analysis. Our method has the potential to combine the advantages of mixed-effects modeling with machine learning for biomedical applications of EEG, such as multicenter clinical trials.
Geodesic Optimization for Predictive Shift Adaptation on EEG data
[ "Apolline Mellot", "Antoine Collas", "Sylvain Chevallier", "Alexandre Gramfort", "Denis Alexander Engemann" ]
NeurIPS.cc/2024/Conference
2407.03878
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=qRnmLJQHgx
@inproceedings{ bachmann2024m, title={4M-21: An Any-to-Any Vision Model for Tens of Tasks and Modalities}, author={Roman Bachmann and O{\u{g}}uzhan Fatih Kar and David Mizrahi and Ali Garjani and Mingfei Gao and David Griffiths and Jiaming Hu and Afshin Dehghan and Amir Zamir}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=qRnmLJQHgx} }
Current multimodal and multitask foundation models, like 4M or UnifiedIO, show promising results. However, their out-of-the-box abilities to accept diverse inputs and perform diverse tasks are limited by the (usually small) number of modalities and tasks they are trained on. In this paper, we develop a single any-to-any model trained on tens of highly diverse modalities, by performing co-training on large-scale multimodal datasets and text corpora. This includes training on images and text along with several semantic and geometric modalities, feature maps from recent state-of-the-art models like DINOv2 and ImageBind, pseudo labels of specialist models like SAM and 4DHumans, and a range of new modalities that allow for novel ways to interact with the model and steer the generation, for example, image metadata or color palettes. A crucial step in this process is performing discrete tokenization on various modalities, whether they are image-like, neural network feature maps, vectors, structured data like instance segmentation or human poses, or data that can be represented as text. Through this, we show the possibility of training one model to solve at least 3x more tasks/modalities than existing models and doing so without a loss in performance. In addition, this enables more fine-grained and controllable multimodal generation capabilities and allows studying the distillation of models trained on diverse data and objectives into one unified model. We scale the training to a three-billion-parameter model and different datasets. The multimodal models and training code are open sourced at https://4m.epfl.ch/.
4M-21: An Any-to-Any Vision Model for Tens of Tasks and Modalities
[ "Roman Bachmann", "Oğuzhan Fatih Kar", "David Mizrahi", "Ali Garjani", "Mingfei Gao", "David Griffiths", "Jiaming Hu", "Afshin Dehghan", "Amir Zamir" ]
NeurIPS.cc/2024/Conference
2406.09406
[ "" ]
https://huggingface.co/papers/2406.09406
7
13
2
9
[ "EPFL-VILAB/4M-21_XL", "EPFL-VILAB/4M-7_B_CC12M", "EPFL-VILAB/4M-21_B", "EPFL-VILAB/4M-7-T2I_XL_CC12M", "EPFL-VILAB/4M-21_L", "EPFL-VILAB/4M_tokenizers_ImageBind-H14_8k_224-448", "EPFL-VILAB/4M_tokenizers_rgb_16k_224-448", "EPFL-VILAB/4M-7-SR_L_CC12M", "EPFL-VILAB/4M_tokenizers_normal_8k_224-448", "EPFL-VILAB/4M_tokenizers_depth_8k_224-448", "EPFL-VILAB/4M_tokenizers_semseg_4k_224-448", "EPFL-VILAB/4M_tokenizers_CLIP-B16_8k_224-448", "EPFL-VILAB/4M-7_L_CC12M", "EPFL-VILAB/4M-7_B_COYO700M", "EPFL-VILAB/4M-7_XL_CC12M", "EPFL-VILAB/4M_tokenizers_sam-instance_1k_64", "EPFL-VILAB/4M_tokenizers_human-poses_1k_8", "EPFL-VILAB/4M_tokenizers_ImageBind-H14-global_8k_16_224", "EPFL-VILAB/4M_tokenizers_DINOv2-B14-global_8k_16_224", "EPFL-VILAB/4M_tokenizers_DINOv2-B14_8k_224-448", "EPFL-VILAB/4M-7_L_COYO700M", "EPFL-VILAB/4M-7_XL_COYO700M", "EPFL-VILAB/4M-7-T2I_B_CC12M", "EPFL-VILAB/4M-7-T2I_L_CC12M", "EPFL-VILAB/4M_tokenizers_edge_8k_224-512" ]
[]
[ "EPFL-VILAB/4M", "Omega02gdfdd/4M", "aroraaman/image-retrieval-using-apple-4M-21", "ReySajju742/4M", "visualizingjp/4M" ]
[ "EPFL-VILAB/4M-21_XL", "EPFL-VILAB/4M-7_B_CC12M", "EPFL-VILAB/4M-21_B", "EPFL-VILAB/4M-7-T2I_XL_CC12M", "EPFL-VILAB/4M-21_L", "EPFL-VILAB/4M_tokenizers_ImageBind-H14_8k_224-448", "EPFL-VILAB/4M_tokenizers_rgb_16k_224-448", "EPFL-VILAB/4M-7-SR_L_CC12M", "EPFL-VILAB/4M_tokenizers_normal_8k_224-448", "EPFL-VILAB/4M_tokenizers_depth_8k_224-448", "EPFL-VILAB/4M_tokenizers_semseg_4k_224-448", "EPFL-VILAB/4M_tokenizers_CLIP-B16_8k_224-448", "EPFL-VILAB/4M-7_L_CC12M", "EPFL-VILAB/4M-7_B_COYO700M", "EPFL-VILAB/4M-7_XL_CC12M", "EPFL-VILAB/4M_tokenizers_sam-instance_1k_64", "EPFL-VILAB/4M_tokenizers_human-poses_1k_8", "EPFL-VILAB/4M_tokenizers_ImageBind-H14-global_8k_16_224", "EPFL-VILAB/4M_tokenizers_DINOv2-B14-global_8k_16_224", "EPFL-VILAB/4M_tokenizers_DINOv2-B14_8k_224-448", "EPFL-VILAB/4M-7_L_COYO700M", "EPFL-VILAB/4M-7_XL_COYO700M", "EPFL-VILAB/4M-7-T2I_B_CC12M", "EPFL-VILAB/4M-7-T2I_L_CC12M", "EPFL-VILAB/4M_tokenizers_edge_8k_224-512" ]
[]
[ "EPFL-VILAB/4M", "Omega02gdfdd/4M", "aroraaman/image-retrieval-using-apple-4M-21", "ReySajju742/4M", "visualizingjp/4M" ]
1
poster
null
https://openreview.net/forum?id=qQlmONeI5k
@inproceedings{ hu2024empowering, title={Empowering Visible-Infrared Person Re-Identification with Large Foundation Models}, author={Zhangyi Hu and Bin Yang and Mang Ye}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=qQlmONeI5k} }
Visible-Infrared Person Re-identification (VI-ReID) is a challenging cross-modal retrieval task due to significant modality differences, primarily caused by the absence of detailed color information in the infrared modality. The development of large foundation models like Large Language Models (LLMs) and Vision Language Models (VLMs) motivates us to investigate a feasible solution to empower VI-ReID performance with off-the-shelf large foundation models. To this end, we propose a novel Text-enhanced VI-ReID framework driven by Large Foundation Models (TVI-LFM). The basic idea is to enrich the representation of the infrared modality with textual descriptions automatically generated by VLMs. Specifically, we incorporate a pre-trained VLM to extract textual features from texts generated by the VLM and augmented by the LLM, and incrementally fine-tune the text encoder to minimize the domain gap between the generated texts and the original visual modalities. Meanwhile, to enhance the infrared modality with the extracted textual representations, we leverage the modality alignment capabilities of VLMs and VLM-generated feature-level filters. This allows the text model to learn complementary features from the infrared modality, ensuring the consistency of the semantic structure between the fusion modality and the visible modality. Furthermore, we introduce modality joint learning to align the features of all modalities, ensuring that textual features maintain a stable semantic representation of the overall pedestrian appearance during complementary information learning. Additionally, a modality ensemble retrieval strategy is proposed to leverage the complementary strengths of each query modality to improve retrieval effectiveness and robustness. Extensive experiments demonstrate that our method significantly improves retrieval performance on three expanded cross-modal re-identification datasets, paving the way for utilizing large foundation models in downstream data-demanding multi-modal retrieval tasks.
Empowering Visible-Infrared Person Re-Identification with Large Foundation Models
[ "Zhangyi Hu", "Bin Yang", "Mang Ye" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=qPpVDzPhSL
@inproceedings{ su2024source, title={Source Code Foundation Models are Transferable Binary Analysis Knowledge Bases}, author={Zian Su and Xiangzhe Xu and Ziyang Huang and Kaiyuan Zhang and Xiangyu Zhang}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=qPpVDzPhSL} }
Human-Oriented Binary Reverse Engineering (HOBRE) lies at the intersection of binary and source code, aiming to lift binary code to human-readable content relevant to source code, thereby bridging the binary-source semantic gap. Recent advancements in uni-modal code model pre-training, particularly in generative Source Code Foundation Models (SCFMs) and binary understanding models, have laid the groundwork for transfer learning applicable to HOBRE. However, existing approaches for HOBRE rely heavily on uni-modal models like SCFMs for supervised fine-tuning or general LLMs for prompting, resulting in sub-optimal performance. Inspired by recent progress in large multi-modal models, we propose that it is possible to harness the strengths of uni-modal code models from both sides to bridge the semantic gap effectively. In this paper, we introduce a novel probe-and-recover framework that incorporates a binary-source encoder-decoder model and black-box LLMs for binary analysis. Our approach leverages the pre-trained knowledge within SCFMs to synthesize relevant, symbol-rich code fragments as context. This additional context enables black-box LLMs to enhance recovery accuracy. We demonstrate significant improvements in zero-shot binary summarization and binary function name recovery, with a 10.3% relative gain in CHRF and a 16.7% relative gain in a GPT4-based metric for summarization, as well as a 6.7% and 7.4% absolute increase in token-level precision and recall for name recovery, respectively. These results highlight the effectiveness of our approach in automating and improving binary code analysis.
Source Code Foundation Models are Transferable Binary Analysis Knowledge Bases
[ "Zian Su", "Xiangzhe Xu", "Ziyang Huang", "Kaiyuan Zhang", "Xiangyu Zhang" ]
NeurIPS.cc/2024/Conference
2405.19581
[ "https://github.com/ziansu/prorec" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=qOSFiJdVkZ
@inproceedings{ benjamin2024continual, title={Continual learning with the neural tangent ensemble}, author={Ari S Benjamin and Christian-Gernot Pehle and Kyle Daruwalla}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=qOSFiJdVkZ} }
A natural strategy for continual learning is to weight a Bayesian ensemble of fixed functions. This suggests that if a (single) neural network could be interpreted as an ensemble, one could design effective algorithms that learn without forgetting. To realize this possibility, we observe that a neural network classifier with N parameters can be interpreted as a weighted ensemble of N classifiers, and that in the lazy regime limit these classifiers are fixed throughout learning. We call these classifiers the *neural tangent experts* and show they output valid probability distributions over the labels. We then derive the likelihood and posterior probability of each expert given past data. Surprisingly, the posterior updates for these experts are equivalent to a scaled and projected form of stochastic gradient descent (SGD) over the network weights. Away from the lazy regime, networks can be seen as ensembles of adaptive experts which improve over time. These results offer a new interpretation of neural networks as Bayesian ensembles of experts, providing a principled framework for understanding and mitigating catastrophic forgetting in continual learning settings.
Continual learning with the neural tangent ensemble
[ "Ari S Benjamin", "Christian-Gernot Pehle", "Kyle Daruwalla" ]
NeurIPS.cc/2024/Conference
2408.17394
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
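A cartoon of the framing in the entry above: learning by reweighting a fixed ensemble rather than changing its members. The experts here are random linear scorers, not the paper's neural tangent experts, and the sizes and two-task setup are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
n_experts, d = 500, 2
W = rng.normal(size=(n_experts, d))            # fixed experts: random linear scorers

def expert_probs(x):
    """Each fixed expert outputs a Bernoulli probability P(y=1 | x)."""
    return 1.0 / (1.0 + np.exp(-W @ x))

log_post = np.zeros(n_experts)                 # log-posterior, uniform prior
for t in range(400):
    task = 0 if t < 200 else 1                 # two sequential tasks (continual setting)
    x = rng.normal(size=d)
    y = float(x[task] > 0)
    p = expert_probs(x)
    # Bayesian update: reweight the fixed experts by their likelihood; no
    # expert is ever overwritten, so earlier tasks are never "unlearned".
    log_post += np.log(np.clip(y * p + (1 - y) * (1 - p), 1e-12, None))

post = np.exp(log_post - log_post.max())
post /= post.sum()
x = rng.normal(size=d)
print("ensemble P(y=1|x) after both tasks:", float(post @ expert_probs(x)))
```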
null
https://openreview.net/forum?id=qNXRXUC90b
@inproceedings{ liu2024uncertaintyaware, title={Uncertainty-aware Fine-tuning of Segmentation Foundation Models}, author={Kangning Liu and Brian L. Price and Jason Kuen and Yifei Fan and Zijun Wei and Luis Figueroa and Krzysztof J. Geras and Carlos Fernandez-Granda}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=qNXRXUC90b} }
The Segment Anything Model (SAM) is a large-scale foundation model that has revolutionized segmentation methodology. Despite its impressive generalization ability, the segmentation accuracy of SAM on images with intricate structures is often unsatisfactory. Recent works have proposed lightweight fine-tuning using high-quality annotated data to improve accuracy on such images. However, here we provide extensive empirical evidence that this strategy leads to forgetting how to "segment anything": these models lose the original generalization abilities of SAM, in the sense that they perform worse for segmentation tasks not represented in the annotated fine-tuning set. To improve performance without forgetting, we introduce a novel framework that combines high-quality annotated data with a large unlabeled dataset. The framework relies on two methodological innovations. First, we quantify the uncertainty in the SAM pseudo labels associated with the unlabeled data and leverage it to perform uncertainty-aware fine-tuning. Second, we encode the type of segmentation task associated with each training example using a $\textit{task prompt}$ to reduce ambiguity. We evaluated the proposed Segmentation with Uncertainty Model (SUM) on a diverse test set consisting of 14 public benchmarks, where it achieves state-of-the-art results. Notably, our method consistently surpasses SAM by 3-6 points in mean IoU and 4-7 in mean boundary IoU across point-prompt interactive segmentation rounds. Code is available at https://github.com/Kangningthu/SUM
Uncertainty-aware Fine-tuning of Segmentation Foundation Models
[ "Kangning Liu", "Brian L. Price", "Jason Kuen", "Yifei Fan", "Zijun Wei", "Luis Figueroa", "Krzysztof J. Geras", "Carlos Fernandez-Granda" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=qLnXPVvwLx
@inproceedings{ qiao2024prism, title={Prism: A Framework for Decoupling and Assessing the Capabilities of {VLM}s}, author={Yuxuan Qiao and Haodong Duan and Xinyu Fang and Junming Yang and Lin Chen and Songyang Zhang and Jiaqi Wang and Dahua Lin and Kai Chen}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=qLnXPVvwLx} }
Vision Language Models (VLMs) demonstrate remarkable proficiency in addressing a wide array of visual questions, which requires strong perception and reasoning faculties. Assessing these two competencies independently is crucial for model refinement, despite the inherent difficulty due to the intertwined nature of seeing and reasoning in existing VLMs. To tackle this issue, we present Prism, an innovative framework designed to disentangle the perception and reasoning processes involved in visual question solving. Prism comprises two distinct stages: a perception stage that utilizes a VLM to extract and articulate visual information in textual form, and a reasoning stage that formulates responses based on the extracted visual information using a Large Language Model (LLM). This modular design enables the systematic comparison and assessment of both proprietary and open-source VLMs for their perception and reasoning strengths. Our analytical framework provides several valuable insights, underscoring Prism's potential as a cost-effective solution for vision-language tasks. By combining a streamlined VLM focused on perception with a powerful LLM tailored for reasoning, Prism achieves superior results in general vision-language tasks while substantially cutting down on training and operational expenses. Quantitative evaluations show that Prism, when configured with a vanilla 2B LLaVA and freely accessible GPT-3.5, delivers performance on par with VLMs $10 \times$ larger on the rigorous multimodal benchmark MMStar.
Prism: A Framework for Decoupling and Assessing the Capabilities of VLMs
[ "Yuxuan Qiao", "Haodong Duan", "Xinyu Fang", "Junming Yang", "Lin Chen", "Songyang Zhang", "Jiaqi Wang", "Dahua Lin", "Kai Chen" ]
NeurIPS.cc/2024/Conference
2406.14544
[ "https://github.com/sparksjoe/prism" ]
https://huggingface.co/papers/2406.14544
8
34
2
9
[ "Yuxuan-Qiao/PrismCaptioner-2B", "Yuxuan-Qiao/PrismCaptioner-7B" ]
[]
[]
[ "Yuxuan-Qiao/PrismCaptioner-2B", "Yuxuan-Qiao/PrismCaptioner-7B" ]
[]
[]
1
poster
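The decoupling in the Prism entry above amounts to a two-stage pipeline. The stubs below are hypothetical placeholders (`call_vlm` and `call_llm` are not real APIs and simply return canned strings here); they only show where a captioning VLM and a text-only reasoning LLM would plug in.

```python
def call_vlm(image_path: str, prompt: str) -> str:
    # Hypothetical stub: replace with a real captioning-VLM call.
    return "A bar chart with three bars labeled A, B, C; bar B is the tallest."

def call_llm(prompt: str) -> str:
    # Hypothetical stub: replace with a real text-only LLM call.
    return "B"

def decoupled_vqa(image_path: str, question: str) -> str:
    # Stage 1 (perception): the VLM only describes the image; it never sees
    # the question, so its output isolates perceptual ability.
    description = call_vlm(image_path, "Describe every detail relevant to this image.")
    # Stage 2 (reasoning): a text-only LLM answers from the description alone,
    # isolating reasoning ability from perception.
    return call_llm(f"Image description:\n{description}\n\nQuestion: {question}\nAnswer:")

print(decoupled_vqa("chart.png", "Which bar is tallest?"))
```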
null
https://openreview.net/forum?id=qKfiWNHp6k
@inproceedings{ yang2024recognize, title={Recognize Any Regions}, author={Haosen Yang and Chuofan Ma and Bin Wen and Yi Jiang and Zehuan Yuan and Xiatian Zhu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=qKfiWNHp6k} }
Understanding the semantics of individual regions or patches of unconstrained images, such as open-world object detection, remains a critical yet challenging task in computer vision. Building on the success of powerful image-level vision-language (ViL) foundation models like CLIP, recent efforts have sought to harness their capabilities by either training a contrastive model from scratch with an extensive collection of region-label pairs or aligning the outputs of a detection model with image-level representations of region proposals. Despite notable progress, these approaches are plagued by computationally intensive training requirements, susceptibility to data noise, and deficiency in contextual information. To address these limitations, we explore the synergistic potential of off-the-shelf foundation models, leveraging their respective strengths in localization and semantics. We introduce a novel, generic, and efficient architecture, named RegionSpot, designed to integrate position-aware localization knowledge from a localization foundation model (e.g., SAM) with semantic information from a ViL model (e.g., CLIP). To fully exploit pretrained knowledge while minimizing training overhead, we keep both foundation models frozen, focusing optimization efforts solely on a lightweight attention-based knowledge integration module. Extensive experiments in open-world object recognition show that our RegionSpot achieves significant performance gains over prior alternatives, along with substantial computational savings (e.g., training our model with 3 million samples in a single day using 8 V100 GPUs). RegionSpot outperforms GLIP-L by 2.9 in mAP on the LVIS val set, with an even larger margin of 13.1 AP for more challenging and rare categories, and a 2.5 AP increase on ODinW. Furthermore, it exceeds GroundingDINO-L by 11.0 AP for rare categories on the LVIS minival set.
Recognize Any Regions
[ "Haosen Yang", "Chuofan Ma", "Bin Wen", "Yi Jiang", "Zehuan Yuan", "Xiatian Zhu" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=qK4iS49KDm
@inproceedings{ lee2024neural, title={Neural network learns low-dimensional polynomials with {SGD} near the information-theoretic limit}, author={Jason D. Lee and Kazusato Oko and Taiji Suzuki and Denny Wu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=qK4iS49KDm} }
We study the problem of gradient descent learning of a single-index target function $f_*(\boldsymbol{x}) = \textstyle\sigma_*\left(\langle\boldsymbol{x},\boldsymbol{\theta}\rangle\right)$ under isotropic Gaussian data in $\mathbb{R}^d$, where the unknown link function $\sigma_*:\mathbb{R}\to\mathbb{R}$ has information exponent $p$ (defined as the lowest degree in the Hermite expansion). Prior works showed that gradient-based training of neural networks can learn this target with $n\gtrsim d^{\Theta(p)}$ samples, and such complexity is predicted to be necessary by the correlational statistical query lower bound. Surprisingly, we prove that a two-layer neural network optimized by an SGD-based algorithm (on the squared loss) learns $f_*$ with a complexity that is not governed by the information exponent. Specifically, for arbitrary polynomial single-index models, we establish a sample and runtime complexity of $n \simeq T = \Theta(d\cdot\mathrm{polylog} d)$, where $\Theta(\cdot)$ hides a constant only depending on the degree of $\sigma_*$; this dimension dependence matches the information theoretic limit up to polylogarithmic factors. More generally, we show that $n\gtrsim d^{(p_*-1)\vee 1}$ samples are sufficient to achieve low generalization error, where $p_* \le p$ is the \textit{generative exponent} of the link function. Core to our analysis is the reuse of minibatch in the gradient computation, which gives rise to higher-order information beyond correlational queries.
Neural network learns low-dimensional polynomials with SGD near the information-theoretic limit
[ "Jason D. Lee", "Kazusato Oko", "Taiji Suzuki", "Denny Wu" ]
NeurIPS.cc/2024/Conference
2406.01581
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=qInb7EUmxz
@inproceedings{ yanfan2024persistence, title={Persistence Homology Distillation for Semi-supervised Continual Learning}, author={YanFan and Yu Wang and Pengfei Zhu and Dongyue Chen and Qinghua Hu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=qInb7EUmxz} }
Semi-supervised continual learning (SSCL) has attracted significant attention for addressing catastrophic forgetting in semi-supervised data. Knowledge distillation, which leverages data representation and pair-wise similarity, has shown significant potential in preserving information in SSCL. However, traditional distillation strategies often fail on unlabeled data with inaccurate or noisy information, limiting their efficiency in feature spaces undergoing substantial changes during continual learning. To address these limitations, we propose Persistence Homology Distillation (PsHD) to preserve intrinsic structural information that is insensitive to noise in semi-supervised continual learning. First, we capture the structural features using persistence homology via the homological evolution across different scales in vision data, where the multi-scale characteristic establishes stability under noise interference. Next, we propose a persistence homology distillation loss in SSCL and design an acceleration algorithm to reduce the computational cost of persistence homology in our module. Furthermore, we demonstrate the superior stability of PsHD compared to sample representation and pair-wise similarity distillation methods, both theoretically and experimentally. Finally, experimental results on three widely used datasets validate that the new PsHD outperforms the state of the art by 3.9% on average, and also achieves a 1.5% improvement while reducing the memory buffer size by 60%, highlighting the potential of utilizing unlabeled data in SSCL. Our code is available: https://github.com/fanyan0411/PsHD.
Persistence Homology Distillation for Semi-supervised Continual Learning
[ "YanFan", "Yu Wang", "Pengfei Zhu", "Dongyue Chen", "Qinghua Hu" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=qIkYlfDZaI
@inproceedings{ zou2024a, title={A Closer Look at the {CLS} Token for Cross-Domain Few-Shot Learning}, author={Yixiong Zou and Shuai Yi and Yuhua Li and Ruixuan Li}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=qIkYlfDZaI} }
Vision Transformer (ViT) has shown great power in learning from large-scale datasets. However, collecting sufficient data for expert knowledge is always difficult. To handle this problem, Cross-Domain Few-Shot Learning (CDFSL) has been proposed to transfer the source-domain knowledge learned from sufficient data to target domains where only scarce data is available. In this paper, we find an intriguing phenomenon for the ViT-based CDFSL task that has been neglected by previous works: leaving the CLS token randomly initialized, instead of loading source-domain trained parameters, consistently improves target-domain performance. We then delve into this phenomenon for an interpretation. We find **the CLS token naturally absorbs domain information** due to the inherent structure of the ViT, which is represented as the low-frequency component in the Fourier frequency space of images. Based on this phenomenon and interpretation, we further propose a method for the CDFSL task to decouple the domain information in the CLS token during the source-domain training, and adapt the CLS token on the target domain for efficient few-shot learning. Extensive experiments on four benchmarks validate our rationale and state-of-the-art performance. Our codes are available at https://github.com/Zoilsen/CLS_Token_CDFSL.
A Closer Look at the CLS Token for Cross-Domain Few-Shot Learning
[ "Yixiong Zou", "Shuai Yi", "Yuhua Li", "Ruixuan Li" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=qGiZQb1Khm
@inproceedings{ sander2024watermarking, title={Watermarking Makes Language Models Radioactive}, author={Tom Sander and Pierre Fernandez and Alain Oliviero Durmus and Matthijs Douze and Teddy Furon}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=qGiZQb1Khm} }
We investigate the radioactivity of text generated by large language models (LLM), i.e., whether it is possible to detect that such synthetic input was used to train a subsequent LLM. Current methods like membership inference or active IP protection either work only in settings where the suspected text is known or do not provide reliable statistical guarantees. We discover that, on the contrary, it is possible to reliably determine if a language model was trained on synthetic data if that data is output by a watermarked LLM. Our new methods, specialized for radioactivity, detect with provable confidence the weak residuals of the watermark signal in the fine-tuned LLM. We link the radioactivity contamination level to the following properties: the watermark robustness, its proportion in the training set, and the fine-tuning process. For instance, if the suspect model is open-weight, we demonstrate that training on watermarked instructions can be detected with high confidence ($p$-value $< 10^{-5}$) even when as little as $5\%$ of the training text is watermarked.
Watermarking Makes Language Models Radioactive
[ "Tom Sander", "Pierre Fernandez", "Alain Oliviero Durmus", "Matthijs Douze", "Teddy Furon" ]
NeurIPS.cc/2024/Conference
2402.14904
[ "https://github.com/facebookresearch/radioactive-watermark" ]
https://huggingface.co/papers/2402.14904
3
23
2
5
[]
[]
[]
[]
[]
[]
1
oral
null
https://openreview.net/forum?id=qEpi8uWX3N
@inproceedings{ tian2024hydralora, title={HydraLo{RA}: An Asymmetric Lo{RA} Architecture for Efficient Fine-Tuning}, author={Chunlin Tian and Zhan Shi and Zhijiang Guo and Li Li and Cheng-zhong Xu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=qEpi8uWX3N} }
Adapting Large Language Models (LLMs) to new tasks through fine-tuning has been made more efficient by the introduction of Parameter-Efficient Fine-Tuning (PEFT) techniques, such as LoRA. However, these methods often underperform compared to full fine-tuning, particularly in scenarios involving complex datasets. This issue becomes even more pronounced in complex domains, highlighting the need for improved PEFT approaches that can achieve better performance. Through a series of experiments, we have uncovered two critical insights that shed light on the training and parameter inefficiency of LoRA. Building on these insights, we have developed HydraLoRA, a LoRA framework with an asymmetric structure that eliminates the need for domain expertise. Our experiments demonstrate that HydraLoRA outperforms other PEFT approaches, even those that rely on domain knowledge during the training and inference phases. Code is available at https://github.com/Clin0212/HydraLoRA.
HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning
[ "Chunlin Tian", "Zhan Shi", "Zhijiang Guo", "Li Li", "Cheng-zhong Xu" ]
NeurIPS.cc/2024/Conference
2404.19245
[ "https://github.com/clin0212/hydralora" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
oral
null
https://openreview.net/forum?id=qDuqp1nZZ6
@inproceedings{ sheffet2024differentially, title={Differentially Private Equivalence Testing for Continuous Distributions and Applications}, author={Or Sheffet and Daniel Omer}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=qDuqp1nZZ6} }
We present the first algorithm for testing equivalence between two continuous distributions using differential privacy (DP). Our algorithm is a private version of the algorithm of Diakonikolas et al. The algorithm of Diakonikolas et al. uses the data itself to repeatedly discretize the real line so that --- when the two distributions are far apart in ${\cal A}_k$-norm --- one of the discretized distributions exhibits a large $L_2$-norm difference; upon repeated sampling, such a large gap would be detected. Designing its private analogue poses two difficulties. First, our DP algorithm cannot resample new datapoints, as a change to a single datapoint may lead to a very large change in the discretization of the real line. In contrast, the (sorted) index of the discretization point changes only by $1$ between neighboring instances, and so we use a novel algorithm that sets the discretization points using random Bernoulli noise, resulting in only a few buckets being affected under the right coupling. Second, our algorithm, which doesn't resample data, requires that we also revisit the utility analysis of the original algorithm and prove its correctness w.r.t. the original sorted data; a problem we tackle by sampling a subset of Poisson-drawn size from each discretized bin. Lastly, since any distribution can be reduced to a continuous distribution, our algorithm carries over successfully to multiple other families of distributions and thus has numerous applications.
Differentially Private Equivalence Testing for Continuous Distributions and Applications
[ "Or Sheffet", "Daniel Omer" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=qDfPSWXSLt
@inproceedings{ yang2024specgaussian, title={Spec-Gaussian: Anisotropic View-Dependent Appearance for 3D Gaussian Splatting}, author={Ziyi Yang and Xinyu Gao and Yang-Tian Sun and Yi-Hua Huang and Xiaoyang Lyu and Wen Zhou and Shaohui Jiao and XIAOJUAN QI and Xiaogang Jin}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=qDfPSWXSLt} }
The recent advancements in 3D Gaussian splatting (3D-GS) have not only facilitated real-time rendering through modern GPU rasterization pipelines but have also attained state-of-the-art rendering quality. Nevertheless, despite its exceptional rendering quality and performance on standard datasets, 3D-GS frequently encounters difficulties in accurately modeling specular and anisotropic components. This issue stems from the limited ability of spherical harmonics (SH) to represent high-frequency information. To overcome this challenge, we introduce Spec-Gaussian, an approach that utilizes an anisotropic spherical Gaussian (ASG) appearance field instead of SH for modeling the view-dependent appearance of each 3D Gaussian. Additionally, we have developed a coarse-to-fine training strategy to improve learning efficiency and eliminate floaters caused by overfitting in real-world scenes. Our experimental results demonstrate that our method surpasses existing approaches in terms of rendering quality. Thanks to ASG, we have significantly improved the ability of 3D-GS to model scenes with specular and anisotropic components without increasing the number of 3D Gaussians. This improvement extends the applicability of 3D-GS to handle intricate scenarios with specular and anisotropic surfaces.
Spec-Gaussian: Anisotropic View-Dependent Appearance for 3D Gaussian Splatting
[ "Ziyi Yang", "Xinyu Gao", "Yang-Tian Sun", "Yi-Hua Huang", "Xiaoyang Lyu", "Wen Zhou", "Shaohui Jiao", "XIAOJUAN QI", "Xiaogang Jin" ]
NeurIPS.cc/2024/Conference
2402.15870
[ "" ]
https://huggingface.co/papers/2402.15870
0
0
0
9
[]
[]
[]
[]
[]
[]
1
poster
null
https://openreview.net/forum?id=qCpCy0EQAJ
@inproceedings{ ramkumar2024dynamic, title={Dynamic Neural Regeneration: Enhancing Deep Learning Generalization on Small Datasets}, author={Vijaya Raghavan T Ramkumar and Elahe Arani and Bahram Zonooz}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=qCpCy0EQAJ} }
The efficacy of deep learning techniques is contingent upon access to large volumes of data (labeled or unlabeled). However, in practical domains such as medical applications, data availability is often limited. This presents a significant challenge: How can we effectively train deep neural networks on relatively small datasets while improving generalization? Recent works have explored evolutionary or iterative training paradigms, which reinitialize a subset of parameters to enhance generalization performance for small datasets. However, these methods typically rely on randomly selected parameter subsets and maintain fixed masks throughout training, potentially leading to suboptimal outcomes. Inspired by neurogenesis in the brain, we propose a novel iterative training framework, Dynamic Neural Regeneration (DNR), that employs a data-aware dynamic masking scheme to eliminate redundant connections by estimating their significance. This approach increases the model's capacity for further learning through random weight reinitialization. Experimental results demonstrate that our approach outperforms existing methods in accuracy and robustness, highlighting its potential for real-world applications where data collection is challenging.
Dynamic Neural Regeneration: Enhancing Deep Learning Generalization on Small Datasets
[ "Vijaya Raghavan T Ramkumar", "Elahe Arani", "Bahram Zonooz" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=qCJ1dq5M7N
@inproceedings{ borse2024foura, title={Fou{RA}: Fourier Low-Rank Adaptation}, author={Shubhankar Borse and Shreya Kadambi and Nilesh Prasad Pandey and Kartikeya Bhardwaj and Viswanath Ganapathy and Sweta Priyadarshi and Risheek Garrepalli and Rafael Esteves and Munawar Hayat and Fatih Porikli}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=qCJ1dq5M7N} }
While Low-Rank Adaptation (LoRA) has proven beneficial for efficiently fine-tuning large models, LoRA fine-tuned text-to-image diffusion models lack diversity in the generated images, as the model tends to copy data from the observed training samples. This effect becomes more pronounced at higher values of adapter strength and for adapters with higher ranks which are fine-tuned on smaller datasets. To address these challenges, we present FouRA, a novel low-rank method that learns projections in the Fourier domain along with learning a flexible input-dependent adapter rank selection strategy. Through extensive experiments and analysis, we show that FouRA successfully solves the problems related to data copying and distribution collapse while significantly improving the generated image quality. We demonstrate that FouRA enhances the generalization of fine-tuned models thanks to its adaptive rank selection. We further show that the learned projections in the frequency domain are decorrelated and prove effective when merging multiple adapters. While FouRA is motivated for vision tasks, we also demonstrate its merits for language tasks on commonsense reasoning and GLUE benchmarks.
FouRA: Fourier Low-Rank Adaptation
[ "Shubhankar Borse", "Shreya Kadambi", "Nilesh Prasad Pandey", "Kartikeya Bhardwaj", "Viswanath Ganapathy", "Sweta Priyadarshi", "Risheek Garrepalli", "Rafael Esteves", "Munawar Hayat", "Fatih Porikli" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=qAP6RyYIJc
@inproceedings{ sutton2024stealth, title={Stealth edits to large language models}, author={Oliver Sutton and Qinghua Zhou and Wei Wang and Desmond Higham and Alexander N. Gorban and Alexander Bastounis and Ivan Y Tyukin}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=qAP6RyYIJc} }
We reveal the theoretical foundations of techniques for editing large language models, and present new methods which can do so without requiring retraining. Our theoretical insights show that a single metric (a measure of the intrinsic dimension of the model's features) can be used to assess a model's editability and reveals its previously unrecognised susceptibility to malicious *stealth attacks*. This metric is fundamental to predicting the success of a variety of editing approaches, and reveals new bridges between disparate families of editing methods. We collectively refer to these as *stealth editing* methods, because they directly update a model's weights to specify its response to specific known hallucinating prompts without affecting other model behaviour. By carefully applying our theoretical insights, we are able to introduce a new *jet-pack* network block which is optimised for highly selective model editing, uses only standard network operations, and can be inserted into existing networks. We also reveal the vulnerability of language models to stealth attacks: a small change to a model's weights which fixes its response to a single attacker-chosen prompt. Stealth attacks are computationally simple, do not require access to or knowledge of the model's training data, and therefore represent a potent yet previously unrecognised threat to redistributed foundation models. Extensive experimental results illustrate and support our methods and their theoretical underpinnings. Demos and source code are available at https://github.com/qinghua-zhou/stealth-edits.
Stealth edits to large language models
[ "Oliver Sutton", "Qinghua Zhou", "Wei Wang", "Desmond Higham", "Alexander N. Gorban", "Alexander Bastounis", "Ivan Y Tyukin" ]
NeurIPS.cc/2024/Conference
2406.12670
[ "https://github.com/qinghua-zhou/stealth-edits" ]
https://huggingface.co/papers/2406.12670
0
0
0
7
[]
[]
[ "qinghua-zhou/stealth-edits" ]
[]
[]
[ "qinghua-zhou/stealth-edits" ]
1
poster
null
https://openreview.net/forum?id=q9dKv1AK6l
@inproceedings{ mei2024small, title={Small steps no more: Global convergence of stochastic gradient bandits for arbitrary learning rates}, author={Jincheng Mei and Bo Dai and Alekh Agarwal and Sharan Vaswani and Anant Raj and Csaba Szepesvari and Dale Schuurmans}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=q9dKv1AK6l} }
We provide a new understanding of the stochastic gradient bandit algorithm by showing that it converges to a globally optimal policy almost surely using \emph{any} constant learning rate. This result demonstrates that the stochastic gradient algorithm continues to balance exploration and exploitation appropriately even in scenarios where standard smoothness and noise control assumptions break down. The proofs are based on novel findings about action sampling rates and the relationship between cumulative progress and noise, and extend the current understanding of how simple stochastic gradient methods behave in bandit settings.
Small steps no more: Global convergence of stochastic gradient bandits for arbitrary learning rates
[ "Jincheng Mei", "Bo Dai", "Alekh Agarwal", "Sharan Vaswani", "Anant Raj", "Csaba Szepesvari", "Dale Schuurmans" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=q9RLsvYOB3
@inproceedings{ zhong2024flexplanner, title={FlexPlanner: Flexible 3D Floorplanning via Deep Reinforcement Learning in Hybrid Action Space with Multi-Modality Representation}, author={Ruizhe Zhong and Xingbo Du and Shixiong Kai and Zhentao Tang and Siyuan Xu and Jianye HAO and Mingxuan Yuan and Junchi Yan}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=q9RLsvYOB3} }
In the Integrated Circuit (IC) design flow, floorplanning (FP) determines the position and shape of each block. Serving as a prototype for downstream tasks, it is critical and establishes the upper bound of the final PPA (Power, Performance, Area). However, with the emergence of 3D ICs with stacked layers, existing methods are not flexible enough to handle the versatile constraints. Besides, they typically face difficulties in aligning the cross-die modules in 3D ICs due to their heuristic representations, which could potentially result in severe data transfer failures. To address these issues, we propose FlexPlanner, a flexible learning-based method in a hybrid action space with multi-modality representation to simultaneously handle the position, aspect ratio, and alignment of blocks. To the best of our knowledge, FlexPlanner is the first learning-based approach to discard heuristic-based search in the 3D FP task. Thus, the solution space is not limited by the heuristic floorplanning representation, allowing for significant improvements in both wirelength and alignment scores. Specifically, FlexPlanner models 3D FP based on multiple modalities, including vision, graph, and sequence. To address the non-trivial heuristic-dependent issue, we design a sophisticated policy network with a hybrid action space and an asynchronous layer decision mechanism that allows for determining the versatile properties of each block. Experiments on the public benchmarks MCNC and GSRC show its effectiveness. We significantly improve the alignment score from 0.474 to 0.940 and achieve an average reduction of 16% in wirelength. Moreover, our method also demonstrates zero-shot transferability on unseen circuits.
FlexPlanner: Flexible 3D Floorplanning via Deep Reinforcement Learning in Hybrid Action Space with Multi-Modality Representation
[ "Ruizhe Zhong", "Xingbo Du", "Shixiong Kai", "Zhentao Tang", "Siyuan Xu", "Jianye HAO", "Mingxuan Yuan", "Junchi Yan" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=q7TxGUWlhD
@inproceedings{ wang2024nagent, title={N-agent Ad Hoc Teamwork}, author={Caroline Wang and Arrasy Rahman and Ishan Durugkar and Elad Liebman and Peter Stone}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=q7TxGUWlhD} }
Current approaches to learning cooperative multi-agent behaviors assume relatively restrictive settings. In standard fully cooperative multi-agent reinforcement learning, the learning algorithm controls *all* agents in the scenario, while in ad hoc teamwork, the learning algorithm usually assumes control over only a *single* agent in the scenario. However, many cooperative settings in the real world are much less restrictive. For example, in an autonomous driving scenario, a company might train its cars with the same learning algorithm, yet once on the road, these cars must cooperate with cars from another company. Towards expanding the class of scenarios that cooperative learning methods may optimally address, we introduce $N$*-agent ad hoc teamwork* (NAHT), where a set of autonomous agents must interact and cooperate with dynamically varying numbers and types of teammates. This paper formalizes the problem, and proposes the *Policy Optimization with Agent Modelling* (POAM) algorithm. POAM is a policy gradient, multi-agent reinforcement learning approach to the NAHT problem, that enables adaptation to diverse teammate behaviors by learning representations of teammate behaviors. Empirical evaluation on tasks from the multi-agent particle environment and StarCraft II shows that POAM improves cooperative task returns compared to baseline approaches, and enables out-of-distribution generalization to unseen teammates.
N-agent Ad Hoc Teamwork
[ "Caroline Wang", "Arrasy Rahman", "Ishan Durugkar", "Elad Liebman", "Peter Stone" ]
NeurIPS.cc/2024/Conference
2404.10740
[ "https://github.com/carolinewang01/naht" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=q5CkneUn6K
@inproceedings{ liu2024enhancing, title={Enhancing {LLM}{\textquoteright}s Cognition via Structurization}, author={Kai Liu and Zhihang Fu and Chao Chen and Wei Zhang and Rongxin Jiang and Fan Zhou and Yaowu Chen and Yue Wu and Jieping Ye}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=q5CkneUn6K} }
When reading long-form text, human cognition is complex and structurized. While large language models (LLMs) process input contexts through a causal and sequential perspective, this approach can potentially limit their ability to handle intricate and complex inputs effectively. To enhance LLM’s cognition capability, this paper presents a novel concept of context structurization. Specifically, we transform the plain, unordered contextual sentences into well-ordered and hierarchically structurized elements. By doing so, LLMs can better grasp intricate and extended contexts through precise attention and information-seeking along the organized structures. Extensive evaluations are conducted across various model architectures and sizes (including a series of auto-regressive LLMs as well as BERT-like masking models) on a diverse set of NLP tasks (e.g., context-based question-answering, exhaustive hallucination evaluation, and passage-level dense retrieval). Empirical results show consistent and significant performance gains afforded by a single-round structurization. In particular, we boost the open-sourced LLaMA2-70B model to achieve comparable performance against GPT-3.5-Turbo as the hallucination evaluator. Besides, we show the feasibility of distilling advanced LLMs’ language processing abilities to a smaller yet effective StruXGPT-7B to execute structurization, addressing the practicality of our approach. Code is available at https://github.com/alibaba/struxgpt.
Enhancing LLM’s Cognition via Structurization
[ "Kai Liu", "Zhihang Fu", "Chao Chen", "Wei Zhang", "Rongxin Jiang", "Fan Zhou", "Yaowu Chen", "Yue Wu", "Jieping Ye" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=q3XavKPorV
@inproceedings{ yuan2024selfplay, title={Self-Play Fine-tuning of Diffusion Models for Text-to-image Generation}, author={Huizhuo Yuan and Zixiang Chen and Kaixuan Ji and Quanquan Gu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=q3XavKPorV} }
Fine-tuning Diffusion Models remains an underexplored frontier in generative artificial intelligence (GenAI), especially when compared with the remarkable progress made in fine-tuning Large Language Models (LLMs). While cutting-edge diffusion models such as Stable Diffusion (SD) and SDXL rely on supervised fine-tuning, their performance inevitably plateaus after seeing a certain volume of data. Recently, reinforcement learning (RL) has been employed to fine-tune diffusion models with human preference data, but it requires at least two images (``winner'' and ``loser'' images) for each text prompt. In this paper, we introduce an innovative technique called self-play fine-tuning for diffusion models (SPIN-Diffusion), where the diffusion model engages in competition with its earlier versions, facilitating an iterative self-improvement process. Our approach offers an alternative to conventional supervised fine-tuning and RL strategies, significantly improving both model performance and alignment. Our experiments on the Pick-a-Pic dataset reveal that SPIN-Diffusion outperforms the existing supervised fine-tuning method in aspects of human preference alignment and visual appeal right from its first iteration. By the second iteration, it exceeds the performance of RLHF-based methods across all metrics, achieving these results with less data. Codes are available at \url{https://github.com/uclaml/SPIN-Diffusion/}.
Self-Play Fine-tuning of Diffusion Models for Text-to-image Generation
[ "Huizhuo Yuan", "Zixiang Chen", "Kaixuan Ji", "Quanquan Gu" ]
NeurIPS.cc/2024/Conference
2402.10210
[ "" ]
https://huggingface.co/papers/2402.10210
4
30
4
4
[ "UCLA-AGI/SPIN-Diffusion-iter3", "UCLA-AGI/SPIN-Diffusion-iter1", "UCLA-AGI/SPIN-Diffusion-iter2" ]
[]
[ "UCLA-AGI/SPIN-Diffusion-demo-v1" ]
[ "UCLA-AGI/SPIN-Diffusion-iter3", "UCLA-AGI/SPIN-Diffusion-iter1", "UCLA-AGI/SPIN-Diffusion-iter2" ]
[]
[ "UCLA-AGI/SPIN-Diffusion-demo-v1" ]
1
poster
null
https://openreview.net/forum?id=pzJjlnMvk5
@inproceedings{ kairanda2024neuralclothsim, title={NeuralClothSim: Neural Deformation Fields Meet the Thin Shell Theory}, author={Navami Kairanda and Marc Habermann and Christian Theobalt and Vladislav Golyanik}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=pzJjlnMvk5} }
Despite existing 3D cloth simulators producing realistic results, they predominantly operate on discrete surface representations (e.g. points and meshes) with a fixed spatial resolution, which often leads to large memory consumption and resolution-dependent simulations. Moreover, back-propagating gradients through the existing solvers is difficult and they hence cannot be easily integrated into modern neural architectures. In response, this paper re-thinks physically plausible cloth simulation: We propose NeuralClothSim, i.e., a new quasistatic cloth simulator using thin shells, in which surface deformation is encoded in neural network weights in the form of a neural field. Our memory-efficient solver operates on a new continuous coordinate-based surface representation called neural deformation fields (NDFs); it supervises NDF equilibria with the laws of the non-linear Kirchhoff-Love shell theory and a non-linear anisotropic material model. NDFs are adaptive: They 1) allocate their capacity to the deformation details and 2) allow surface state queries at arbitrary spatial resolutions without re-training. We show how to train NeuralClothSim while imposing hard boundary conditions and demonstrate multiple applications, such as material interpolation and simulation editing. The experimental results highlight the effectiveness of our continuous neural formulation.
NeuralClothSim: Neural Deformation Fields Meet the Thin Shell Theory
[ "Navami Kairanda", "Marc Habermann", "Christian Theobalt", "Vladislav Golyanik" ]
NeurIPS.cc/2024/Conference
2308.12970
[ "https://github.com/navamikairanda/neuralclothsim" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=pyqPUf36D2
@inproceedings{ peng2024pseudoprivate, title={Pseudo-Private Data Guided Model Inversion Attacks}, author={Xiong Peng and Bo Han and Feng Liu and Tongliang Liu and Mingyuan Zhou}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=pyqPUf36D2} }
In model inversion attacks (MIAs), adversaries attempt to recover private training data by exploiting access to a well-trained target model. Recent advancements have improved MIA performance using a two-stage generative framework. This approach first employs a generative adversarial network to learn a fixed distributional prior, which is then used to guide the inversion process during the attack. However, in this paper, we observe that such a fixed prior leads to a low probability of sampling actual private data during the inversion process, due to the inherent gap between the prior distribution and the private data distribution, thereby constraining attack performance. To address this limitation, we propose increasing the density around high-quality pseudo-private data—recovered samples through model inversion that exhibit characteristics of the private training data—by slightly tuning the generator. This strategy effectively increases the probability of sampling actual private data that is close to these pseudo-private data during the inversion process. After integrating our method, the generative model inversion pipeline is strengthened, leading to improvements over state-of-the-art MIAs. This paves the way for new research directions in generative MIAs.
Pseudo-Private Data Guided Model Inversion Attacks
[ "Xiong Peng", "Bo Han", "Feng Liu", "Tongliang Liu", "Mingyuan Zhou" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=pwRVGRWtGg
@inproceedings{ huang2024apathetic, title={Apathetic or Empathetic? Evaluating {LLM}s' Emotional Alignments with Humans}, author={Jen-tse Huang and Man Ho LAM and Eric John Li and Shujie Ren and Wenxuan Wang and Wenxiang Jiao and Zhaopeng Tu and Michael Lyu}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=pwRVGRWtGg} }
Evaluating Large Language Models’ (LLMs) anthropomorphic capabilities has become increasingly important in contemporary discourse. Utilizing the emotion appraisal theory from psychology, we propose to evaluate the empathy ability of LLMs, i.e., how their feelings change when presented with specific situations. After a careful and comprehensive survey, we collect a dataset containing over 400 situations that have proven effective in eliciting the eight emotions central to our study. Categorizing the situations into 36 factors, we conduct a human evaluation involving more than 1,200 subjects worldwide. With the human evaluation results as references, our evaluation includes seven LLMs, covering both commercial and open-source models, including variations in model sizes, featuring the latest iterations, such as GPT-4, Mixtral-8x22B, and LLaMA-3.1. We find that, despite several misalignments, LLMs can generally respond appropriately to certain situations. Nevertheless, they fall short in alignment with the emotional behaviors of human beings and cannot establish connections between similar situations. Our collected dataset of situations, the human evaluation results, and the code of our testing framework, i.e., EmotionBench, are publicly available at https://github.com/CUHK-ARISE/EmotionBench.
Apathetic or Empathetic? Evaluating LLMs' Emotional Alignments with Humans
[ "Jen-tse Huang", "Man Ho LAM", "Eric John Li", "Shujie Ren", "Wenxuan Wang", "Wenxiang Jiao", "Zhaopeng Tu", "Michael Lyu" ]
NeurIPS.cc/2024/Conference
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=pwLdvYIMrF
@inproceedings{ yeongbin2024trainattention, title={Train-Attention: Meta-Learning Where to Focus in Continual Knowledge Learning}, author={Seo Yeongbin and Dongha Lee and Jinyoung Yeo}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=pwLdvYIMrF} }
Previous studies on continual knowledge learning (CKL) in large language models (LLMs) have predominantly focused on approaches such as regularization, architectural modifications, and rehearsal techniques to mitigate catastrophic forgetting. However, these methods naively inherit the inefficiencies of standard training procedures, indiscriminately applying uniform weight across all tokens, which can lead to unnecessary parameter updates and increased forgetting. To address these shortcomings, we propose a novel CKL approach termed Train-Attention-Augmented Language Model (TAALM), which enhances learning efficiency by dynamically predicting and applying weights to tokens based on their usefulness. This method employs a meta-learning framework that optimizes token importance predictions, facilitating targeted knowledge updates and minimizing forgetting. We also observe that existing benchmarks do not clearly exhibit the trade-off between learning and retaining, so we propose a new benchmark, LAMA-ckl, to address this issue. Through experiments conducted on both newly introduced and established CKL benchmarks, TAALM achieves state-of-the-art performance over the baselines, and also shows synergistic compatibility when integrated with previous CKL approaches. The code and the dataset are available online.
Train-Attention: Meta-Learning Where to Focus in Continual Knowledge Learning
[ "Seo Yeongbin", "Dongha Lee", "Jinyoung Yeo" ]
NeurIPS.cc/2024/Conference
2407.16920
[ "https://github.com/ybseo-academy/TAALM" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster
null
https://openreview.net/forum?id=pwKkNSuuEs
@inproceedings{ wen2024abstracted, title={Abstracted Shapes as Tokens - A Generalizable and Interpretable Model for Time-series Classification}, author={Yunshi Wen and Tengfei Ma and Tsui-Wei Weng and Lam M. Nguyen and Anak Agung Julius}, booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems}, year={2024}, url={https://openreview.net/forum?id=pwKkNSuuEs} }
In time-series analysis, many recent works seek to provide a unified view and representation for time-series across multiple domains, leading to the development of foundation models for time-series data. Despite diverse modeling techniques, existing models are black boxes and fail to provide insights and explanations about their representations. In this paper, we present VQShape, a pre-trained, generalizable, and interpretable model for time-series representation learning and classification. By introducing a novel representation for time-series data, we forge a connection between the latent space of VQShape and shape-level features. Using vector quantization, we show that time-series from different domains can be described using a unified set of low-dimensional codes, where each code can be represented as an abstracted shape in the time domain. On classification tasks, we show that the representations of VQShape can be utilized to build interpretable classifiers, achieving comparable performance to specialist models. Additionally, in zero-shot learning, VQShape and its codebook can generalize to previously unseen datasets and domains that are not included in the pre-training process. The code and pre-trained weights are available at https://github.com/YunshiWen/VQShape.
Abstracted Shapes as Tokens - A Generalizable and Interpretable Model for Time-series Classification
[ "Yunshi Wen", "Tengfei Ma", "Tsui-Wei Weng", "Lam M. Nguyen", "Anak Agung Julius" ]
NeurIPS.cc/2024/Conference
[ "https://github.com/yunshiwen/vqshape" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
poster